the premise seems flawed, i think.
i feel what he’s saying is: we suck at optimizing gfx performance now because gamers deem AI upscale quality as passable
this feels like the opposite of what the PS poll says: that gamers enable performance mode more because the priority is stable frames rather than shiny anti-aliasing/post-processing.
I don’t see how that’s the case. Most people prefer more fps over image quality, so minor artifacting from DLSS is preferable to the game running much slower with cleaner image quality. That is consistent with the PS data (which wasn’t a poll, to my understanding).
I also dispute the other assumption, that “we suck at optimizing performance”. The difference between now and the days of the 1080 Ti, when you could just max out games and call it a day, is that we’re targeting 4K at 120fps and up, as opposed to every game maxing out at 1080p60. There is no single performance target on PC anymore; every game can be cranked higher. We are still using Counter-Strike for performance benchmarks, running at 400-1000fps. There will never be a set performance target again.
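Just to put rough numbers on how far the goalposts have moved, here’s some purely illustrative arithmetic on raw pixel throughput alone:

    # Raw pixels per second at each target, ignoring per-pixel shading cost
    old_target = 1920 * 1080 * 60     # 1080p60 -> ~124 Mpix/s
    new_target = 3840 * 2160 * 120    # 4K120   -> ~995 Mpix/s
    print(new_target / old_target)    # 8.0x the pixel throughput

And that 8x is before per-pixel cost goes up with ray tracing, heavier materials and so on, so the real gap is a lot wider.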
If anything, optimization now is sublime. It’s insane that you can run most AAA games on both a Steam Deck and a 4090 out of the same set of drivers and executables. That is unheard of. Back in the day the types of games you could run on both a laptop and a gaming PC looked like WoW instead of Crysis. We’ve gotten so much better at scalability.
Most people prefer more fps over image quality, so minor artifacting from DLSS is preferable to the game running much slower with cleaner image quality.
I don’t think we’re all that different on this point. AI upscaling is passable enough that gamers will choose it. If presented with a better, non-artifacting option, gamers will choose that, since the goal is performance and not AI. If the stat is from PS data rather than a poll, I think it only strengthens the point that users want performance more.
There will never be a set performance target again.
It’s not that there’s no set performance target. The difference is that there was one target in the Counter-Strike era versus many now, and games just can’t keep up with all of them. Saying “there will never be a set performance target” is just washing your hands of the problem when publishers/directors won’t set directions and priorities for which performance point to hit.
It might be that your point is optimizing for scalability, and that is fine too.
Yeah, optimizing for scalability is the only sane choice from the dev side when you’re juggling hardware ranging from the Switch and the Steam Deck to the bananas nonsense insanity that is the 4090. And like I said earlier, often you don’t even get different binaries or drivers for those, the same game has to support all of it at once.
It’s true that there are still some set targets along the way. The PS5 is one, the Switch is one if you support it, the Steam Deck is there if you’re aiming to support low power gaming. But that’s beside the point; the PS5 alone requires two to three setups to be designed, implemented and tested. PC compatibility testing is a nightmare at the best of times, and with a host of display refresh rates, arbitrary resolutions and all sorts of integrated and dedicated GPUs from three different vendors expected to get support, it’s outright impossible to do granularly. The idea that PC games have become less supported or supportive of scalability is absurd. I remember the days when a game would support one GPU. As in, the one. If you had any other one it was software rendering at best. Sometimes you had to buy a separate box for each supported card.
We got used to the good stuff during the 900 and 1000 series from Nvidia basically running console games maxed out at 1080p60, but that was a very brief slice of time; it’s gone and it’s not coming back.
RIP the future of high end computer graphics. 1972 to 2024. You had a good run.
I don’t know enough, but it sounds like the Unix kernel will need a new way to separately give access to TPUs.
Lol, I’m so happy I went with AMD.
Great, so once all games require AI upscaling, you’re left out of any ability to play them.
I mean… OK, but AMD just revealed a new set of AI-powered upscaling libraries along with Sony for the PSPro and is on record saying they’re backing out of high end gaming hardware to pivot to data center hardware, so… I hope you have more reasons than this, because I don’t think they disagree.
I do. Linux support is the big one. Nvidia has had their arm twisted and they still only barely changed their approach to their Linux driver.
This feels like it’s establishing a precedent for widespread adoption/implementation of AI into consumer devices. Manufactured consent.
“We compute one pixel… we hallucinate, if you will, the other 32.”
Between this and things like Sora, we are doomed to drown in illusions of our own creation.
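For scale, here’s a back-of-the-envelope on how a ratio like “one computed, 32 hallucinated” can come about. The internal resolutions and frame-generation factor below are assumptions for illustration, not necessarily the exact configuration behind the quote:

    # Fraction of displayed pixels that are traditionally rendered,
    # under assumed upscaling + frame-generation settings (illustrative only)
    output_px = 3840 * 2160                      # 4K output frame

    def computed_fraction(internal_px, frames_out_per_rendered):
        return internal_px / (output_px * frames_out_per_rendered)

    perf = computed_fraction(1920 * 1080, 4)     # 1080p internal, 4x frame gen
    ultra = computed_fraction(1280 * 720, 4)     # 720p internal, 4x frame gen
    print(1 / perf, 1 / ultra)                   # ~16.0, ~36.0

Depending on the internal resolution and how many frames get generated per rendered one, you land somewhere between roughly one rendered pixel in 16 and one in 36, which is the ballpark the quote is gesturing at.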
If the visuals are performant and consistent, why do we care? I have always been baffled by the obsession with “real pixels” in some benchmarks and user commentary.
AI upscales are so immediately obvious and look like shit. Frame “generation” too. Not sour grapes, my card supports FSR and fluid motion frames, I just hate them and they are turned off.
That’s fine, but definitely not a widespread stance. Like somebody pointed out above, most players are willing to lose some visual clarity for the sake of performance.
Look, I don’t like the look of post-process AA at all. FXAA just seemed like a blur filter to me. But there was a whole generation of games out there where it was that or somehow finding enough performance to supersample a game and then endure the spotty compatibility of having to mess with custom unsupported resolutions and whatnot. It could definitely be done, particularly in older games, but for a mass market use case people would turn on SMAA or FXAA and be happy they didn’t have to deal with endless jaggies on their mid-tier hardware.
This is the same thing: it’s a remarkably small visual hit for a lot more performance, and particularly on higher resolution displays a lot of people are going to find it makes a lot of sense. Getting hung up on analyzing just “raw” performance, as opposed to weighing the final results independently of the method used to get there, makes no sense. Well, it makes no sense industry-wide; if you happen to prefer other ways to claw back that performance you’re more than welcome to deal with bilinear upscaling, lower in-game settings or whatever you think your sweet spot is, at least on PC.
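For anyone curious what the “dumb” baseline in that comparison actually looks like, here’s a minimal bilinear upscale sketch (NumPy, edge-clamped, assuming an (H, W, C) float image). This is roughly the non-AI fallback you’re weighing DLSS/FSR against when you drop the internal resolution:

    import numpy as np

    def bilinear_upscale(img, out_h, out_w):
        # Naive bilinear upscale of an (H, W, C) image array.
        in_h, in_w = img.shape[:2]
        # Map each output pixel centre back into input coordinates.
        ys = (np.arange(out_h) + 0.5) * in_h / out_h - 0.5
        xs = (np.arange(out_w) + 0.5) * in_w / out_w - 0.5
        y0 = np.clip(np.floor(ys).astype(int), 0, in_h - 1)
        x0 = np.clip(np.floor(xs).astype(int), 0, in_w - 1)
        y1 = np.clip(y0 + 1, 0, in_h - 1)
        x1 = np.clip(x0 + 1, 0, in_w - 1)
        wy = np.clip(ys - y0, 0, 1)[:, None, None]
        wx = np.clip(xs - x0, 0, 1)[None, :, None]
        top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
        bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
        return top * (1 - wy) + bot * wy

    # e.g. stretch a 1080p frame to 4K
    frame = np.random.rand(1080, 1920, 3)
    print(bilinear_upscale(frame, 2160, 3840).shape)  # (2160, 3840, 3)

It’s cheap and stable, but it can’t invent detail that was never rendered, which is exactly the gap the AI upscalers are trying to close.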