Anymore = any longer (about time)
Any more = any additional amount or degree
They’re two different things.
They don’t make games that look like that anymore, even though we thought the graphics couldn’t get any more realistic back then.
Bruh, English has thousands of words with multiple meanings and the same spelling. You understood the meaning.
“Oh no, they’re trying to teach me about grammar! I must resist knowledge!”
You know everyone was scanning your comment hard to see if you made any grammar mistakes.
@ThirdWorldOrder @otp Right. Lol
It is my opinion that we reached peak graphics 6 or 7 years ago, when the GTX 1080 was king. Why?
- Games from that era look gorgeous (e.g. Shadow of the Tomb Raider), yet were well optimized to run high/ultra at FHD on an RX 570.
- We didn’t need to rely on fakery like DLSS and frame generation to get playable frame rates. If anything, people used to supersample for the ultimate picture quality. Even upping the rendering scale to 1.25 made everything so crisp.
- MSAA and SMAA antialiasing look better than today’s TAA, and somehow even the TAA of that era doesn’t seem as blurry. These days you might as well use FXAA.
Graphics today seem ass-backward to me: render at 60…70% scale to get good framerates, with FX often rendered at even lower resolution, slap on overly blurry TAA to hide the jaggies, then use some upsampling trickery to get to the native resolution. It’s still blurry, so squirt some sharpening and noise on top to create an illusion of detail. And it still runs like crap, so throw in frame interpolation for the illusion of a higher frame rate.
I think it’s high time we could run non-raytraced graphics at 4K native and raytraced graphics at 2.5K native on 500€ MSRP GPUs, with no trickery involved.
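To put rough numbers on the render-scale rant above: shading cost roughly tracks pixel count, so 65% scale means shading well under half the output pixels, and that missing detail is exactly what the upscaler has to invent back. A minimal sketch (the 0.65 and 1.25 scales are just illustrative picks, not from any particular game):

```cpp
#include <cstdio>

int main() {
    const int outW = 3840, outH = 2160;          // 4K output resolution
    const double scales[] = {0.65, 1.00, 1.25};  // upscaled, native, supersampled
    for (double s : scales) {
        int w = static_cast<int>(outW * s);
        int h = static_cast<int>(outH * s);
        double shaded = 100.0 * (double)w * h / ((double)outW * outH);
        std::printf("scale %.2f -> %dx%d, %.0f%% of output pixels actually shaded\n",
                    s, w, h, shaded);
    }
    // At 0.65 scale only ~42% of the pixels get shaded; the rest is reconstruction.
    return 0;
}
```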
GPUs are getting better, but demand from the crypto and ML/AI markets means they can just jack up the price of every new card higher than the last, so prices have stopped dropping with each new generation.
- We didn’t need to rely on fakery like DLSS and frame generation to get playable frame rates.
If you truly believe what you wrote, then you should never look into the details of how a game world is rendered. It’s fakery stacked upon fakery that somehow looks great. If anything, the current move to ray tracing with upscaling is less fakery than what came before.
There’s a saying in computer graphics: if it looks right, it is right. Meaning you shouldn’t worry whether the technique makes a mockery of how light actually works, as long as the viewer won’t notice.
Sure, all graphics is about creating an illusion.
But there’s a stark difference between optimizations like culling, occlusion planes, LODs, and half-res rendering of costly FX (like AO), and using a crutch like lowering the rendering resolution of the whole frame to try and make up for bad optimization or crap hardware. DLSS has its place on 150…200€ entry-level GPUs trying to drive a 2.5K monitor, not 700€ “midrange” cards.
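A toy cost model of the distinction being drawn here: dropping one expensive effect to half resolution only discounts that pass, while dropping the whole frame’s render scale discounts every pass, including the final image you actually look at. All the millisecond figures below are made up for illustration:

```cpp
#include <cstdio>

// Toy per-pass frame costs in ms at native resolution (made-up numbers).
struct Pass { const char* name; double costMs; double resScale; };

int main() {
    const Pass passes[] = {
        {"geometry", 4.0, 1.0},
        {"lighting", 5.0, 1.0},
        {"AO",       3.0, 0.5},   // targeted optimization: only this pass is half-res
        {"post",     2.0, 1.0},
    };

    double targeted = 0.0;
    for (const Pass& p : passes)
        targeted += p.costMs * p.resScale * p.resScale;  // cost scales with pixel count
    std::printf("half-res AO only: %.2f ms (final image stays native-sharp)\n", targeted);

    const double frameScale = 0.65;  // crutch: the whole frame drops to 65% scale
    double wholesale = 0.0;
    for (const Pass& p : passes)
        wholesale += p.costMs * frameScale * frameScale;
    std::printf("65%% render scale: %.2f ms (every pass, and the output, gets blurrier)\n",
                wholesale);
    return 0;
}
```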
But there’s a stark difference between optimizations like culling, occlusion planes, LODs, and half-res rendering of costly FX (like AO),
and using a crutch like lowering the rendering resolution of the whole frame to try and make up for bad optimization or crap hardware.
There is no stark difference if you describe the techniques objectively instead of twisting them into what you feel they’re like.
There are so many steps in the render pipeline where native resolution isn’t used, yet I don’t hear the crowd complaining about shadow map size or about reflections being half-res. Upscaling is just another tool that lets us create better-looking frames at playable refresh rates. Compare Alan Wake or Avatar with DLSS to any other game without DLSS and they will still come out on top.
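To make the “native res is already the exception” point concrete, here’s a sketch listing the kinds of render targets a deferred frame might allocate; the sizes are assumed typical values, not taken from any specific engine:

```cpp
#include <cstdio>

int main() {
    // Illustrative render targets for one frame at 2560x1440 output;
    // sizes are typical guesses, not pulled from any particular engine.
    struct Target { const char* name; int w, h; };
    const Target targets[] = {
        {"shadow map (per light)", 2048, 2048},  // fixed size, unrelated to screen res
        {"planar reflections",     1280,  720},  // commonly half-res
        {"ambient occlusion",      1280,  720},  // commonly half-res
        {"volumetric fog slices",   160,   88},  // froxel grids are tiny
        {"main color buffer",      2560, 1440},  // the only pass people call "native"
    };
    for (const Target& t : targets)
        std::printf("%-24s %4d x %-4d\n", t.name, t.w, t.h);
    return 0;
}
```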
DLSS has its place on 150…200€ entry-level GPUs trying to drive a 2.5K monitor, not 700€ “midrange” cards.
Just because you’re unhappy with Nvidia’s pricing strategy doesn’t mean you should slander new render techniques. You’re mixing two different topics.
Tbf these games were made with crtvs in mind, and crtvs blurred the edges, making things look smoother. They only look so blocky nowadays because newer TVs have better resolution, so you can clearly see all the blocky edges.
I think it depends… Definitely for 2D games, but, for instance, the heads and fists in GoldenEye (N64) always looked blocky
I have never in my life seen someone refer to CRT TVs as crtvs and it’s really fucking with my head lmao
It’s a habit I picked up from my dad lol
He always called them crtvs because he thought the “tube” part of cathode ray tube was unnecessary when using the acronym. You know it’s a tube because what else would a cathode ray be in?
I remember when they started talking about “photo realistic” graphics…whatever that actually means.
In the flight-sim world, “photo realistic” meant actual aerial photos used as textures for the ground.
Looked passable…
…From 30000 ft altitude. From 1000 ft it was laughably horrible🙃
I was little when the OG Ace Combat game came out on the PS1 right? Polygonal jet engines & everything lol
Until I was like 11, whenever I saw real pictures of actual aircraft that were in the game I thought they were fake because their engines weren’t polygonal enough 🤣🤣🤣