ilinamorato
Well, if anyone was going to be a K-A-M fan, it’d be Grandpa “my forehead transplant donor was Cerean” Simpson.
Wikipedia says it’s 16,000x16,000 (which is way less than I thought). Treating “4K” loosely as 4,000 pixels on a side, that’s 16x the pixels of a 4K monitor, so 16 GPUs would make sense. And there’s a screen inside and one outside, so double that to 32. But I also can’t figure out why it needs five times that. Redundancy? Poor optimization? I dunno.
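For what it’s worth, here’s the back-of-the-envelope pixel math. The 16x figure only falls out if you treat “4K” as a square 4,000x4,000 panel; against an actual 3840x2160 monitor the ratio is closer to 31x (all of this assumes, purely for illustration, that one GPU drives about one 4K monitor’s worth of pixels):

```python
# Back-of-the-envelope pixel math. Assumes (loosely, for illustration)
# that one GPU can drive about one 4K monitor's worth of pixels.
sphere_px = 16_000 * 16_000   # interior screen: 256,000,000 px
square_4k = 4_000 * 4_000     # "4K" as a square panel: 16,000,000 px
real_4k   = 3840 * 2160       # an actual 4K monitor: 8,294,400 px

print(sphere_px / square_4k)      # 16.0  -> the 16-GPU estimate
print(sphere_px / real_4k)        # ~30.9 -> closer to 31 per screen
print(2 * sphere_px / square_4k)  # 32.0  -> inside + outside screens
```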
I have five guesses:
1. That would require more diagnostics than an LED on a monitor can provide at a reasonable cost.
2. If you’re leaving the monitor on in a situation where burn-in is likely, you’re probably not at the monitor when it matters.
3. Monitors are mission-critical hardware, so having them turn themselves off (or even just turn off certain pixels) at random is not a great idea.
4. It’s probably the OS’s job to decide when to turn off the monitor; the OS has the context to know what’s important and what isn’t, and how long it’s been since you’ve interacted with the device.
5. It’s in the monitor manufacturer’s best interest for your monitor to get burn-in, so that you have to replace it more often.
The actual answer is probably a combination of several of these, but that’s my guess.
Honestly, setting a screen timeout (or even a screen saver!) is the solution here. So the problem was more or less solved in the early 80s.
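For example, on an X11 Linux desktop the timeout is one `xset` call away. A minimal sketch (the timeout values are arbitrary placeholders, and it assumes an X11 session with the `xset` utility available):

```python
import subprocess

# Enable DPMS power management, in case it was turned off.
subprocess.run(["xset", "+dpms"], check=True)

# Blank the display after 5 minutes idle, suspend after 10,
# and power it off after 15 (values are in seconds).
subprocess.run(["xset", "dpms", "300", "600", "900"], check=True)
```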