Why?
But say, just for example's sake, we didn't read the article - what would a quick summary of why exactly Palm Pilots are relevant to IMAX entail?
It takes two paragraphs to explain it in the article because IMAX is really complicated. The important thing is that it’s mainly a monitoring device and projectionists usually just have to leave it alone and let it do its thing.
The article doesn’t actually say what it does, just that its function is to ensure the film loads at a constant speed, which, it is implied, keeps the audio in sync. It doesn’t say how it does this, how it’s connected, or whether it has a true control function or is just a monitoring device the projectionist uses to make adjustments (via another part of the system) should the operation de-sync. The only operator they talked to claims to never have interacted with the device.
Its job is to keep the QTRU moving at a consistent speed and to help keep the film’s video in sync with its audio.
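Purely as an illustrative sketch of that monitoring-only role (the article doesn't say how the device actually works, so the sensor, the 24 fps target, and the tolerance below are all assumptions I'm making up), a "watch and warn, never adjust" loop might look something like this in Python:

```python
import random
import time

# All of this is hypothetical - the article gives no real interfaces,
# units, or thresholds for the QTRU monitoring device.
TARGET_SPEED_FPS = 24.0    # assumed nominal film transport speed
DRIFT_TOLERANCE = 0.05     # assumed acceptable deviation before warning


def read_film_speed() -> float:
    """Stand-in for polling a real speed sensor; returns a simulated reading."""
    return TARGET_SPEED_FPS + random.uniform(-0.1, 0.1)


def monitor(polls: int = 10, interval: float = 1.0) -> None:
    """Pure monitoring: report drift, never correct it.

    Any actual correction would be left to the projectionist or another
    part of the system, which is the distinction being discussed above.
    """
    for _ in range(polls):
        speed = read_film_speed()
        drift = speed - TARGET_SPEED_FPS
        if abs(drift) > DRIFT_TOLERANCE:
            print(f"WARNING: transport at {speed:.2f} fps (drift {drift:+.2f})")
        time.sleep(interval)


if __name__ == "__main__":
    monitor()
```

Again, that's only to show the difference between monitoring and controlling; the real device could work completely differently.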
Why emulate a piece of 20yr old PalmPilot software instead of writing a new one?
The answer is simple: it works. And it’s not like it’s a booming industry in need of reinvention. There are only 30 theaters worldwide that can even show a full 70mm print like Oppenheimer, 19 of them in the US. Most IMAX experiences are digital now.
Because nobody bothered to write the same software for more modern hardware. As long as it works, there’s no urgent need to upgrade. Eventually it’s going to become hard to find hardware that can still run your ancient software, so at that point they’ll probably replace the whole thing with a Raspberry Pi or something.
They’ve already replaced the hardware. The article shows a Palm Pilot emulator running on an iPad now.
The server it talks to was probably always some type of Linux box.
Oh, they decided to emulate it then. Pretty neat. If it ain’t broken, don’t fix it. If you can kick the can down the road, go for it. Why do anything today that can also be ignored tomorrow?
tldr: if it ain’t broke, don’t fix it
I love little things like this. Emulating old equipment to run old equipment.
Honestly this shit is so cool. I love when you get to see under the hood of some big fancy tech, and it’s all being run by like a TI-83 or whatever.
I remember back in the day there were companies still running DOS software on MS-DOS decades after it had been replaced by standalone Windows (i.e. the versions that didn’t need DOS under them, like NT and newer).
And it was always the same reason: it works fine for us.
Not a criticism: I’ve tended to work at the forefront of tech and took two lessons from that in this regard:
- Latest is seldom Greatest. Some of it might become Greatest (after it matures enough and the kinks have been worked out, so thank you, early adopters, for enduring all that), but a lot of it is just New, not even better overall, and even superior stuff can end up hit-and-miss if it doesn’t get adopted widely enough and just fizzles out. Being an early adopter is almost never worth it IMHO.
- If it works, don’t replace it without an actual concrete need, either now or foreseen in the near future. In tech, “old” is just another word for “it has reliably worked for many years” (hence why it’s still around), and going for “new” only for its newness is not really a logical engineering choice. Absolutely, “it needs frequent maintenance or updates and we’re having trouble finding the people or the parts to do it” is a valid need; “there are other, newer devices that do the same thing” is not.
I suppose that because this is something I do professionally, I end up making engineering choices informed by it for my personal tech, rather than taking the consumerist, fad-following upgrade path.
The only issue with your second point is that it can eventually become a quagmire when you do need to upgrade it.
I work for a very old company that held to that philosophy for many years. And while any individual component could be looked at and seen as running fine, when they did finally decide it was time to upgrade, they were faced with needing to upgrade everything simultaneously.
All of the tech was too old, so no current tech had the sort of backwards-compatible bridge that helps you move forward. It’s like figuring out how to get your telegram system to also work on your WiFi network; nobody makes interfaces for that.
Instead of slowly and gradually replacing components over time, they’re faced with a single major overhaul that’s put the entire company at risk because they have to completely shut down for over a month.
True.
I added “foreseen in the near future” because I was thinking along those lines, but in all fairness there isn’t really a clear point where the risk of being stuck becomes such a “need” due to “foreseen in the near future” problems.
What I’ve seen done is developing a whole new system in parallel while keeping the old one in use. It’s usually significantly easier (and less risky) to reverse engineer the functional and business requirements for the new system from what the old system actually does than it is to try to put them together from what people think they want: they are seldom aware of the nitty-gritty details and tend to have only a view of the perfect-world usage of the system, not the “what if somebody makes a mistake at this stage?” human-error conditions the system must handle, among other issues - in other words, they generally “only know what they want when they see it”.
But yeah, that point of mine does rely on quite a vague judgement: it’s better than pursuing what’s new for being new, but it’s not a clear actionable rule.