None of what I write in this newsletter is about sowing doubt or “hating,” but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artificial intelligence boom — which would be better described as a generative AI boom — is (as I’ve said before) unsustainable, and will ultimately collapse. I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.
Can’t blame Zitron for being pretty downbeat in this one: given the AI bubble’s size and side-effects, it’s easy to see how its bursting could have some cataclysmic effects.
(Shameless self-promo: I ended up writing a bit about the potential aftermath as well)
I worry this is going to leave a crater in the software industry that will take a decade to fill.
I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.
Yes… ha ha ha… YES!
Microsoft is making laptops with dedicated Copilot buttons.
I think they’d rather burn their company to the ground, all the while telling their customers that they just needed to wait a little while longer, rather than admit that they got it wrong.
Are you saying that Clippy is proof I’m right or proof I’m wrong? Or am I just being unfunny and not getting the joke?
who doesn’t like a good old wordless driveby post making no statement of intent and giving no direction
I swear, some days I feel like some of these people mumble “hail eris” as they wander past
Hallucinations — which occur when a model authoritatively states something that isn’t true (or, in the case of an image or a video, produces something that looks…wrong) — are impossible to resolve without new branches of mathematics…
Finally, honesty. I appreciate that the author understands this, even if they might not have the exact knowledge required to substantiate it. For what it’s worth, the situation is more dire than this; we can’t even describe the new directions required. My fictional-universe theory (FU theory) shows that a knowledge base cannot know whether its facts are describing the real world or a fictional world which has lots in common with the real world. (Humans don’t want to think about this, because of the implication.)
There are a ton of companies selling businesses on “AI” services when, in actuality, it’s just repackaged ChatGPT. Real AI based on ML requires a ton of resources and time to train a viable model that does one thing properly. Having ChatGPT summarize or write emails for you is not AI and is not adding value to your organization. I’m waiting for that realization to hit.
Real AI based on ML
a phrase so load-bearing you could build skyscrapers out of it
Summarizing emails is a valid purpose. If you want to be pedantic about what AI means, go gatekeep somewhere else.
JFC how many novel-length emails do you get in a week?
I think a more constructive way to handle this problem is to train people to write better emails.
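For what it’s worth, the “repackaged ChatGPT” claim upthread is easy to make concrete: many of these products are little more than a canned prompt wrapped around a chat-completions call. A minimal sketch follows — the product idea, function name, and prompt are all hypothetical; the only real API detail assumed is the general shape of an OpenAI-style chat-completions payload (`model` plus a `messages` list), and the HTTP call itself is deliberately omitted so the sketch stays self-contained.

```python
# Hypothetical "AI email summarizer" product. The entire value-add is this
# prompt; there is no proprietary model and no training involved.

def build_summarize_request(email_body: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble an OpenAI-style chat-completions payload for summarizing an email."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "Summarize the user's email in three bullet points.",
            },
            {"role": "user", "content": email_body},
        ],
    }

# The "product" is the payload above plus an HTTP POST to the vendor's API;
# sending the request is omitted here on purpose.
payload = build_summarize_request("Hi team, the Q3 report is delayed until Friday.")
print(payload["model"])
```

Whether that counts as “AI” is the argument above; either way, it shows how thin the wrapper can be.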