You appear to have strong opinions on this, so it's probably not worth arguing further, but I disagree with you completely. If people are mistaking it, that is because the term is being used improperly; the plain meaning of the two words simply does not apply. AGI didn't even gain traction as a term until recently, when people who were actually working on strong AI had to find a way to keep communicating about their work, because "AI" had lost all of its original meaning.
Also, LLaMA is one of the LLMAIs, not a “common type” of LLM. Pretty much confirms you don’t know what you’re talking about here…
Universal Basic Income will never work because it doesn't account for game theory properly. People who say that paying everyone a living wage wouldn't cause inflation simply do not understand that every landlord would jack up their prices, producers of food and other essentials would manufacture fake scarcity to keep profits up, and no difficult, awful jobs would get done consistently enough for society to function. It really pissed me off when Kurzgesagt covered this topic and handwaved that no inflation would occur under a UBI system, and I hear the same thing on reddit all the time. Yes, it would. Competitive self-interest doesn't just leave human nature because you give people some money to live on.
What we really need if we want to fix the Great Depression 2.0 is pragmatic public services. The government needs to heavily fund new housing to increase supply (implicitly dropping prices), and repeal the laws that force corporations to prioritize short-sighted shareholder profits over long-term stability, so people have more leverage when negotiating their own salaries. Beyond that, I do not think these are problems that can be solved with more government oversight, and while capitalism is not a perfect system, its logic at least works better than communism's. Communism failed faster and harder than capitalism for a reason; it's not the cure-all people online seem to think it is.
We should call them LLMAIs (la-mize, like llamas) to really specify what they are.
And to their point, I think the "intelligence" in the modern wave of AI is severely lacking. There is no reasoning or learning, just a brute-force fuzzy training pass that remains frozen at a specific point in time and only approximates what an intelligent actor would say by referencing massive amounts of "correct response" data. I've heard AGI bandied about as the thing people really meant when they said AI a few years ago, but I'm kind of hoping the AI term stops being watered down with this nonsense. ML is ML; it's wrong to call it a subset of AI when AI has its own separate connotations.
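To make the "frozen at a point in time" objection concrete, here's a toy sketch in plain Python. All names are hypothetical, and this is a deliberately crude analogy (a fuzzy nearest-neighbor lookup), not a claim about how LLMs work internally: the point is only that a model whose parameters are fixed after training can interpolate over its training data but never updates itself at inference time.

```python
# Toy "frozen model": a fuzzy lookup over a fixed training set.
# Analogy only -- illustrates the frozen-after-training argument above.
TRAINING_DATA = {
    "what is 2+2": "4",
    "capital of france": "paris",
    "hello": "hi there",
}

def similarity(a: str, b: str) -> float:
    """Crude bag-of-words overlap, standing in for fuzzy matching."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def respond(prompt: str) -> str:
    # Inference only reads the frozen data and never writes to it,
    # so any "learning" stopped the moment training ended.
    best = max(TRAINING_DATA, key=lambda key: similarity(prompt, key))
    return TRAINING_DATA[best]

print(respond("hello there"))       # closest training example wins
print(respond("what is 2+2 then"))  # approximate match, no reasoning
```

However many times you call `respond`, `TRAINING_DATA` never changes; that static-snapshot behavior is the property being criticized, just on a trivially small scale.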