Built on unearned hype.
I really don’t understand the hype about AI in its current state.
It’s not for you. It’s for corporations who want to fire half their staff and replace them with an algorithm. That’s why it has such a high valuation.
Those corporations are about to find out the fun way that these algorithms, in their current and near-future states, cannot replace human beings.
Well, except for maybe lazy copywriters who pump out pointless listicles and executives who do - whatever it is they do - but any non-trivial task requiring creativity and understanding is beyond these tools.
You’re assuming that they care about running a viable service or product.
- Computers might be good at numbers and typesetting, but we’ll always need human secretaries and phone operators to keep things running.
- They might be able to beat a novice, but no computer will ever beat a human grandmaster at chess.
- Okay, then they can’t beat humans at Go or poker.
- Any non-trivial task requiring creativity and understanding is beyond these tools. ← you are here
- AI-run corporations will never be able to outcompete ones with human boards and CEOs.
- An AI scriptwriter could never win an Oscar.
- I’m voting for the human candidate for president, I don’t think the AI one is up to the task.
They could fire 3 layers of management without spending a dime while increasing productivity.
It’s not about the technology; it’s the venture industry trying to figure out the next unicorn, which they’ve been hunting for the last ten years.
Cloud? Neva heard of it! AI is where the money is at now.
Buzzwords, that’s all they are.
I wouldn’t say “the cloud” is exactly in the same realm. It’s broad and definitely had its heyday being thrown around in marketing, but it’s a very real facet of modern software. More specialized and actually useful AI will probably end up in a similar place eventually.
I think I’m talking myself out of my original point though lol. Kind of conflated LLMs and AI at first. I just wish LLMs weren’t the only things with money behind them.
Server farms are the real money maker. Doesn’t matter the fad, they’ll need processing power from somewhere.
Honestly, I can say I don’t really get it either. I would only use the open source models anyway, but it just seems rather silly from what I can tell.
I feel like the last few months have been an inflection point, at least for me. Qwen 2.5 and the new Command-R really make a 24GB GPU feel “dumb, but smart,” useful enough that I pretty much always keep Qwen 32B loaded on the desktop for its sheer utility.
It’s still in the realm of enthusiast hardware (aka a used 3090), but hopefully that’s about to be shaken up with bitnet and some stuff from AMD/Intel.
Altman is literally a vampire though, and thankfully I think he’s going to burn OpenAI to the ground.
What do you think about the possibility of decentralized AI through blockchain, where you could pay tokens or something like that to rent GPUs and run your AI for as long as you want, instead of having to buy all the hardware and assemble it yourself?
Are you trying to solve science with it or something? You are supposed to turn carefully worded sentences into funny pictures and show people.
Maybe that explains it. Because I am blind, pictures mean very little to me. I think image memes are one of the most abhorrent things to ever exist, because I miss out on so much due to them.
That’s because we’re using it wrong. It’s not a genie you go to for answers to your problems, it’s Mighty Putty. You could build a house out of it, but it’s wildly expensive and not at all worth it. But if you want to stick a glass bottle to a tree, or fix a broken plastic shell back together, it’s great
For example, you can have it do a web search, read through the results to see if it actually contains what you’re looking for, then summarize what it found and let you jump right there to evaluate yourself. You could have it listen to your podcasts and tag them by topic. You could write a normal program to generate a name and traits of a game character, then have the AI write flavor text and dialog trees for quest chains
Those are some projects I’ve used AI for - specifically, local AI running on my old computer. I’m looking to build a new one
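For concreteness, here’s a minimal sketch of that last game-character pattern. I’m assuming a local model served behind an OpenAI-compatible endpoint; the URL, model name, and prompt are just placeholders, not a specific setup.

```python
import random
from openai import OpenAI

# Point the standard OpenAI client at an assumed local server (llama.cpp
# or similar) that speaks the same API; URL and model name are made up.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# Plain old code decides the hard facts about the character...
name = random.choice(["Mira", "Oskar", "Tane"])
trait = random.choice(["greedy", "superstitious", "war-weary"])

# ...and the model only writes the unstructured flavor text around them.
resp = client.chat.completions.create(
    model="local-model",
    messages=[{
        "role": "user",
        "content": f"Write two sentences of flavor text for {name}, "
                   f"a {trait} shopkeeper in a fantasy village.",
    }],
)
print(resp.choices[0].message.content)
```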
I also use ChatGPT to write simple but tedious code on a weekly basis for my normal job - things like “build a class to represent this db object”. I don’t trust it to do anything that’s not straightforward - I don’t trust myself to do anything tedious
The AI is not an expert, I am. The AI is happy to do busy work, every second of it increases my stress level. AI is tireless, it can work while I sleep. AI is not efficient, but it’s flexible. My code is efficient, but it is not flexible
As a part of a system, AI is the link between unstructured data and code, which needs structure. It lets you do things that would have required a 24/7 team of dozens of employees. It also is unable to replace a single human - just like a computer
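A tiny sketch of what I mean by that link, under the same assumptions as above (local OpenAI-compatible server, placeholder names): the model’s only job is to turn free text into the structure the rest of the program expects.

```python
import json
from openai import OpenAI

# Same assumed local setup as before; URL and model name are placeholders.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

note = "Meeting moved to Thursday 3pm, Dana will bring the quarterly numbers."

# Ask for structure, not an answer: the model converts unstructured text
# into JSON so ordinary code can take over from there.
resp = client.chat.completions.create(
    model="local-model",
    messages=[{
        "role": "user",
        "content": 'Reply with JSON only, in the form '
                   '{"day": ..., "time": ..., "owner": ...}, extracted from: ' + note,
    }],
)

event = json.loads(resp.choices[0].message.content)  # structured data for normal code
print(event["day"], event["time"], event["owner"])
```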
That’s my philosophy at least, after approaching LLMs as a new type of tool and studying them as a developer. Like anything else, I ran it on my own computer and poked and prodded it until I saw the patterns. I learned what it could do, and what it struggled to do. I learned how to use it, I developed methodologies. I learned how to detect and undo “rampancy”, a number of different failure states where it degrades into nonsense. And I learned how to use it as another tool in my toolbox, and I pride myself on using the right tool for the job
This is a useful tool - I repeatedly have used it to do things I couldn’t have done without it. This is a new tool - artisans don’t know how to use it yet. I can build incredible things with this tool with what I know now, and other people are developing their own techniques to great effect. We will learn how to use this tool, even in its current state. It will take time, its use may not be obvious, but this is a very useful tool
I like it. I use it every day. Much faster and better than sifting through garbage websites to find answers.
You guys want me to invest? I’m guaranteed to lower the stock value by ~30% within about a month with my shidas touch
Yeah, do it!
I had a similar touch when I was younger. I’ve worked for Circuit City, Toys ‘R’ Us, and Blockbuster Video. Sadly, Best Buy somehow survived.
Alrighty then, just need to wait for fucking novideo stock to get back to what I bought it at and I’ll dump and switch (don’t have anything spare to play around with sadly)
I hear you though, I also had (have?) a similar physical effect on things like toys as a kid, or some electronics since then…just break shit randomly without trying, especially if I don’t own it.
I can’t wait for this current “A.I.” craze to go away. The tech is doofy, useless, wasteful, and a massive energy consumer. This is blockchain nonsense all over again, though that still hasn’t fully died yet, unfortunately.
Like blockchain there is some niche usefulness to the technology, but also like blockchain it’s being applied to a myriad of things it is not useful for.
Also, it’s not fucking AI, is it? I actually find the blatant misuse of this term incredibly annoying, to be honest.
Arguably you are the one misusing the term. Even painfully mundane tasks like the A* pathfinding algorithm fall under the umbrella of artificial intelligence. It’s a big, big (like, stupidly big) field.
You are right that it’s not AGI, but very few people (outside of marketing) claim that it is.
The term AI was coined in 1956 at a computer science conference and was used to refer to a broad range of topics that certainly would include machine learning and neural networks as used in large language models.
I don’t get the “it’s not really AI” point that keeps being brought up in discussions like this. Are you thinking of AGI, perhaps? That’s the sci-fi “artificial person” variety, which LLMs aren’t able to manage. But that’s just a subset of AI.
It is: machine learning, neural networks, and all the other parts of LLMs and generative algorithms like Midjourney etc. are all fields of artificial intelligence. The AI Effect just means the goalposts for what people think of as “proper” AI are constantly moving.
Drugs (Silk Road), scams & malware (pay 5 Bitcoin to unlock PC), money laundering & pump-and-dumps (unregulated market), and Nvidia hype (should have bought AMD at $5)
“we ran out of useful things to do with computing at the consumer level and now we are inventing problems” - “just bill’em” gates, 1984.
This work will have lots of applications in the future. I personally stay as far away from it as I can because I just have zero need for it to write soulless birthday card messages for me, but to act like the work is doing nothing is kinda stupid.
Every stage it’s been at, people would say “oh, this can’t even do X” and then it could, and they’d say “oh, it can’t do Y” and then it could, and they’d say… do I really need to go on?
The biggest issue with it all right now, for me anyway, is that we’re trying to use it for the absolute dumbest shit imaginable, and investors are throwing tonnes of money (money that could solve real problems we don’t need AI for) into the grinder while poverty and climate change run rampant around us.
It has its uses, but it is being massively overhyped.
Having trialled Copilot and a few other AI tools in my workplace, I can confidently say it’s a minor productivity booster.
Whereas I have been finding uses for it to produce things that I simply could not have produced myself without it, making it far more than a mere “productivity boost.”
I think people are mainly seeing what they want to see.
apparently so far the research disagrees with the productivity claims https://www.cio.com/article/3540579/devs-gaining-little-if-anything-from-ai-coding-assistants.html
The total market cap across all cryptocurrencies is currently about 2.5 trillion dollars, which isn’t far below its all-time high of 3 trillion. If that’s something you’d say “hasn’t fully died yet” then AI’s not going to go away any time soon by that standard.
Oh good! I remember when they said they couldn’t afford to pay independent copyright owners. Now they can pay for the work they stole!
When I copy and paste someone else’s work, I get called a plagiarist and get fired.
When OpenAI creates a robot that does it really really really fast, they make enough money to feed the planet hundreds of times over.
I don’t want to live on this planet any more.
Sure, but I think recognizing someone when they accomplish something of value is important regardless of the economic system in place.
And in this capitalistic society, OpenAI is doing nothing but literally capitalizing on the hard work of thousands of individuals without giving them any form of recognition.
Capitalism is incompatible with meritocracy.
It rewards ruthless capture and enclosure of other people’s hard work and surplus value.
Read what Disney did to folk culture. Read what Edison did to his underlings; it is all a repeat of the “enclosures”, the thefts of our commons for private profits.
If you don’t know what the enclosures were, read this macabre story and focus on the people who attacked the fences and paid with their lives https://en.wikipedia.org/wiki/Elizabeth_Sugrue