Not only the pollution.
It has triggered an economic race to the bottom for any industry that can incorporate it. Employers will be forced to replace more workers with AI to keep prices competitive. And that is a lot of industries, especially if AI continues its growth.
The result is a lot of unemployment, which means an economic slowdown due to a lack of discretionary spending, which is a feedback loop.
There are only 3 outcomes I can imagine:
- AI fizzles out. It can't keep advancing fast enough to impress execs.
- An unimaginable wealth disparity and probably a return to something like feudalism.
- A social revolution where AI is taken out of the hands of owners and placed into the hands of workers. This would require changes we'd consider radically socialist now, like UBI and strong af social safety nets.
The second seems more likely than the third, and I consider that more or less the destruction of humanity.
Stupid AI will destroy humanity. But the important thing to remember is that for a brief, shining moment, profit will be made.
It's wild how we went from…
Critics: "Crypto is an energy hog and its main use case is a convoluted pyramid scheme"
Boosters: "Bro trust me bro, there are legit use cases and energy consumption has already been reduced in several prototype implementations"
…to…
Critics: "AI is an energy hog and its main use case is a convoluted labor exploitation scheme"
Boosters: "Bro trust me bro, there are legit use cases and energy consumption has already been reduced in several prototype implementations"
They're not really comparable. Crypto and blockchain were good solutions looking for problems to solve. They're innovative and cool? Sure, but they never had a widescale use. AI has been around for a while; it just recently got rebranded as artificial intelligence, and the same technologies were called algorithms a few years ago. And they basically run the internet and the global economy. Hospitals, schools, corporations, governments, militaries, etc. all use them. Maybe certain uses of AI are dumb, but trying to pretend that the thing as a whole doesn't have, or rather doesn't already have, genuine uses is just dumb.
I feel like you're being incredibly generous with the usage of AI here. I feel as though the post and comment above refer to LLM/image generation AI. Those "types of 'AI'" certainly don't run all those things.
The term AI is very vague because intelligence is an inherently subjective concept. If we're defining AI as something that has consciousness, then it doesn't exist, but if we're defining it as a task that a computer can do on its own, then virtually everything that is automated is run by AI.
Even generative AI models have been around for a while. For example, a lot of the news articles you read, especially those about the weather, aren't written by actual people; they're AI generated. Another example would be scientific simulations, which use AI to generate a bunch of possible scenarios based on given parameters. Yet another example would be the gaming industry: what do you think generates Minecraft worlds? The point here is that AI has been around for a while and is already being used everywhere. What we're seeing with ChatGPT and these other new models is that they're now being released for public access. It's like a democratization of AI, and a lot of good and bad things are bound to come of it. We're at the infancy stage of this now, but just like with the world wide web before it, these technologies are going to fundamentally change how we do many things from now on.
We can't fight technology; that's a losing battle. These AIs are here and they're here to stay. So strap in and enjoy the ride.
So the problem isn't the technology. The problem is unethical big corporations.
Technology is a cultural creation, not a magic box outside of its circumstances. "The problem isn't the technology, it's the creators, users, and perpetuators" is tautological.
And, importantly, the purpose of a system is what it does.
But not all users of AI are malignant or causing environmental damage.
Saying the contrary would be an unfair generalization.
I have LLM models running on an N100 chip that consume less power than the Lemmy servers we are writing on right now.
So you're using a different, specific, and niche technology (which directly benefits from, and exists because of, the technology that is the subject of critique), and acting like the subject technology behaves like yours?
"Google is doing a bad with z"
"z can't be bad, I use y and it doesn't have those problems that are already things that happened. In the past. Unchangeable by future actions."
??
Technology is a product of science. The facts science seeks to uncover are fundamental universal truths that aren't subject to human folly. Only how we use that knowledge is subject to human folly. I don't think open source or open weights models are a bad usage of that knowledge. Some of the things corporations do are bad or exploitative uses of that knowledge.
You should really try and consider what it means for technology to be a cultural feature. Think, genuinely and critically, about what it means when someone tells you that you shouldn't judge the ethics and values of their pursuits, because they are simply discovering "universal truths".
And then, really make sure you ponder what it means when people say the purpose of a system is what it does. Why that might get brought up in discussions about wanton resource spending for venture capitalist hype.
depends. for "AI" "art" the problem is both terms are lies. there is no intelligence and there is no art.
Any work made to convey a concept and/or emotion can be art. I'd throw in "intent", having "deeper meaning", and the context of its creation to distinguish between an accounting spreadsheet and art.
The problem with AI "art" is it's produced by something that isn't sentient and is incapable of original thought. AI doesn't understand intent, context, emotion, or even the most basic concepts behind the prompt or the end result. Its "art" is merely a mashup of ideas stolen from countless works of actual, original art run through an esoteric logic network.
AI can serve as a tool to create art, of course, but the further removed from the process a human is, the less the end result can truly be considered "art".
i won't, but art has intent. AI doesn't.
Pollock's paintings are art. a bunch of paint buckets falling on a canvas in an earthquake wouldn't make art, even if it resembled Pollock's paintings. there's no intent behind it. no artist.
there is no intelligence and there is no art.
People said the exact same thing about CGI, and about photography before that. I wouldn't be surprised if somebody screamed "IT'S NOT ART" at Michelangelo or at the people carving the walls of temples in ancient Egypt.
the "people" you're talking about were talking about tools. I'm talking about intent. Just because you compare two arguments that use similar words doesn't mean the arguments are similar.
AI is a tool used by a human. The human using the tool has an intention and wants to create something with it.
It's exactly the same as painting digital art. But instead of moving the mouse around, or copying other images into a collage, you use the AI tool, which can be pretty complex to use, to create something beautiful.
Do you know what generative art is? It existed before AI. Surely with your gatekeeping you think that's not art either.
I'm so sick of this. there are scenarios in which so-called "AI" can be used as a tool. for example, resampling. it's dodgy, but whatever, let's say the tech is perfected and it truly analyzes data to give a good result rather than stealing other art to match.
but a tool is something that does exactly what you intend for it to do. you can't say 100 dice are collectively "a tool that outputs 600" because you can sit there and roll them for as long as it takes for all of them to turn up sixes, technically. and if you do call it that, that's still a shitty tool, and you did nothing worth crediting to get 600. a robot can do it. and it does. and that makes it not art.
Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.
All it can do now and ever will do is destroy the environment by using oodles of energy, just so some fucker can generate a boring big titty goth pinup with weird hands and weirder feet. Feeding it exponentially more energy will do what? Reduce the amount of fingers and the foot weirdness? Great. That is so worth squandering our dwindling resources to.
Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.
We definitely don't need AGI for AI technologies to be useful. AI, particularly reinforcement learning, is great for teaching robots to do complex tasks, for example. LLMs have a shocking ability relative to other approaches (if limited compared to humans) to generalize to "nearby but different enough" tasks. And once they're trained (and possibly quantized), they (LLMs and reinforcement learning policies) don't require that much more power to run compared to traditional algorithms. So IMO, the question should be "is it worthwhile to spend the energy to train X thing?" Unfortunately, the capitalists have been the ones answering that question, because they can do so at our expense.
For a person without access to big computing resources (me lol), there's also the fact that transfer learning is possible for both LLMs and reinforcement learning. The easiest way to explain transfer learning is this: imagine that I want to learn Engineering, Physics, Chemistry, and Computer Science. What should I learn first so that each subject is easy for me to pick up? My answer would be Math. So in AI speak, if we spend a ton of energy to train an AI to do math and then fine-tune agents to do Physics, Engineering, etc., we can avoid training all the agents from scratch. Fine-tuning can typically be done on "normal" computers with FOSS tools.
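The math-first analogy above can be sketched numerically. This is a toy illustration only, not anyone's actual training setup: all data and names are made up, and linear least squares stands in for real pretraining. The idea is the same, though: "pretrain" a shared feature map once, freeze it, and fit only a small task-specific head for the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretraining": learn a shared feature map on lots of base data.
# (Here the features really are a linear function of the inputs,
# so least squares recovers them; real pretraining is far costlier.)
X = rng.normal(size=(200, 5))            # raw inputs
true_shared = rng.normal(size=(5, 3))    # ground-truth feature map
features = X @ true_shared               # observed features
W_shared, *_ = np.linalg.lstsq(X, features, rcond=None)

# "Fine-tuning": freeze W_shared and fit only a tiny head (3 numbers)
# for a new, related task -- far fewer parameters than starting over.
head_true = np.array([0.3, 1.2, -0.7])   # made-up new-task weights
y_new = features @ head_true             # new-task targets
Z = X @ W_shared                         # frozen, reused features
head, *_ = np.linalg.lstsq(Z, y_new, rcond=None)

pred = Z @ head
print(np.abs(pred - y_new).max())        # tiny: the head alone suffices
```

Because the frozen features already capture the structure both tasks share, the fine-tuning step only has to estimate 3 parameters instead of all 15, which is the whole appeal of transfer learning on modest hardware.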
all it does is remix a huge field of data without even knowing what that data functionally says.
IMO that can be an incredibly useful approach for solving problems whose dynamics are too complex to reasonably model, with the understanding that the obtained solution is a crude approximation to the underlying dynamics.
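One hedged sketch of what that can look like in practice: fitting a cheap surrogate model to samples of a system we treat as too complex to model directly. Everything here (the stand-in function, the polynomial degree) is invented for illustration; real surrogate modeling uses the same shape of workflow with fancier models.

```python
import numpy as np

# Stand-in for a system whose dynamics we pretend are too messy to model.
def expensive_system(x):
    return np.sin(3 * x) + 0.5 * x**2

# Sample the system, then "remix" the samples into a cheap surrogate:
# a least-squares polynomial fit to the observed behavior.
x_train = np.linspace(-2, 2, 40)
y_train = expensive_system(x_train)
coeffs = np.polyfit(x_train, y_train, deg=8)
surrogate = np.poly1d(coeffs)

# The surrogate is only a crude approximation, and only near the sampled
# region; extrapolating outside [-2, 2] would quickly fall apart.
x_test = np.linspace(-2, 2, 101)
err = np.max(np.abs(surrogate(x_test) - expensive_system(x_test)))
print(err)
```

The surrogate never "knows" what the data means; it just interpolates the observed pattern, which is exactly the crude-approximation caveat above.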
IMO I'm waiting for the bubble to burst so that AI can be just another tool in my engineering toolkit instead of the capitalists' newest plaything.
Sorry about the essay, but I really think that AI tools have a huge potential to make life better for us all, but obviously a much greater potential for capitalists to destroy us all so long as we don't understand these tools and use them against the powerful.
Since I don't feel like arguing, I will grant you that you are correct about what you say AI can do. I don't really believe it, but whatever, say it can:
How will these reasonable AI tools emerge out of this under capitalism? And how is it not all still just theft with extra steps that is immoral to use?
Idk. I find it a great coding help. IMO AI tech has legitimate good uses.
Image generation also has great uses without falling into porn. It enables people who don't know how to paint to make some art.
Wow, great, the AI is here to defend itself. Working about as well as you'd think.
Considering most new technology these days is merely a distillation of the ethos of the big corporations, how do you distinguish?
And all for some drunken answers and a few new memes
In my country this kind of AI is being used to find tax fraud more efficiently and to create chatbots that help users understand taxes. Thanks to a much more reliable and limited training set, they do not hallucinate and can provide clear sources for the information given.
Which magical country is this? Can I come?
;-)
I'm actually curious (kind of desperate for some good news nowadays). Not trying to make fun of you.
Spain. AEAT is our tax authority and has begun using AI in recent years, as an early adopter. The Spanish government in general seems very favorable towards AI, and it's funding a nationally trained model.