3 points

Every answer so far is wrong.

It can be used for good purposes, though I’m not sure I’d characterize creating a personalized Jarvis as good per se. But, more broadly, capitalist inventions do not need to be used only by capitalists for capitalist ends.

9 points

Every answer so far is wrong.

I wouldn’t say wrong so much as leaving out the detail that LLMs aren’t evil and that open source LLMs are really what the world should be aiming for, if anything. Like any tool, it can be used as a weapon and for ill-purposes. I can use a hammer to build a house as much as I can use it to cave in someone’s skull.

But even in the open source world, LLMs have not led to a massive increase in new tools, in bugs found, or in open source productivity… all things LLMs promise but have yet to deliver on in the open source world. And given how much energy they use, we ought to be asking whether it’s truly beneficial to burn that much energy on something that has yet to prove it actually brings the promised open source productivity gains.

20 points

It’s a capitalist invention and, therefore, will be used for whatever capitalists deem it profitable to be. Once the money for AI home assistants starts rolling in, then you’ll see it adopted for that purpose.

-9 points

It’s a free market invention and, therefore, will be used by whatever a free market decides it should be used for.

1 point

whatever a free market decides it should be used for

People say that AIs don’t “think” or “decide” things, but I think it’s better to personify an AI/LLM than “a free market”, lol

18 points

The people already with the money have orders of magnitude more freedom on average to decide and pursue opportunities.

Free market inventions do not guarantee persistent and open access.

-3 points

That’s just having money, and it works like that in every economy.

2 points

I think the government should regulate the AI market and create standards that prevent abuse by bad actors (such as image generators not being able to make CSAM, etc.).

1 point

Someone’s been watching way too many movies and isn’t familiar yet with how mind-bogglingly stupid “AI” actually is.

JARVIS can think on its own, it doesn’t need to be told to do anything. LLMs cannot think on their own, they have no intention, they can only respond to input. They cannot create “thoughts” on their own without being prompted by a human.

The reason they spout so much BS is because they don’t even really think. They cannot tell the difference between truth and fiction and will be just as happily confident in the truth of their statements whether they are being truthful or lying because they don’t know the fucking difference.

We’re fucking worlds away from a JARVIS, man.

Like half the stuff they claim AI does. Take those “AI stores” Amazon had, where you just picked up stuff and walked out with it and the “AI would intelligently figure out what you bought and apply it to your account.” That “AI” was actually a bunch of low-paid people in third-world countries reviewing the video footage. It was never fucking AI to begin with, because nothing we have even comes close to that fucking capability without human intervention.

62 points

You can’t turn a spicy autocorrect into anything even remotely close to Jarvis.

3 points

It’s not autocorrect, it’s a text predictor. So I’d say you could definitely get close to JARVIS, especially when we don’t even know why it works yet.

15 points

You’re just being pedantic. Most autocorrects/keyboard autocompletes use text predictors to function. Look at the 3 suggestions on your phone keyboard whenever you type. That’s also a text predictor (granted, a much simpler one).

Text predictors (obviously) predict text, and as such don’t have any actual understanding of the text they are outputting. An AI that doesn’t understand its own outputs isn’t going to achieve anything close to a sci-fi depiction of an AI assistant.

It’s also not like the devs are confused about why LLMs work. If you had every publicly uploaded sentence since the creation of the Internet as a training reference I would hope the resulting model is a pretty good autocomplete, even to the point of being able to answer some questions.
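To make the phone-keyboard comparison concrete, here’s a toy next-word predictor of the kind a simple keyboard suggestion feature could be built on. This is a minimal sketch for illustration only (the corpus, `followers`, and `suggest` are made up, and real keyboards use far more sophisticated models than bigram counts):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real keyboard would train on far more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it (bigram counts).
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word, k=3):
    """Return up to k most frequent words seen after `word`."""
    return [w for w, _ in followers[word].most_common(k)]

print(suggest("the"))  # most frequent follower first: 'cat'
```

The point of the sketch: the model only tracks which tokens tend to follow which, with no notion of what any word means. An LLM is the same basic idea scaled up enormously, which is why “text predictor” and “autocomplete” are related but not identical claims.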

-7 points

Yes, autocorrect may use text predictors. No, that does not make text predictors “spicy autocorrect”. The denotation may be correct, but the connotation isn’t.

Text predictors (obviously) predict text, and as such don’t have any actual understanding on the text they are outputting. An AI that doesn’t understand its own outputs isn’t going to achieve anything close to a sci-fi depiction of an AI assistant.

There’s a large philosophical debate about whether we actually know what we’re thinking, but I’m not going to get into that. All I’ll point to is the Chinese room thought experiment, which suggests that perhaps an AI doesn’t need to understand things to display enough apparent intelligence for most functions.

It’s also not like the devs are confused about why LLMs work.

Yes they are. All they know is that if you train a text predictor a ton, at one point it hits a bottleneck of usability way below targets, and then one day it will suddenly surpass that bottleneck for no apparent reason.

13 points

Any tool, in human hands, will be used for evil. The problem is humans.
