84 points

Useful in the way that it increases emissions and hopefully leads to our demise because that’s what we deserve for this stupid technology.

64 points

Surely this is better than the crypto/NFT tech fad. At least there is some output from the generative AI that could be beneficial to the whole of humankind rather than lining a few people’s pockets?

36 points

Unfortunately crypto is still somehow a thing. There’s a couple-year-old bitcoin mining facility in my small town that brags about consuming 400 MW of power to operate, and it’s solely owned by a Chinese company.

7 points

I recently noticed a number of bitcoin ATMs that have cropped up where I live - mostly at gas stations and the like. I am a little concerned by it.

-2 points

It takes living with a broken system to understand the fix for it. There are millions of people who have been saved by Bitcoin and the freedom that it brings, they are just mainly in the 2nd and 3rd worlds, so to many people they basically don’t exist.

-22 points

I’m crypto neutral.

But it’s really strange how anti-crypto ideologues don’t understand that the system of states printing money is literally destroying the planet. They can’t see the value of a free, fair, decentralized, automatable accounting system?

Somehow delusional chatbots wasting energy and resources are more worthwhile?

27 points

Printing currency isn’t destroying the planet… the current economic system is doing that, which is the same economic system that birthed crypto.

Governments issuing currency goes back to a time long before our current consumption at all cost economic system was a thing.

9 points

I’m fine doing away with physical dollars printed on paper and coins, but crypto seems to solve none of the problems we have with fiat currency. Instead it keeps consuming unnecessary amounts of energy while being driven by rich investors who would love nothing more than to spend and earn money in an untraceable way.

5 points

While the energy consumption of AI training can be large, there are arguments to be made for its net effect in the long run.

The article’s last section gives a few examples that are interesting to me from an environmental perspective. Using smaller problem-specific models can have a large effect in reducing AI emissions, since emissions don’t scale linearly with model size. AI assistance can indeed increase worker productivity, which doesn’t necessarily decrease emissions, but we have to keep in mind that our bodies are pretty inefficient meat bags. Last but not least, AI literacy can lead to better legislation and regulation.
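
For a rough sense of the scale argument, here’s a back-of-envelope sketch of how inference energy grows with parameter count. Every constant below is an illustrative assumption, not a measurement:

```python
# Rule of thumb: a dense transformer spends roughly 2 * N FLOPs per generated
# token, where N is the parameter count, so inference energy scales roughly
# linearly with model size for the same output.

FLOPS_PER_JOULE = 5e11  # assumed effective accelerator efficiency (illustrative)

def inference_energy_wh(params: float, tokens: int) -> float:
    """Estimate the energy (Wh) to generate `tokens` tokens with a dense model."""
    flops = 2 * params * tokens
    return flops / FLOPS_PER_JOULE / 3600  # joules -> watt-hours

for name, n in [("3B task-specific", 3e9),
                ("70B general-purpose", 70e9),
                ("1.8T frontier-scale (rumored)", 1.8e12)]:
    print(f"{name:>30}: ~{inference_energy_wh(n, tokens=500):.3f} Wh per 500-token reply")
```

Under those assumptions, swapping a frontier-scale model for a small task-specific one cuts per-reply energy by orders of magnitude, which is the whole point of the first example.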

13 points

The argument that our bodies are inefficient meat bags doesn’t make sense. AI isn’t replacing the inefficient meat bag (unless I’m unaware of an AI killing people off), and so far I’ve yet to see AI make any meaningful dent in overall emissions or research. A ChatGPT query can use 10x more power than a regular Google search, and there is no chance the result is 10x more useful. AI feels more like it’s adding to the enshittification of the internet and, because of its energy use, the enshittification of our planet. IMO if these companies can’t afford to build renewables to support their use then they can fuck off.

3 points

> Using smaller problem-specific models can have a large effect in reducing AI emissions

Sure, if you consider anything at all to be “AI”. I’m pretty sure my spellchecker is relatively efficient.

> AI literacy can lead to better legislation and regulation.

What do I need to read about my spellchecker? What legislation and regulation does it need?

4 points

Theoretically we could slow down training and coast on fine-tuning existing models. Once an AI is trained, it doesn’t take that much energy to run.

Everyone was racing towards “bigger is better” because it worked up to GPT-4, but word on the street is that raw training is giving diminishing returns, so the massive spending on compute is just a waste now.
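
For a sense of why coasting is plausible, compare the standard rules of thumb for training versus serving compute. The model and corpus sizes below are assumptions for illustration:

```python
# Rules of thumb from the scaling-law literature: training a dense transformer
# costs ~6 * N * D FLOPs (N = parameters, D = training tokens), while serving
# it costs ~2 * N FLOPs per generated token.

N = 70e9   # assumed model size: 70B parameters
D = 2e12   # assumed training corpus: 2T tokens

train_flops = 6 * N * D
serving_tokens_for_same_cost = train_flops / (2 * N)  # simplifies to 3 * D

print(f"One training run: ~{train_flops:.1e} FLOPs")
print(f"The same compute serves ~{serving_tokens_for_same_cost:.0e} generated tokens")
```

Under those assumptions, one training run costs as much as serving trillions of tokens, which is why “train less, serve more” saves energy.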

2 points

The issue is, we’re reaching the limits of what GPT technologies can do, so we have to retrain for the new ones, and the currently available data has already been poisoned by AI-generated garbage, which will make the adoption of new technologies harder.

2 points

It’s a bit more complicated than that.

New models are sometimes targeting architecture improvements instead of pure size increases. Any truly new model still needs training time; it’s just that the training time isn’t going up as much as it used to. This means that open-weights and open-source models can start to catch up to large proprietary models like ChatGPT.

From my understanding GPT-4 is still a huge model and the best performing. The other models are starting to get close, though, and can already exceed GPT-3.5 Turbo, which was the previous standard to beat and is still what a lot of free chatbots are using. Some of these models are still absolutely huge, even if not quite as big as GPT-4. For example, Goliath is 120 billion parameters: still pretty chonky and intensive to run, even if it’s not GPT-4 sized. Not that anyone actually knows how big GPT-4 is. Word on the street is that it’s an MoE model like Mixtral, which runs faster than a normal model of the same size, but again, no one outside OpenAI can say with certainty.
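
For what it’s worth, the reason an MoE model runs faster than a dense model of the same total size is that a router activates only a few “experts” per token. Here’s a toy sketch of top-k routing; it’s didactic only, not Mixtral’s actual implementation:

```python
import numpy as np

# Toy mixture-of-experts layer: a router scores the experts for each token and
# only the top-k of them do any work, so a small fraction of the total
# parameters is active per token.

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2

experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
router = rng.standard_normal((d, n_experts))                       # gating weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                  # softmax over the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = moe_forward(rng.standard_normal(d))
print(f"Active experts per token: {top_k}/{n_experts} "
      f"({top_k / n_experts:.0%} of expert weights)")
```

With 2 of 8 experts active, only a quarter of the expert weights do work on any given token, which is roughly the trick Mixtral is reported to use.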

You generally find that OpenAI models are larger and slower, whereas the other models focus more on giving the best performance at a given size, since training and using huge models is much more demanding. So far the larger OpenAI models have done better, but this could change as open-source models see faster improvement in the techniques they use. You could say open-weights models rely on cunning architectures and fine-tuning, while OpenAI uses brute strength.

79 points

[image: Gartner hype cycle chart]

26 points

Is it me or is there something very facile and dull about Gartner charts? I’m thinking especially of the “””magic””” quadrant one (wow, you ranked competitors in some area along TWO axes!), but even this chart feels like such a mundane observation that it seems like frankly undeserved advertising for Gartner, given how little it actually says.

19 points

And it isn’t even true in many cases. Take the internet and the dotcom bubble, for example: it actually became much bigger and more important than anyone anticipated in the 90s.

14 points

The graph for VR would also be quite interesting, given how many hype cycles it has had over the decades.

4 points

It’s also false in the other direction: NFTs never got a “Plateau of Productivity”.

A lot of tech hype is just convoluted scams or Ponzi schemes.

4 points

The trough of disillusionment sounds like my former depression. The slope of enlightenment sounds like a really fun water slide.

3 points

Where are we on this? No way we’re at the bottom of the trough yet.

3 points

Well, how disappointed are you feeling, personally?

Do you see your negative opinions of generative AI becoming more intense or deeper within the next 6-12 months, or have they hit a plateau of sustained disappointment mediated by the prior 6-12 months?

2 points

Going down to disillusionment two months ago.

2 points

Fascinating. Thank you.

1 point

This is the human reaction to a lot of stuff. It’s interesting how much it looks like a PID loop. https://theautomization.com/pid-control-basics-in-detail-part-2/
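
If you want to see the resemblance, here’s a minimal discrete PID simulation. The gains are hand-picked (nothing principled about them) so the response overshoots once, dips, and settles, which is roughly the shape of the hype curve:

```python
# Minimal discrete PID loop driving a crude first-order system to a setpoint.

kp, ki, kd = 2.0, 0.5, 0.3           # hand-picked gains (illustrative)
setpoint, value = 1.0, 0.0
integral, prev_error = 0.0, setpoint - value
dt = 0.1

for step in range(100):
    error = setpoint - value
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    value += output * dt * 0.5       # crude first-order plant response
    prev_error = error
    if step % 10 == 0:
        print(f"t={step * dt:4.1f}  value={value:.3f}")
```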

1 point

A poorly tuned PID loop…

2 points

One overshoot, one lesser undershoot, and then it hits the target? When it’s a different thing each time? Makes me think maybe there is hope for these monkeys yet!

1 point

I’ve noticed a phenomenon where detractors or haters inflate hype that doesn’t really exist, and they do it so much that their stories take on a whole new reality that never existed. It’s self-feeding, and then when that hype dies out, the “reality” sets in and follows pretty much the same trend as that chart.

I’ve seen it with everything, a lot of the time with stuff like the right building stories about immigrants. I think it’s media that drives it.

30 points

LLMs need to get better at saying “I don’t know.” I would rather an LLM admit that it doesn’t know the answer instead of making up a bunch of bullshit and trying to convince me that it knows what it’s talking about.

15 points

LLMs don’t “know” anything. The true things they say are just as much bullshit as the falsehoods.

10 points

I work on LLMs for a big tech company. The misinformation on Lemmy is at best slightly disingenuous and at worst people parroting falsehoods without knowing the facts. For that reason, take everything (even what I say) with a huge pinch of salt.

LLMs do NOT just parrot back falsehoods; otherwise the “best” model would just be the “best” data in the best fit. The best way to think about an LLM is as a huge conductor of data AND of guiding expert services. The content is derived from trained data, but the model will also hit hundreds of different services to get context, find real-time info, disambiguate, etc. A huge part of LLM work is getting your models to basically say “this feels right, but I need to find out more to be correct”.

With that said, I think you’re 100% right. Sadly, and I think I can speak for many companies here, knowing that you’re right is hard to get right, and LLMs are probably right a lot in instances where the confidence in an answer is low. I would rather an LLM say “I can’t verify this, but here is my best guess” or “here’s a possible answer, let me go away and check”.

5 points

I thought the tuning procedures, such as RLHF, kind of mess up the probabilities, so you can’t really tell how confident the model is in the output (and I’m not sure how accurate those probabilities were in the first place)?
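
For what the naive approach looks like, here’s a small sketch that uses next-token entropy as an uncertainty proxy. The two distributions are invented for illustration; the point is that tuning can sharpen them regardless of whether the answer is right:

```python
import numpy as np

# Naive confidence proxy: entropy of the next-token distribution.
# Low entropy = the model looks "sure"; high entropy = it is hedging.
# RLHF-style tuning tends to sharpen distributions, so a tuned model can
# look confident here even when it is wrong.

def entropy_bits(probs: np.ndarray) -> float:
    probs = probs[probs > 0]
    return float(-(probs * np.log2(probs)).sum())

base_model = np.array([0.40, 0.30, 0.20, 0.10])   # hedged distribution (invented)
tuned_model = np.array([0.97, 0.01, 0.01, 0.01])  # sharpened after tuning (invented)

print(f"base model:  {entropy_bits(base_model):.2f} bits of uncertainty")
print(f"tuned model: {entropy_bits(tuned_model):.2f} bits (looks 'confident')")
```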

Also, it seems that, at a certain point, the more context the models are given, the less accurate the output. A few times I asked ChatGPT something and it used its browsing functionality to look it up, and it was still wrong even though the sources were correct. But when I disabled “browsing” so it would just use its internal model, it was correct.

It doesn’t seem there are too many expert services tied to ChatGPT (I’m just using this as an example, because that’s the one I use). There’s obviously some kind of guardrail system for “safety,” there’s a search/browsing system (it shows you when it uses this), and there’s a Python interpreter. Of course, OpenAI is now very closed, so they may be hiding that it uses expert services (beyond the “experts” in the MoE model they’re speculated to be using).

1 point

Oh for sure, it’s not perfect, and IMO this is where the current improvements and research are going. If you’re relying on an LLM to hit hundreds of endpoints with complex contracts, it’s going to either hallucinate what it needs to do or call several and go down the wrong path. I would imagine that most systems do this in a very closed way anyway and will only show you what they want to show you. Logically speaking, for questions like “should I wear a coat today” they’ll need a service to check the weather in your location, and a service to get information about the user and their location.
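
One common mitigation is to validate the model’s proposed call against a strict allowlist before executing anything, so a hallucinated endpoint fails closed. A minimal sketch; the tool registry and call format here are hypothetical:

```python
# Validate an LLM's proposed tool call before executing it.
# The registry and the call format are hypothetical illustrations.

TOOLS = {
    "get_weather": {"required": {"location"}},
    "get_user_profile": {"required": {"user_id"}},
}

def validate_call(call: dict) -> bool:
    """Fail closed on unknown tools or missing required arguments."""
    spec = TOOLS.get(call.get("name"))
    if spec is None:
        return False  # hallucinated endpoint: refuse to execute
    return spec["required"] <= set(call.get("args", {}))

# Plausible model outputs for "should I wear a coat today?"
proposed = {"name": "get_weather", "args": {"location": "Oslo"}}
hallucinated = {"name": "get_forecast_v2", "args": {}}

print(validate_call(proposed))      # True: known tool, required args present
print(validate_call(hallucinated))  # False: endpoint does not exist
```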

3 points

It’s an interesting point. If I need to confirm that I’m right about something I will usually go to the internet, but I’m still at the behest of my reading comprehension skills. These are perfectly good, but the more arcane the topic, and the more obtuse the language used in whatever resource I consult, the more likely I am to make a mistake. The resource I choose also has a dramatic impact - e.g. if it’s the Daily Mail vs the Encyclopaedia Britannica. I might be able to identify bias, but I also might not, especially if it conforms to my own. We expect a lot of LLMs that we cannot reliably do ourselves.

9 points

I hate to break this to everyone who thinks that “AI” (LLM) is some sort of actual approximation of intelligence, but in reality, it’s just a fucking fancy ass parrot.

Our current “AI” doesn’t understand anything or have context, it’s just really good at guessing how to say what we want it to say… essentially in the same way that a parrot says “Polly wanna cracker.”

A parrot “talking to you” doesn’t know that Polly refers to itself or that a cracker is a specific type of food you are describing to it. If you were to ask it, “which hand was holding the cracker…?” it wouldn’t be able to answer the question… because it doesn’t fucking know what a hand is… or even the concept of playing a game or what a “question” even is.

It just knows that if it makes its mouth go “blah blah blah” in a very specific way, a human is more likely to give it a tasty treat… so it mushes its mouth parts around until its squawk becomes a sound that will elicit such a reward from the human in front of it… which is similar to how LLM training works.

Oversimplification, but that’s basically it… a trillion-dollar power-grid-straining parrot.

And just like a parrot - the concept of “I don’t know” isn’t a thing it comprehends… because it’s a dumb fucking parrot.

The only thing the tech is good at… is mimicking.

It can “trace the lines” of any existing artist in history, and even blend their works, which is indeed how artists learn initially… but an LLM has nothing that can “inspire” it to create the art… because it’s just tracing the lines like a child would their favorite comic book character. That’s not art. It’s mimicry.

It can be used to transform your own voice to make you sound like most celebrities almost perfectly… it can make the mouth noises, but has no idea what it’s actually saying… like the parrot.

You get it?

2 points

LLMs are just that - Ms, that is to say, models. And trite as it is to say - “all models are wrong, some models are useful”. We certainly shouldn’t expect LLMs to do things that they cannot do (i.e. possess knowledge), but it’s clear that they can do other things surprisingly effectively, particularly providing coding support to developers. Whether they do enough to warrant their energy/other costs remains to be seen.

4 points

Knowing the limits of your knowledge can itself require an advanced level of knowledge.

Sure, you can easily tell about some things, like if you know how to do brain surgery or if you can identify the colour red.

But what about the things you think you know but are wrong about?

Maybe your information is outdated, like you think you know who the leader of a country is but aren’t aware that there was just an election.

Or maybe you were taught it one way in school but it was oversimplified to the point of being inaccurate (like thinking you can do physics calculations but end up treating everything as frictionless spheres in gravityless space because you didn’t take the follow up class where the first thing they said was “take everything they taught you last year and throw it out”).

Or maybe the area has since developed beyond what you thought were the limits. Like if someone wonders if they can hook their phone up to a monitor and another person takes one look at the phone and says, “it’s impossible without a VGA port”.

Or maybe applying knowledge from one thing to another due to a misunderstanding. Like overhearing a mathematician correcting a colleague that said “matrixes” with “matrices” and then telling people they should watch the Matrices movies.

Now consider that not only are AIs subject to these things themselves, but the information they are trained on is also subject to them and their training set may or may not be curated for that. And the sheer amount of data LLMs are trained on makes me think it would be difficult to even try to curate all that.

Edit: a word

3 points

if(lying)

don’t();

1 point

Scientists have developed just that recently. There was a paper about it. It’s not implemented in commercial models yet.

0 points

Get the average human to admit they were wrong, and LLMs will follow suit

24 points

To be fair, it is useful in some regards.

I’m not a huge fan of Amazon, but last time I had an issue with a parcel it was sorted out insanely fast by the AI assistant on the website.

Within literally 2 minutes I’d had a refund confirmed. No waiting for people to eventually pick up the phone after 40 minutes. No misunderstanding or annoying questions. The moment I pressed send on my message it instantly started formulating a reply.

The truncated version went:

“Hey I meant to get [x] delivery, but it hasn’t arrived. Can I get a refund?”

“Sure, your money will go back into [y] account in a few days. If the parcel turns up in the meantime, you can send it back by dropping it off at [z]”

Done. Absolutely painless.

64 points

How is a chatbot here better, faster, or more accurate than just a “return this” button on a web page? Chatbots like that take 10x the programming effort and actively make the user experience worse.

13 points

Presumably there could be nuance to the situation that the chat bot is able to convey?

8 points

But that nuance is probably limited to a paragraph or two of text. There’s nothing the chatbot knows about the returns process at a specific company that isn’t contained in that paragraph. The question is just whether that paragraph is shown directly to the user, or if it’s filtered through an LLM first. The only thing I can think of is that the chatbot might be able to rephrase things for confused users and help stop users from ignoring the instructions and going straight to human support.
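
If there is an LLM in the loop, the pattern presumably looks like stuffing that one paragraph into the prompt and forbidding the model to go beyond it. A minimal sketch, where llm() is a hypothetical stand-in for whatever completion API the company actually uses:

```python
# Sketch of "a policy paragraph filtered through an LLM": the bot's only
# knowledge is the paragraph stuffed into its prompt. llm() is a hypothetical
# stand-in, not a real API.

RETURNS_POLICY = (
    "Items may be returned within 30 days for a full refund. "
    "Drop the parcel off at a partner location; refunds post in 3-5 days."
)

def llm(prompt: str) -> str:
    # A real system would call a completion API here; this canned reply
    # just keeps the sketch runnable.
    return "Sure - you have 30 days, and the refund posts in 3-5 days."

def answer_return_question(user_message: str) -> str:
    prompt = (
        "You are a returns assistant. Answer ONLY from the policy below. "
        "If the policy does not cover the question, offer a human agent.\n\n"
        f"Policy: {RETURNS_POLICY}\n\nCustomer: {user_message}\nAssistant:"
    )
    return llm(prompt)

print(answer_return_question("My parcel never arrived. Can I get a refund?"))
```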

3 points

Like a comment field on a web form?

4 points

And it could hallucinate, so you would need to add further validation after the fact

29 points

That has nothing to do with AI and is strictly a return policy matter. You can get a return in less than 2 minutes by speaking to a human at Home Depot.

Businesses choose to either prioritize customer experience, or not.

3 points

There’s a big claim from Klarna (one that I am not aware has been independently verified) that customers prefer their bot.

The cynic might say they were probably undertraining a skeleton crew of underpaid support reps. More optimistically, perhaps so many support inquiries are simple enough that responding to them with a technology that can type a million words per minute was always likely to increase customer satisfaction.

Personally, I’m happy with environmentally-acceptable and efficient technologies that respect consumers… assuming they are deployed in a world with robust social safety nets like universal basic income. Heh

10 points

You can just go to the order and click like 2 buttons. Chat is for when a situation is abnormal, and I promise you their bot doesn’t know how to address anything like that.

5 points

I like using it to assist me when I am coding.

3 points

Do you feel like elaborating any? I’d love to find more uses. So far I’ve mostly found it useful in areas where I’m very unfamiliar. Like I do very little web front end, so when I need to, the option paralysis is gnarly. I’ve found things like Perplexity helpful to allow me to select an approach and get moving quickly. I can spend hours agonizing over those kinds of decisions otherwise, and it’s really poorly spent time.

I’ve also found it useful when trying to answer questions about best practices or comparing approaches. It sorta does the reading and summarizes the points (with links to source material), pretty perfect use case.

So both of those are essentially “interactive text summarization” use cases - my third is as a syntax helper, again in things I don’t work with often. If I’m having a brain fart and just can’t quite remember the ternary operator syntax in that one language I never use…etc. That one’s a bit less impactful but can still be faster than manually inspecting docs, especially if the docs are bad or hard to use.

With that said I use these things less than once a week on average. Possible that’s just down to my own pre-existing habits more than anything else though.

4 points

An example from today: I adjusted the existing email functionality of the application I’m working on to use Handlebars templates. I was able to reformat the existing HTML stored as variables into the templates, then adjust the helper functions used to distribute the emails to work with Handlebars rather than the previous system, all in one fell swoop. I could have done it by hand, but it’s repetitive work.

I also use it a lot when troubleshooting issues, such as asking it to suggest how to resolve error messages when I’m having trouble understanding them. Just pasting the error into the chat has gotten me unstuck too many times to count.

It can also be super helpful when trying to get different versions of the packages installed in a code base to line up correctly, which can be absolutely brutal for me when switching between multiple projects.

Asking specific little questions that would otherwise take up the time of a coworker or the senior dev lets me understand the specifics of what I’m looking at super quickly without wasting people’s time. I work mainly with existing code, so it’s really helpful for breaking down other people’s junk when I’m having trouble following it.

2 points

So how “intelligent” do you think the Amazon returns bot is? As smart as a choose-your-own-adventure book, or a gerbil, or a human, or beyond? Has it given you any useful life advice or anything?

3 points

Doesn’t need to be “intelligent”, it needs to be fit for purpose, and it clearly is.

The closest comparison you made was to the cyoa book, but that’s only for the part where it gives me options. It has to have the “intelligence” to decipher what I’m asking it and then give me the options.

The fact it can do that faster and more efficiently than a human is exactly what I’d expect from it. Things don’t have to be groundbreaking to be useful.

2 points

Smarter than Zork, worse than a human. Faster response times than humans though.

1 point
Deleted by creator
19 points

Useful for scammers and spam

