Avram Piltch is the editor in chief of Tom’s Hardware, and he’s written a thoroughly researched article breaking down the promises and failures of LLM AIs.

2 points

That’s a philosophical debate we can’t really answer, not a lie; the question is whether we do anything other than copy. Without any doubt the biggest elephant in the room is that AIs don’t yet remember and iterate like we do, but that’s probably just a matter of time. Beyond that, the very different environment we learn in is another huge issue for any comparison. It’s a tricky question we might never know the answer to, but it’s fascinating to think about, and I don’t think rejecting the idea altogether is an especially good answer.

12 points

There’s a lot of opinion in here written as if it were fact.

1 point

Here I was thinking I could trust Mr Tom

3 points

In the long run it doesn’t really matter whether the LLM is trained on all the information out there, as the LLM will be able to search the web on demand and report back with what it finds. BingChat essentially already does that, and we have a few summarizer bots doing similar jobs. The need to visit websites directly and wade through all the clickbait and ads in the hope of finding the bit of information you’re actually interested in will be over.

The LLM will be Adblock, ReaderMode, SQL and a lot more rolled into one, a Swiss army knife for accessing and transforming information. Not sure where that leaves the journalists, but cheap clickbait might lose a lot of value.
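
Conceptually it doesn’t take much. Here’s a rough Python sketch of the “fetch, strip, summarize” loop; `summarize_with_llm` is a hypothetical stand-in for whatever model API you actually use, not a real library call:

```python
import re
import requests

def fetch_readable_text(url: str) -> str:
    """Download a page and crudely strip scripts, styles, and tags."""
    html = requests.get(url, timeout=10).text
    html = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)  # drop script/style blocks
    text = re.sub(r"<[^>]+>", " ", html)                       # drop remaining tags
    return re.sub(r"\s+", " ", text).strip()

def summarize_with_llm(text: str, question: str) -> str:
    # Hypothetical: hand the cleaned text plus the question to your model of choice.
    raise NotImplementedError

def answer_from_web(url: str, question: str) -> str:
    page = fetch_readable_text(url)
    return summarize_with_llm(page[:8000], question)  # truncate to fit a context window
```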

1 point

Bruh, the LLM will be trained on 90% clickbait. It’s gonna be just as trashy.

15 points

Let’s be clear on where the responsibility belongs, here. LLMs are neither alive nor sapient. They themselves have no more “rights” than a toaster. The question is whether the humans training the AIs have the right to feed them such-and-such data.

The real problem is the way these systems are being anthropomorphized. Keep your attention firmly on the man behind the curtain.

1 point

You know, I think ChatGPT is way ahead of a toaster. Maybe it’s more like a small animal of some kind.

2 points

One could equally claim that the toaster was ahead, because it does something useful in the physical world. Hmm. Is a robot dog more alive than a Tamagotchi?

1 point

There are a lot of subjects where ChatGPT knows more than I do.

Does it know more than someone who has studied that subject their whole life? Of course not. But those people aren’t available to talk to me on a whim. ChatGPT is available, and it’s really useful. Far more useful than a toaster.

As long as you only use it for things where a mistake won’t be a problem, it’s a great tool. You can also use it for “risky” decisions, as long as you take the information it gives you to an expert for verification before acting.

7 points

Yes, these are the same people who are charging a fee to use their AI and profiting. Placing the blame and discussion on the AI itself conveniently overlooks a lot here.

12 points

Machines don’t learn like humans yet.

Our brains are a giant electrical/chemical system that somehow creates consciousness. We might be able to create that in a computer, and the day it happens, what will be the difference between a human and a true AI?

3 points

If you read the article, there are “experts” saying that human comprehension is fundamentally computationally intractable, which is basically a religious standpoint. Like, ChatGPT isn’t intelligent yet, partly because it doesn’t really have long-term memory, but yeah, there’s overwhelming evidence the brain is a machine like any other.

2 points

“fundamentally computationally intractable”

…using current AI architecture, and the insight isn’t new, it’s maths. This is currently the best idea we have about the subject. Trigger warning: Cybernetics, and lots of it.

Meanwhile, yes, of course brains are machines like any other; claiming otherwise is claiming that brains can compute incomputable functions, which is a physical and logical impossibility. And it’s fucking annoying to talk about this topic with people who don’t understand computability. It usually turns into a shouting match of “you’re claiming the existence of something like a soul, some metaphysical origin of the human mind” vs. “no I’m not” vs. “yes you are, but you don’t understand why”.
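
For anyone who wants the incomputability point made concrete, the textbook example is the halting problem. A toy Python rendering of the diagonal argument; `halts` is hypothetical by construction, which is exactly the point:

```python
def halts(program, arg) -> bool:
    """Hypothetical total decider: True iff program(arg) eventually halts.
    No such function can exist; assume it does and watch what happens."""
    ...

def diagonal(program):
    if halts(program, program):
        while True:      # loop forever exactly when the decider says we'd halt
            pass
    return               # halt exactly when the decider says we'd loop

# diagonal(diagonal) halts if and only if halts(diagonal, diagonal) is False,
# contradicting what halts() is supposed to report. Hence no such decider exists.
```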

1 point

“…using current AI architecture, and the insight isn’t new, it’s maths.”

That is not what van Rooij et al. said, and they are the ones cited here. They published their essay here, which I haven’t really read, but which appears to make an argument about any possible computer. They’re psychologists and I don’t see any LaTeX in there, so they must be missing something.

Unfortunately I can’t open your link, although it sounds interesting. A feedforward network can approximate any computable function if it gets to be arbitrarily large, but depending on how you want to feed an agent inputs from its environment and read its actions, a single function might not be enough.
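
For what it’s worth, the approximation half of that is easy to demo on a toy problem. A minimal numpy sketch, assuming one tanh hidden layer trained by plain full-batch gradient descent to fit sin(x) on [0, π]:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 32                               # widen this to approximate more closely
W1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(20_000):
    h = np.tanh(x @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                    # network output
    err = pred - y
    # Backpropagate the squared-error loss.
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("mean abs error:", float(np.abs(pred - y).mean()))  # shrinks as training proceeds
```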
