67 points

There’s magic?

48 points

Only if you believe in it. Many CEOs do. They’re very good at magical thinking.

6 points

I have a counter-argument. From an evolutionary standpoint, if computing capacity keeps doubling exponentially, isn’t it extraordinarily arrogant of humans to assume that their evolutionarily stagnant brains will remain relevant for much longer?

10 points

You can make the same argument about humans that you do about AI, but from a biological and societal standpoint. Barring any jokes about certain political or geographical stereotypes, humans have gotten “smarter” than we used to be. We are very adaptable, and with improvements to diet and education, we have managed to stay ahead of the curve. We didn’t peak at hunter-gatherer. We didn’t stop at the Renaissance. And we blew right past the industrial revolution. I’m not going to channel my “Humanity, Fuck Yeah” inner wolf howl, but I have to give our biology props. The body is an amazing machine, and even though we can look at things like the current crop of AI and think, “Welp, that’s it, humans are done for,” I’m sure a lot of people thought the same at other pivotal moments in technological and societal advancement. Here I am, though, farting taco bell into my office chair and typing about it.

4 points

If you keep doubling the number of fruit flies exponentially, isn’t it likely that humanity will find itself outsmarted?

The answer is no, it isn’t. Quantity does not quality make, and all our current AI tech is about ways to breed fruit flies that fly left or right depending on what they see.

4 points

As a counter-argument against that: companies have been trying to make self-driving cars work for 20 years. Processing power has increased a million-fold and the things still get stuck. Pure processing power isn’t everything.

23 points

Magic as in street magician, not magic as in wizard. Lots of the things that people claim AI can do are like a magic show: it’s amazing if you look at it from the right angle, and with the right skill you can hide the strings holding it up, but if you try to use it in the real world it falls apart.

5 points

I wish there was actual magic

7 points

It would make science very difficult.

1 point

Look at quantum mechanics

15 points

Everything is magic if you don’t understand how the thing works.

10 points

I wish. I don’t understand why my stomach can’t handle corn, but it doesn’t lead to magic. It leads to pain.

3 points

Have you eaten hominy corn? The nixtamalisation process makes it digestible.

8 points

Sam Altman will make a big pile of investor money disappear before your very eyes.

7 points

If you’re a techbro, this is the new magic shit, man! To the moooooon!

7 points

The masses have been treating it like actual magic since the early stages and are only slowly warming up to the idea that it’s calculations. Calculations of things that are often more than the sum of their parts, as people start to realize. Well, some people anyway.

2 points

oh the bubble’s gonna burst sooner than some may think

1 point

Next week, some say

3 points

If only.

50 points

Yeah, try talking to ChatGPT about things you really know about in detail. It will fail to show you the hidden, niche things (unless you mention them yourself), it will make lots of stuff up that you would not pick up on otherwise (and once you point it out, the bloody thing will “I knew that” you, sometimes even when you are wrong), and it is very shallow in its details. Sometimes it just repeats your question back to you as a well-written essay. And that’s fine… it’s still a miracle that it manages to be as reliable and entertaining as some random bullshitter you talk to in a bar, and it’s good for brainstorming too.

24 points

It’s like watching mainstream media news talk about something you know about.

6 points

Oh good comparison

5 points

Haha, definitely, it’s infuriating and scary. But it also depends on what you are watching for. If you are watching TV, you do it for convenience or entertainment. LLMs have the potential to be much more than that, but unless a very open and accessible ecosystem is created for them, they are going to be whatever our tech overlords decide they want them to be in their boardrooms to milk us.

7 points

Well, if you read the article, you’ll see that’s exactly what is happening. Every company you can imagine is investing the GDP of smaller nations into AI. Google, Facebook, Microsoft. AI isn’t the future of humanity. It’s the future of capitalist interests. It’s the future of profit chasing. It’s the future of human misery. Tech companies have trampled all over human happiness and sanity to make a buck. And with the way surveillance capitalism is moving—facial recognition being integrated into insane places, like the M&M vending machine, the huge market for our most personal, revealing data—these could literally be two horsemen of the apocalypse.

Advancements in tech haven’t helped us as humans in a while. But they sure did streamline profit centers. We have to wrest control of our future back from corporate America, because this plutocracy driven by these people is very, very fucking dangerous.

AI is not the future for us. It’s the future for them. Our jobs getting “streamlined” will not mean the end of work and the rise of UBI. It will mean stronger, more invasive corporations wielding more power than ever while more and more people suffer, are cast out and told they’re just not working hard enough.

2 points

I don’t think they have that much potential. They are just uncontrollable, it’s a neat trick but totally unreliable if there isn’t a human in the loop. This approach is missing all the control systems we have in our brains.

13 points

I really only use it for the “oh damn, I know there’s a great one-liner to do that in Python” sort of thing. It’s usually right, and if it isn’t, it’ll be immediately obvious and you can move on with your day. For anything more complex, the gaslighting and subtle errors make it unusable.
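Concretely, the kind of one-liner lookup meant here (a made-up illustration, not anything from the comment itself) might be “how do I transpose a list of lists again?”:

```python
# Made-up example of a "there's a one-liner for that" question:
# transposing a list of lists with zip.
matrix = [[1, 2, 3], [4, 5, 6]]

# zip(*matrix) groups the i-th element of every row together;
# each resulting tuple is turned back into a list.
transposed = [list(row) for row in zip(*matrix)]
print(transposed)  # [[1, 4], [2, 5], [3, 6]]
```

If the model gets something like this wrong, one run in the REPL shows it immediately, which is why this failure mode is cheap.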

3 points

Oh yes, it’s great for that. My google-fu was never good enough to “find the name of this thing that does this, but only when in this circumstance”

7 points

ChatGPT is great for helping with specific problems. Google search for example gives fairly general answers, or may have information that doesn’t apply to your specific situation. But if you give ChatGPT a very specific description of the issue you’re running into it will generally give some very useful recommendations. And it’s an iterative process, you just need to treat it like a conversation.

3 points

It’s also a decent writer’s room brainstorm kind of tool, although it can’t really get beyond the initial pitch as it’s pretty terrible at staying consistent when trying to clean up ideas.

5 points

I find it incredibly helpful for breaking into new things.

I want to learn terraform today, no guide/video/docs site can do it as well as having a teacher available at any time for Q&A.

Aside from that, it’s pretty good for general Q&A on documented topics, and great when provided context (i.e. a full 200MB export of documentation from a tool or system).

But the moment I try to dig deeper into something I’m an expert in, it just breaks down.

2 points

That’s why I’ve found it somewhat dangerous to use for jumping into new things. It doesn’t care about best practices and will just help you enough to let you shoot yourself in the foot.

3 points

Just wait for MeanGirlsGPT

49 points

Good. It’s dangerous to view AI as magic. I’ve had to debate way too many people who think LLMs are actually intelligent. It’s dangerous to overestimate their capabilities lest we use them for tasks they can’t perform safely. It’s very powerful, but the fact that it’s totally non-deterministic and unpredictable means we need to very carefully design systems that rely on LLMs, with heavy guardrails.

14 points

Conversely, there are way too many people who think that humans are magic and that it’s impossible for AI to ever do <insert whatever is currently being debated here>.

I’ve long believed that there’s a smooth spectrum between not-intelligent and human-intelligent. It’s not a binary yes/no sort of thing. There are basic inert rocks at one end and humans at the other, and everything else gets scattered at various points in between. So I think it’s fine to discuss where exactly on that scale LLMs fall, and to accept the possibility that they’re moving in our direction.

7 points

It’s not linear either. Brains are crazy complex and have sub-cortexes that are more specialized for specific tasks. I really don’t think that LLMs alone can possibly demonstrate advanced intelligence, but I do think one could be a very important cortex for it. There are also different types of intelligence. LLMs are very knowledgeable and have great recall but lack reasoning and a worldview.

3 points

Indeed, and many of the more advanced AI systems currently out there are already using LLMs as just one component. Retrieval-augmented generation, for example, adds a separate “memory” that gets searched and bits inserted into the context of the LLM when it’s answering questions. LLMs have been trained to be able to call external APIs to do the things they’re bad at, like math. The LLM is typically still the central “core” of the system, though; the other stuff is routine sorts of computer activities that we’ve already had a handle on for decades.

IMO it still boils down to a continuum. If there’s an AI system that’s got an LLM in it but also a Wolfram Alpha API and a websearch API and other such “helpers”, then that system should be considered as a whole when asking how “intelligent” it is.
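The retrieval step described above is mundane enough to sketch in a few lines. This is a toy illustration only: the word-overlap scorer, the document list, and the prompt format are all invented here, and the actual LLM call is left out:

```python
# Toy sketch of retrieval-augmented generation (RAG): search a separate
# "memory" of documents, then splice the best hits into the prompt that
# would be handed to the LLM. Everything here is illustrative.

documents = [
    "The Eiffel Tower is 330 metres tall.",
    "Wolfram Alpha exposes an API for maths queries.",
    "LLMs can call external tools for arithmetic.",
]

def retrieve(query, docs, k=2):
    # Toy relevance score: number of lowercase words shared with the query.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query):
    # Insert the retrieved "memory" into the context, RAG-style.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How tall is the Eiffel Tower?"))
```

A real system would swap the scorer for embedding similarity over a vector store, but the shape (retrieve, then stuff the context window) is the same.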

5 points

I find that the people who think they are actually AI are generally the people opposed to them.

People who use them as the tools they are know how limited they are.

3 points

Not being combative or even disagreeing with you - purely out of curiosity, what do you think are the necessary and sufficient conditions of intelligence?

1 point

A worldview simulation it can use as a scratch pad for reasoning. I view reasoning as a set of simulated actions to convert a worldview from state A to state B.

It depends on how you define intelligence, though. Normally people define it as human-like, and I think there are 3 primary subtypes of intelligence needed for cognizance: reasoning, awareness, and knowledge. I think the current generation is figuring out the knowledge type, but it needs to be combined with the other two to be complete.

1 point

Thanks! I’m not clear on what you mean by a worldview simulation as a scratch pad for reasoning. What would be an example of that process at work?

For sure, defining intelligence is non-trivial. What clears the bar of intelligence, and what doesn’t, is not obvious to me. So that’s why I’m engaging here; it sounds like you’ve put a lot of thought into an answer. But I’m not sure I understand your terms.

0 points

I think it’s a big mistake to conclude that because the most basic LLMs are just autocompletes, or because LLMs can hallucinate, what big LLMs do doesn’t constitute “thinking”. No, GPT-4 isn’t conscious, but it very clearly “thinks”.

It’s started to feel to me like current AIs are reasonable recreations of parts of our minds. It’s like they’re our ability to visualize, to verbalize, and to an extent, to reason (at least the way we intuitively reason, not formally), but separated from the “rest” of our thought processes.

3 points

Depends on how you define thinking. I agree, LLMs could be a component of thinking, specifically knowledge and recall.

2 points

Yes, as Linus Torvalds said, humans also think like autocomplete systems.

31 points

Those recent failures only come across as cracks for people who see AI as magic in the first place. What they’re really cracks in is people’s misperceptions about what AI can do.

Recent AI advances are still amazing and world-changing. People have been spoiled by science fiction, though, and are disappointed that it’s not the person-in-a-robot-body kind of AI that they imagined they were being promised. Turns out we don’t need to jump straight to that level to still get dramatic changes to society and the economy out of it.

I get strong “everything is amazing and nobody is happy” vibes from this sort of thing.

7 points

Also interesting is that most people don’t understand the advances it makes possible, so when they hear people saying it’s amazing and then try it, of course they’re going to think it hasn’t lived up to the hype.

The big things are going to completely change how we use computers, especially being able to describe how you want it to lay out a UI and to create custom tools on the fly.

2 points

Exactly, it’s the people who know that are amazed by the subtle intricacies of AI and the implications of it. It’s the people that don’t know saying, “I asked it to write a horror story about a killer clown, and it ended up sounding like Stephen King. What a rip off machine.”

18 points

I hope it collapses in a fire and we can just keep our FOSS local models with incremental improvements; that way both techbros and artbros eat shit.

1 point

Unfortunately for that outcome, brute-forcing with more compute is pretty helpful for now.

2 points

And even if local small-scale models turn out to be optimal, that wouldn’t stop big business from using them. I’m not sure what “it” is being referred to with “I hope it collapses.”

0 points

I was referring to the hype bubble, and therefore the money surrounding it all.

