6 points

It’s a computer that understands my words and can reply, even complete tasks upon request, never mind the result. To me that’s pretty groundbreaking.

17 points

That is exactly what it doesn’t do. There is no “understanding”, and that is exactly the problem. It generates output similar to what it has already seen in the dataset it was fed, output that might correlate with your input.

24 points

It’s a probabilistic network that generates a response based on your input.

No understanding required.

6 points

Ask it to write code that replaces every occurrence of “me” in every file name in a folder with “us”, excluding occurrences that are part of a word (so “medium” doesn’t become “usdium”), and it will give you code that does exactly that.
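For a sense of what that looks like, here’s a minimal Python sketch of such a script; the folder path is a made-up placeholder, and the regex word boundary `\b` is what keeps “medium” intact:

```python
import re
from pathlib import Path

folder = Path("some_folder")  # hypothetical path, not from the thread

for path in folder.iterdir():
    if path.is_file():
        # \bme\b matches "me" only as a whole word, so "medium" is untouched
        new_name = re.sub(r"\bme\b", "us", path.name)
        if new_name != path.name:
            path.rename(path.with_name(new_name))
```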

You can ask it to write code that runs a heat simulation in a plate of aluminum with one side heated and the other cooled. It will get there with some help. It works. That’s absolutely fucking crazy.
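For scale, a simulation like that fits in a short script. This is a rough sketch using an explicit finite-difference scheme; the plate size, grid resolution, and temperatures are all illustrative assumptions:

```python
import numpy as np

alpha = 9.7e-5             # thermal diffusivity of aluminum, m^2/s
n = 50                     # grid points per side (assumed)
dx = 0.10 / n              # assume a 10 cm square plate
dt = 0.2 * dx**2 / alpha   # below the explicit stability limit 0.25*dx**2/alpha

T = np.full((n, n), 20.0)  # plate starts at room temperature, deg C

for _ in range(5000):
    T[:, 0] = 100.0        # heated edge held at 100 deg C
    T[:, -1] = 0.0         # cooled edge held at 0 deg C
    # five-point Laplacian on the interior, explicit Euler step
    lap = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
           - 4.0 * T[1:-1, 1:-1]) / dx**2
    T[1:-1, 1:-1] += alpha * dt * lap

print(T[n // 2])           # temperature profile across the middle row
```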

4 points

Ask it to finish writing code that fetches a permission and it will make the request with a code that doesn’t exist. Ask it to implement an SNS API invocation and it’ll make up calls that don’t exist.

Regurgitating code that someone else wrote for an aluminum simulation isn’t the flex you think it is: that’s just an untrustworthy search engine, not a thinking machine.

6 points

Maybe. That really depends on whether that task, or a very similar one, exists in sufficient quantity in its training set. Basically, you could get essentially the same result by searching online for code examples; the LLM might just get you there a little faster (and probably introduce some errors along the way).

An LLM can only generate text that exists in its training data. That’s a pretty important limitation, and one with all kinds of copyright issues attached (e.g. in most cases I can’t just copy a code example from GitHub).

-1 points

Yet it can outperform humans on some tests involving logic. It will never be perfect, but that implies you can test its IQ.

9 points
  1. Not consistently and not across truly logical tests. They abjectly fail at abstract reasoning. They do well only in very specific cases.
  2. IQ is an objectively awful measure of human intelligence. Why would it be useful for artificial intelligence?
  3. For these tests that are so centered around specific facts: of course a model that has had the entirety of the Internet encoded into it has the answers. The shocking thing is that the model is so lossy that it doesn’t ace the test.
4 points

“Test its IQ”. The fact that you think IQ is a useful test for intelligence tells me everything I need to know.

15 points

It’s a probabilistic network that generates a response based on your input.

Same

