7 points

Cracks? It doesn’t even exist. We figured this out a long time ago.

14 points

They are large LANGUAGE models. It’s no surprise that they can’t solve the mathematical problems in the study. They are trained for text production. We already knew that they were no good at counting things.
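(A quick illustration of why: models operate on subword tokens, not letters, so a word’s spelling is mostly invisible to them. Here’s a minimal sketch using the tiktoken library; the cl100k_base encoding is just one common choice for illustration, not necessarily what any given model uses.)

```python
# Show the subword pieces a tokenizer hands to the model.
# Assumes `pip install tiktoken`; cl100k_base is an illustrative choice.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Howard likes strawberries")
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in tokens]
print(pieces)  # a few multi-letter chunks; the exact split varies by encoding
# The model sees token IDs for these chunks, never a stream of single
# characters, which is why letter-counting questions trip it up.
```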

8 points

“You see this fish? Well, it SUCKS at climbing trees.”

40 points

So do I every time I ask it a slightly complicated programming question.

11 points

And sometimes even really simple ones.

6 points

How many w’s in “Howard likes strawberries”? It would be awesome to know!

2 points

I’d be happy to help! There are 3 "w"s in the string “Howard likes strawberries”.
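(That reply is a parody of a confidently wrong chatbot; the actual count is 2, one w in “Howard” and one in “strawberries”, which a one-liner confirms.)

```python
# Count the w's directly; string methods don't guess.
print("Howard likes strawberries".lower().count("w"))  # -> 2
```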

4 points

So I keep seeing people reference this… And I found it a curious concept that LLMs have problems with this. So I asked them… Several of them…

Outside of this image… Codestral (my default) actually got it correct and didn’t talk itself out of being correct… But that’s no fun, so I asked 5 others at once.

What’s sad is that Dolphin Mixtral is a 26.44GB model…
Gemma 2 is the 5.44GB variant
Gemma 2B is the 1.63GB variant
LLaVa Llama3 is the 5.55 GB variant
Mistral is the 4.11GB variant

So I asked Codestral again because why not! And this time it talked itself out of being correct…

Edit: fixed newline formatting.
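
(If anyone wants to reproduce this, here’s a rough sketch of the experiment using the ollama Python client. This assumes the models were pulled through ollama, which the listed file sizes suggest; the exact model tags below are my guesses, not necessarily the variants I ran.)

```python
# Ask several locally hosted models the same counting question.
# Assumes `pip install ollama` with an ollama server running; the model
# tags are guesses based on the sizes listed above.
import ollama

MODELS = ["dolphin-mixtral", "gemma2", "gemma2:2b", "llava-llama3", "mistral"]
QUESTION = 'How many w\'s are in "Howard likes strawberries"?'

for model in MODELS:
    reply = ollama.chat(model=model, messages=[{"role": "user", "content": QUESTION}])
    print(f"{model}: {reply['message']['content']}")
```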

16 points

Here’s the cycle we’ve gone through multiple times and are currently in:

AI winter (low research funding) -> incremental scientific advancement -> breakthrough in new capabilities as multiple incremental advances to the scientific models build on each other over time (expert systems, LLMs, neural networks, etc.) -> engineering creates new tech products/frameworks/services based on the new science -> hype for the new tech creates sales, economic activity, research funding, subsidies, etc. -> (for LLMs we’re here) people become familiar with the new tech’s capabilities and limitations through use -> the hype spending bubble bursts when the overspend isn’t rewarded with infinite “line goes up” growth or new research breakthroughs -> AI winter -> etc…

9 points

Someone needs to pull the plug on all of that stuff.
