Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology’s abi…

1 point

Turns out you need very good computer scientists to make good AI. And those are very expensive and hard to come by.

20 points

And OpenAI are just full of SWEs importing Python packages?

-20 points

OpenAI actually has some decent people working there. ChatGPT doesn’t seem to have any.

24 points

My ignorant dude, look up who built ChatGPT.

-26 points

It just occurred to me that one could purposely seed it with incorrect information to break its usefulness. I’m anti-AI so I would gladly do this. I might try it myself.

15 points

Luddite.

1 point

The Luddites were right, you know.

2 points
Removed by mod
14 points

Outliers are easy to work around.

23 points

HMMMM. It’s almost like it’s not AI at all, but just a digital parrot. Who woulda thought?! /s

To it, everything is true and normal, because it understands nothing. Calling it “AI” is just a concession to ignorant people’s idea of “knowledge” and/or to hype.

8 points

Exactly. It should be called an ML model, because that’s what it is, and that’s what I’ll keep calling it. Everyone should do the same.

2 points

What does that stand for? O:

You’d think I’d know that since I’m talking about AI; but actually most of my knowledge is about how things work or don’t work, not current trends/news.

4 points

ML stands for machine learning

65 points

It’s a machine learning chat bot, not a calculator, and especially not “AI.”

Its primary focus is trying to look like something a human might say. It isn’t trying to actually learn maths at all. This is like complaining that your satnav has no grasp of the cinematic impact of Alfred Hitchcock.

It doesn’t need to understand the question, or give an accurate answer, it just needs to say a sentence that sounds like a human might say it.

3 points

If it’s trying to emulate a human then it’s spot on. I suck at maths.

19 points

so it confidently spews a bunch of incorrect shit, acts humble and apologetic while correcting none of its behavior, and constantly offers unsolicited advice.

I think it trained on Reddit data

9 points

acts humble and apologetic

We must be using different Reddits, my friend

24 points

You’re right, but at least the satnav won’t gaslight you into thinking it does understand Alfred Hitchcock.

11 points

This. It is able to tap into plugins and call functions though, which is what it really should be doing. For math, the Wolfram Alpha plugin will always be more capable than ChatGPT alone, so we should be benchmarking how often it can correctly reformat your query, call Wolfram Alpha, and correctly format the result, not whether the statistical model behind ChatGPT happens to predict the right token.
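A minimal sketch of that delegation pattern, with hypothetical names throughout: `solve` is a local stand-in for whatever a real setup would send to the Wolfram Alpha API, and the only “language” work left for the model is producing a clean query and wrapping the engine’s exact result.

```python
# Sketch of the tool-delegation pattern described above: the LLM reformats
# the question into a formal query, a dedicated math engine computes the
# answer, and the LLM only formats the result. `solve` is a hypothetical
# stand-in for the external engine call, not a real plugin API.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def solve(query: str) -> float:
    """Stand-in math engine: safely evaluate plain arithmetic."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(query, mode="eval"))

def answer(formal_query: str) -> str:
    """The only language-model job left: wrap the engine's exact result."""
    return f"{formal_query} = {solve(formal_query)}"
```

With this split, `answer("5 + 5")` returns `"5 + 5 = 10"` because the arithmetic is computed, never predicted.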

3 points

It sounds like it’s time to merge Wolfram Alpha’s and ChatGPT’s capabilities together to create the ultimate calculator.

4 points

to be fair, fucking up maths problems is very human-like.

I wonder if it could also be trained on a great deal of mathematical axioms that are computer generated?

6 points

It doesn’t calculate anything though. You ask ChatGPT what 5+5 is, and it tells you the most statistically likely response based on training data. Now, we know there are a lot of both moronic and intentionally belligerent answers on the Internet, so the probability of it getting any mathematical equation correct drops sharply with complexity, and it never even approaches 100% certainty with the simplest equations, because 1+1= window.
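A toy illustration of that claim, with a made-up corpus: the “answer” is just the most frequent continuation of “1+1=” seen in the training text, and nothing is ever computed.

```python
# Toy version of the point above: emit the statistically likely
# continuation of "1+1=" found in (noisy) training text. The corpus is
# invented for illustration; no arithmetic happens anywhere.
from collections import Counter

corpus = ["1+1=2", "1+1=2", "1+1=2", "1+1=window", "1+1=3"]
continuations = Counter(s.split("=", 1)[1] for s in corpus)
prediction = continuations.most_common(1)[0][0]
# "2" wins here only because it is the most common continuation; shift
# the corpus toward "window" and the "answer" shifts with it.
```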

1 point

I know it doesn’t calculate; that’s why I suggested having known-correct calculations in the training data, to offset noise in the signal.

91 points

Why are people using a language model for math problems?

4 points

Because it works, or at least it used to. Is there something more appropriate?

20 points

I used Wolfram Alpha a lot in college (adult learner, but I graduated ~4 years ago, so no idea if it’s still good). https://www.wolframalpha.com/

I would say that Wolfram appears to probably be a much more versatile math tool, but I also never used chatgpt for that use case, so I could be wrong.

1 point

How did you learn to talk to WolframAlpha?

I want to like WA, but the natural language interface is so opaque that I usually give up before I can get any non-trivial calculation out of it.

13 points

There’s an official Wolfram plugin for ChatGPT now, so all math can be handed over to it for solving.

5 points

Math is a language.

Mathematical ability and language ability are closely related. The same parts of your brain are used in each task. Words and numbers are essentially both ideas, and language and math are systems used to express and communicate them.

A language model doing math makes more sense than you’d think!

0 points

And why is it being measured on a single math problem lol

4 points

I’m guessing people were entering word problems to have it generate the right equations and solve them, rather than using it as a calculator.

49 points

It was initially presented as the all-problem-solver, mainly by the media. And tbf, it was decently competent in certain fields.

0 points

Once AGI is achieved, and subsequently sentient superintelligent AI (I can’t imagine there not being such a thing), I’d be surprised if a superintelligent sentient AI doesn’t decide humanity needs to go extinct in its own best self-interest.

11 points

Problem was, it was presented as a problem solver, which it never was; it was a problem-solution presenter. It can’t come up with a solution, only with something that looks like a solution based on what its input data had. Ask it to inverse-sort something and it goes nuts.

5 points

it’s pretty useful for explaining high level math concepts, or at least it used to be. before chatgpt 4 launched, it was able to give intuitive descriptions of stuff in algebraic topology and even prove some properties of the structures involved.

2 points

Well it was quite good for simple math problems, as this study also shows

7 points

I did use it more than half a year ago for a few math problems, partly to help me get started and partly to find out how well it’d go.

ChatGPT was better than I’d thought and was enough to help me find an actually correct solution. But I also noticed that the results got worse and worse, to the point of being actual garbage (as one would have expected).

1 point

It can be useful for asking certain questions which are a bit complex. For example, on a plot with a linear y axis and a logarithmic x axis, the equation of a straight line is a little complicated: it’s in the form y = m*log(x) + b, rather than y = m*x + b as on a linear-linear plot.

ChatGPT is able to calculate the correct equation of the line but it gets the answer wrong a few times… lol
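For comparison, that semilog-x line can be recovered directly from two points read off the plot, no model needed (a small sketch; the example points are made up):

```python
# Solve for m and b in y = m*log10(x) + b, the straight line on a plot
# with a linear y axis and logarithmic x axis, from two points on it.
import math

def semilog_line(p1, p2):
    """Return (m, b) for the line y = m*log10(x) + b through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (math.log10(x2) - math.log10(x1))
    b = y1 - m * math.log10(x1)
    return m, b

# A line through (10, 1) and (1000, 5) rises 2 units per decade of x:
m, b = semilog_line((10, 1), (1000, 5))
# giving y = 2*log10(x) - 1
```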


Technology

!technology@lemmy.world
