Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale.

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way, on their own, matched their test scores.

28 points

Like any tool, it depends how you use it. I have been learning a lot of math recently and have been chatting with AI to increase my understanding of the concepts. There are times when the textbook shows steps without explaining why they’re happening, and I’ve questioned the AI about it. Sometimes it takes a few tries of asking until you figure out the right question to get the answer you need, but that process of thinking helps you along the way anyway by crystallizing in your brain what exactly it is that you don’t understand.

I have found it to be a very helpful tool in my educational path. However, I am learning things because I want to understand them, not because I have to pass a test, and that determination to want to understand is a big difference. Just getting hints to help you solve the problem might not really help in the long run, but if you’re actually curious about what you’re learning and focus on getting a deeper understanding of why and how something works rather than just getting the right answer, it can be a very useful tool.

16 points

Why are you so confident that the things you are learning from AI are correct? Are you just using it to gather other sources to review by hand or are you trying to have conversations with the AI?

We’ve all seen AI get the correct answer but the show your work part is nonsense, or vice versa. How do you verify what AI outputs to you?

8 points

You check its work. I used it to calculate efficiency in a factory game and went through and made corrections to the inconsistencies I spotted. Always check its work.
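That kind of spot-check is usually just a few lines of code. A minimal sketch in Python; the recipe numbers and the function are invented for illustration, not the game’s actual math:

```python
# Hypothetical sketch: re-deriving a factory ratio instead of trusting the LLM.
# All recipe numbers below are made up for illustration.

def machines_needed(demand_per_sec: float, craft_time_s: float, items_per_craft: int) -> float:
    """Machines required to sustain a target output rate."""
    return demand_per_sec * craft_time_s / items_per_craft

# Say the chatbot claimed 4 machines cover 2 items/sec from a
# 3-second recipe that yields 1 item per craft.
claimed = 4
actual = machines_needed(2.0, 3.0, 1)
print(actual)  # 6.0 -- the claim is short by two machines
```

If the recomputed number disagrees with the chatbot’s, you’ve found the inconsistency without needing to trust either answer on faith.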

3 points

Exactly. It’s a helpful tool but it needs to be used responsibly. Writing it off completely is as bad a take as blindly accepting everything it spits out.

5 points

I use it for explaining things when studying for uni, and I do it like this: if I don’t understand, e.g., a definition, I ask an LLM to explain it, then read the original definition again and see if it makes sense.

This is an informal approach, but if the definition is sufficiently complex, false answers are unlikely to lead to an understanding. Not impossible ofc, so always be wary.

For context: I’m studying computer science, so lots of math and theoretical computer science.

3 points

I’m not at all confident in the answers directly. I’ve gotten plenty of wrong answers from AI and I’ve gotten plenty of correct answers. If anything, it’s just more practice for critical thinking skills: separating what is true from what isn’t.

When it comes to math, though, it’s pretty straightforward. I’m just looking for context on some steps in the problems, maybe reminders of things I learned years ago and have forgotten, that sort of thing. As I said, I’m interested in actually understanding the stuff that I’m learning because I am using it for the things I’m working on, so I’m mainly reading through textbooks and using AI, as well as other sources online, to round out my understanding of the concepts. If I’m getting the right answers and the things I am doing are working, it’s a good indicator I’m on the right path.

It’s not like I’m doing cutting-edge physics or medical research where mistakes could cost lives.

1 point

It’s sort of like saying poppy production overall is pretty negative, but if smart, critical people use it sparingly and apprehensively, opiates could be of great benefit to them.

That’s all well and good, but AI is not being developed to help critical thinkers research slightly more easily; it’s being created to reduce the amount of money companies spend on humans.

Until regulations are in place to guide the development of the technology in useful ways, I don’t know why any of it should be permitted. What’s the rush, anyway?

2 points

I personally use its answers as a jumping-off point to do my own research, or I ask it for sources directly about things and check those out. I frequently use LLMs for learning about topics, but definitely don’t take anything they say at face value.

For a personal example, I use ChatGPT as my personal Japanese tutor. I use it to discuss and break down the nuances of various words or sayings, the names of certain conjugation forms, etc., and it is absolutely not 100% correct, but I can now take the names of things it gives me in native Japanese, which I never would have known, and look them up using other resources. Either it’s correct and I find confirming information, or it’s wrong and I can research further independently or ask it follow-up questions. It’s certainly not as good as a human native speaker, but for $20 a month, and as someone who enjoys doing their own research, I fucking love it.

2 points

Hey, that’s a cool thing to do! I’ll try it. Learning a new language through LLMs sounds cool.

1 point

He is cross-checking.

1 point

I, like the OP, was also studying math from a textbook and using GPT-4 to help clear things up. GPT-4 caught an error in the textbook.

The LLM doesn’t have a theory of mind; it won’t start over and try to explain a concept from a completely new angle, and it mostly just repeats the same stuff over and over. Still, once I have figured something out, I can ask the LLM if my ideas are correct, and it sometimes makes small corrections.

Overall, most of my learning came from the textbook, and talking with the LLM about the concepts I had learned helped cement them in my brain. I didn’t learn a whole lot from the LLM directly, but it was good enough to confirm what I learned from the textbook and sometimes correct mistakes.

1 point

If you didn’t have AI, what would you have done instead?

-1 points

I mean, why are you confident the work in textbooks is correct? Both have been proven unreliable, though I will admit LLMs are much more so.

The way you verify in this instance is actually going through the work yourself after you’ve been shown sources. They are explicitly not saying they take 1+1=3 as law; instead they’re asking how that was reached and working off that explanation to see if it makes sense and learn more.

Math is likely the best subject for this, too. You have undeniable truths in math: a statement is true or it’s false. There are no (meaningful) opinions on how addition works other than the correct one.
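That checkability is mechanical: a claimed answer can be substituted back into the problem. A toy sketch in Python, where the quadratic and the “claimed” roots are invented for illustration:

```python
# Toy sketch: a math claim is either true or false, and substitution settles it.
# The equation x^2 - 5x + 6 = 0 and the claimed answers are made up.

def is_root(x: int) -> bool:
    """Check a claimed solution of x^2 - 5x + 6 = 0 by substituting it back in."""
    return x * x - 5 * x + 6 == 0

print(is_root(3))  # True  -- the claim checks out
print(is_root(4))  # False -- a confidently wrong answer fails instantly
```

A fluent-sounding but wrong answer from an LLM fails this kind of check immediately, which is exactly why math is a comparatively safe domain for it.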

3 points

The problem with this style of verification is that there is no authoritative source. Neither the AI nor you can verify for accuracy, and the AI carries no expectation of being accurate or of being revised.

I don’t see how this is any better than running Google searches on Reddit or other message boards looking for relevant discussions and basing your knowledge on those.

If AI were enabling something new, that might be worth it, but letting someone find slightly less (or more) shitty message board posts 10 percent more efficiently isn’t worth what’s happening. There are countries that are capable of regulating a field as it fills out, so why can’t America? We banned TikTok in under a month, didn’t we?

4 points

Sometimes it leads me wildly astray when I do that, like a really bad tutor… but it is good if you want a refresher and can spot the bullshit on the side. It is good for surfacing things you didn’t know before and can fact-check afterwards.

…but maybe other review papers and textbooks are still better…


Technology

!technology@lemmy.world
