This is pretty much what I’d expect AI to be best at.
Oh yeah? Can it tilt the board all the way to one corner, then pop the other corner and send the ball flying right to the end?
No, it’s amateur at best.
That’s actually addressed in the article. When they found it trying to cheat, they had to program it not to.
I’m not really surprised; the main challenge of that game is motor control, something any machine can do with more precision than a human.
I agree but also disagree. It’s true that machines can perform fine motor control far more quickly and accurately than humans. But that by itself is often not enough.
This achievement should be somewhat surprising because of Moravec’s paradox: the observation that, opposite to what early AI researchers expected, intelligence and reasoning skills are comparatively easy for a computer to simulate, while sensorimotor skills are in fact incredibly hard. Notice how, for example, chess engines started beating human players in the 90s or so, but we still don’t have a robot that can do something as simple as pick raspberries (because surprise, for a machine picking a raspberry is actually hard as shit).
When the AI can solve one of these I’ll be impressed:
https://piped.video/UA33LOViUfw
You don’t need AI to do that. Seriously, it’s become such a buzzword for cases where a relatively simple algorithm would suffice. Don’t tell me this is harder than the double pendulums or ball-bouncing contraptions tech students have been building for a decade or more.
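To be concrete about what I mean by a “relatively simple algorithm”, something along the lines of this sketch is roughly how those student projects work. Everything here is hypothetical: the waypoints, the gains, and the `read_ball_position()`/`set_tilt()` hardware calls are all made up.

```python
# Rough sketch of a classical (non-AI) controller: follow hand-picked waypoints
# along the printed path with a PD loop on the board tilt.
import time

WAYPOINTS = [(0.05, 0.10), (0.20, 0.10), (0.20, 0.35)]  # traced from the board by hand
KP, KD = 2.0, 0.6   # proportional / derivative gains, tuned by trial and error
DT = 0.02           # control loop period in seconds
TOLERANCE = 0.01    # how close counts as "reached the waypoint"

def control_step(target, pos, prev_error):
    """Return tilt angles (x, y) that nudge the ball toward the target waypoint."""
    error = (target[0] - pos[0], target[1] - pos[1])
    d_error = ((error[0] - prev_error[0]) / DT, (error[1] - prev_error[1]) / DT)
    tilt = (KP * error[0] + KD * d_error[0], KP * error[1] + KD * d_error[1])
    return tilt, error

def run(read_ball_position, set_tilt):
    """Drive the ball waypoint by waypoint; sensing and actuation are injected."""
    prev_error = (0.0, 0.0)
    for target in WAYPOINTS:
        while True:
            x, y = read_ball_position()  # e.g. from an overhead camera
            if abs(target[0] - x) < TOLERANCE and abs(target[1] - y) < TOLERANCE:
                break                    # close enough, move on to the next waypoint
            tilt, prev_error = control_step(target, (x, y), prev_error)
            set_tilt(*tilt)
            time.sleep(DT)
```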
Not needing AI isn’t the point. The point is that AI can do it, and AI doesn’t require a programmer to design and debug a bespoke algorithm to accomplish a task. It would take a human a lot longer than 6 hours to perfect an algorithm to do this.
> Not needing AI isn’t the point. The point is that AI can do it, and AI doesn’t require a programmer to design and debug a bespoke algorithm to accomplish a task.
Maybe we should stop calling everything that solves a problem by brute force “AI”.
A true AI, given the board and the rules, should have understood in less than a picosecond that it needs to avoid the holes, just like a human does. What this AI did was simply to learn the rules, and a human is still faster at that (at this game, at least).
> It would take a human a lot longer than 6 hours to perfect an algorithm to do this.
Man, the game has the solution drawn on it. A human perfects the algorithm in less than 6 seconds and probably solves the game in way less than 6 hours. The point of the game is to follow, not to find, the path.
AI is shorthand for a neural network algorithm that learns to accomplish a task through training instead of being told (very explicitly) how to do it by a human. There’s no point in arguing about how people use language. It’s completely arbitrary. You better get used to people calling neural network programs AI because it’s not going away.
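For what it’s worth, the distinction I’m drawing looks roughly like this in code. This is a bare-bones policy-gradient sketch, not the specific method from the article, and the `env` object standing in for the real maze is purely hypothetical.

```python
# Generic sketch of "learns through training instead of being told the rules":
# plain REINFORCE on a made-up environment. env is assumed to expose
# reset() -> obs and step(action) -> (obs, reward, done).
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim=4, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs):
        # A distribution over actions; nothing about balls, holes or walls in here.
        return torch.distributions.Categorical(logits=self.net(obs))

def train(env, episodes=1000):
    policy = Policy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(episodes):
        obs, done = env.reset(), False
        log_probs, rewards = [], []
        while not done:
            dist = policy(torch.as_tensor(obs, dtype=torch.float32))
            action = dist.sample()
            obs, reward, done = env.step(action.item())
            log_probs.append(dist.log_prob(action))
            rewards.append(reward)
        # Weight the episode's action log-likelihoods by its total return:
        # actions that led to more reward get reinforced, nothing is hard-coded.
        loss = -torch.stack(log_probs).sum() * sum(rewards)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy
```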
> What this AI did was simply to learn the rules… the game has the solution drawn on it… The point of the game is to follow, not to find, the path.
You have a very deep misunderstanding of the complexity of this feat, so I’m not surprised you don’t think it’s impressive. Just follow the path… So easy! -_-
At the start of this task, the AI knew almost nothing. All it knew was that it had “hands” and a directive to get the ball to the end.
It didn’t know any of the following:
- what a ball is and that it rolls
- the fact that lines on the board indicate a safe path
- what gravity is and why the ball moves when the knobs are turned
- that turning the knob farther makes the ball go faster, to a point
- that the dark spots on the board (holes) make the ball drop and make you have to start over
- that the thick lines are walls
- that walls block the ball!
You see what I’m getting at here? It understood nothing! Sure, you can explain the rules to a human and they’d be able to start learning how to play, but the real learning is the hand-eye coordination needed to get the ball to do anything you want.
Even the concept of “explain the rules” is not simple. Sure it’s simple for a brain that evolved over millions of years and uses natural language. But explaining rules to a computer means programming it. You have to hard code all of the rules of the game, and in this case, all of the physics of the game. You have to write the code that explains all of that to a traditional computer before it can even start attempting to play this game.
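Just to make that concrete, here’s a toy sketch of what hard-coding even a fraction of those rules looks like. All the numbers and layouts in it are invented purely for illustration.

```python
# Toy illustration of "explaining the rules" to a traditional program:
# every fact about the world has to be written down by hand.
import math

GRAVITY = 9.81                           # m/s^2
HOLES = [(0.12, 0.30), (0.25, 0.41)]     # hole centres on the board
HOLE_RADIUS = 0.008
WALLS = [((0.0, 0.2), (0.3, 0.2))]       # wall segments, each as (start, end) points

def step(pos, vel, tilt, dt=0.01):
    """Advance the ball one timestep given the board tilt angles (radians)."""
    # Rule: gravity accelerates the ball along the tilted plane.
    acc = (GRAVITY * math.sin(tilt[0]), GRAVITY * math.sin(tilt[1]))
    vel = (vel[0] + acc[0] * dt, vel[1] + acc[1] * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    # Rule: falling into a hole means starting over.
    for hx, hy in HOLES:
        if math.hypot(pos[0] - hx, pos[1] - hy) < HOLE_RADIUS:
            return pos, vel, "fell_in_hole"
    # Rule: walls block the ball (collision maths against WALLS omitted here,
    # but it would have to be written out too).
    return pos, vel, "ok"
```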
This AI needed none of that. It learned everything on the fly!
> A human could… probably solve the game in way less than 6 hours
Ha! It’s clear you’ve never played this game. Even if you could get your first win in 6 hours, you wouldn’t then be able to repeat the win every time thereafter.
This AI solving the game in 6 hours is literally the equivalent of a one-year-old baby learning to play and finishing the maze in 6 hours! That is jaw-droppingly amazing, like the author says!
How are you not impressed?
(All analogies are bad. The baby would never have the attention span or motivation to actually play the game; that’s the one inherent advantage the program has. It does what it’s told. Plus, the AI has perfect motor control right out of the box. It doesn’t know that it’s spinning motors, but its control of them is perfect, while a baby is still learning how to make its muscles do anything at all.)