Here’s a nice video of a guy training an AI to do a relatively simple task (driving a Trackmania track) with a very limited set of inputs with low variability, 2-3 outputs, and very hard-set constraints.
Compared to what he does, a rather narrowly defined reinforcement-learning scheme, Microsoft’s AI takes many more inputs and produces many more outputs, and all of its inputs are highly variable (massive amounts of data like dictionaries, images, movies, entire texts, speech, etc., compared to a handful of parameters with values from -1 to 1). It is also a mix of reinforcement, supervised, and unsupervised training, with different subnetworks trained for different things, eventually working together to do the master task they have in mind.
https://www.youtube.com/watch?v=Dw3BZ6O_8LY
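To make the contrast concrete, here is a minimal sketch (not the actual setup from the video) of what such a "narrow" agent looks like: a handful of scalar inputs in [-1, 1], three possible actions, and a simple reward update. The sensor names, action names, and reward rule are all made up for illustration.

```python
import random

# Hypothetical action set: 2-3 outputs, as in the Trackmania example.
ACTIONS = ["steer_left", "steer_right", "accelerate"]

def make_observation():
    # A handful of inputs with values from -1 to 1 (e.g. distance sensors).
    return [random.uniform(-1.0, 1.0) for _ in range(5)]

class TinyAgent:
    """Tabular agent over a coarsely discretized observation."""
    def __init__(self, epsilon=0.1):
        self.q = {}            # (state, action) -> estimated value
        self.epsilon = epsilon # exploration rate

    def discretize(self, obs):
        # Bucket each sensor into {-1, 0, 1} so the value table stays tiny.
        return tuple(-1 if x < -0.33 else (1 if x > 0.33 else 0) for x in obs)

    def act(self, obs):
        state = self.discretize(obs)
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)  # explore
        # Exploit: pick the highest-valued action seen so far.
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, obs, action, reward, lr=0.1):
        key = (self.discretize(obs), action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + lr * (reward - old)

agent = TinyAgent()
for _ in range(100):
    obs = make_observation()
    action = agent.act(obs)
    # Made-up reward: accelerating is "good" when the sensors read clear.
    reward = 1.0 if action == "accelerate" and sum(obs) > 0 else 0.0
    agent.learn(obs, action, reward)
```

The whole state of that agent fits in a small dictionary. The systems Microsoft and co. build replace those five floats with images, text, and audio, and replace the lookup table with many trained subnetworks, which is the gap the comment is pointing at.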
What is shown in the video is what you’d do for a tiny subsystem of the AI that Microsoft, Google, Apple, and the like develop.
Kinda like if you watched a video about “this is what it takes to make the bolt that keeps your wheels on your car” you’d only have seen a fraction of what it takes to make the whole car.