I am fully expecting that a few decades from now we will be able to ask an AI for a game that’s X hours long, of Y difficulty, featuring a story with Z, and it will pump out a complete video game to our specifications.
So, I will just use ChatGPT to create some cool character descriptions, Midjourney to draw the characters, and then this to turn them into 3D models.
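As a rough sketch of the first step in that pipeline, assuming the official `openai` Python client with an API key in the environment (the model name and prompt here are purely illustrative); the resulting description would then be pasted into Midjourney by hand, since Midjourney doesn’t offer a public API:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask for a character description written to work as an image prompt.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Write a short, vivid visual description of a grizzled dwarven "
            "blacksmith character for a fantasy RPG, phrased so it works "
            "well as an image-generation prompt."
        ),
    }],
)

print(response.choices[0].message.content)  # paste this into Midjourney
```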
I might be crazy, but I’m wondering if we’ll bypass this in the long run and generate 2D frames of 3D scenes: either having a game be low-poly and grayboxed, with each frame rendered by an AI doing image-to-image to apply a different style, or outright “hallucinating” a game and its mechanics directly to rendered 2D frames.
For example, your game doesn’t have a physics engine, but it does have parameters that guide the game engine’s “dream” of what happens when the player presses the jump button, so the same input reliably produces the same action.
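A minimal sketch of the graybox idea, assuming the `diffusers` library and a Stable Diffusion checkpoint (everything here is illustrative, not an actual engine integration): the engine renders a flat graybox frame, and an image-to-image pass restyles it, with the seed derived deterministically from the frame index so the same game state always “dreams” the same output.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a Stable Diffusion img2img pipeline (checkpoint name is illustrative).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

BASE_SEED = 42  # fixed per save file / session, for reproducible "dreams"

def stylize_frame(graybox_frame: Image.Image, frame_index: int) -> Image.Image:
    """Restyle one grayboxed engine frame into the game's art style."""
    # Deriving the seed from the frame index makes the generation
    # deterministic: the same graybox input always renders the same frame.
    generator = torch.Generator(device="cuda").manual_seed(BASE_SEED + frame_index)
    result = pipe(
        prompt="hand-painted fantasy art style, soft lighting",
        image=graybox_frame,
        strength=0.35,       # low strength preserves the graybox geometry
        guidance_scale=7.5,
        generator=generator,
    )
    return result.images[0]

# Hypothetical engine hooks, just to show where this would sit in a loop:
# frame = stylize_frame(engine.render_graybox(), engine.frame_count)
```

At playable frame rates this is obviously far too slow today; the point is only that low `strength` keeps the underlying geometry stable while the model fills in the style.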
I feel like this is incredible for indie devs, but AAA companies will be the ones who end up using it.
Note that this only generates static models – no skeleton to drive movement, no animations – though I imagine that is also viable with a similar approach.