If you’ve ever thought to yourself “This would be a great multiplayer map” (and I can’t have been the only one), then your time has come.
New AI world models allow users to generate playable worlds from image prompts, letting you turn your local soft play center into a platforming game, or base a sci-fi shooter on your gym. We’ve even seen Google recreate Doom in real-time using AI image generation.
OK, so it’s still early days, but if you’re curious, examples have been emerging over the last couple of months, including a nifty walkthrough of a diffusion world model trained on Counter-Strike: Global Offensive.
AI could generate a game level in real-time
"Ever wanted to play Counter-Strike in a neural network? These videos show people playing (with keyboard & mouse) in DIAMOND’s diffusion world model, trained to simulate the game Counter-Strike: Global Offensive. Download and play it yourself → https://t.co/vLmGsPlaJp" (October 11, 2024)
The new tech leans on an image-generation pipeline far swifter than what was previously possible, potentially allowing players to walk through worlds as they’re generated in real-time.
Naturally, that takes a lot of compute power, so you’d likely need a beefy rig to enjoy it, and there are plenty of shortcomings, too. Compared with hand-built maps, expect inferior textures and bizarre geometry (more than a few corridors ending in dead ends), and at present the model can only hold its picture of the world in memory for a short period.
That means you could walk through a door, turn around, and find the door gone. On the other hand, that impermanence could feel dream-like, closer to Inception’s sense of never quite knowing how you arrived where you are, except here you can watch the world collapse in on itself firsthand.
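To make the disappearing-door example concrete, here’s a minimal, hypothetical Python sketch of how such a real-time loop tends to work: the model generates each new frame from the player’s input plus a short rolling window of recent frames, so anything that slips out of that window is simply forgotten. None of the names below (DiffusionWorldModel, sample_next_frame, CONTEXT_FRAMES) come from DIAMOND itself; this is an illustration of the general idea, not the project’s actual code.

```python
# Sketch of an autoregressive diffusion world model loop (hypothetical names).
from collections import deque
import numpy as np

CONTEXT_FRAMES = 16          # how many past frames the model can "remember"
FRAME_SHAPE = (64, 64, 3)    # low-res frames keep each generation step cheap

class DiffusionWorldModel:
    """Stand-in for a trained diffusion model that produces the next frame."""

    def sample_next_frame(self, past_frames, action):
        # A real model would run several denoising steps conditioned on the
        # recent frames and the player's action; here we just return noise.
        return np.random.rand(*FRAME_SHAPE).astype(np.float32)

def play(model, actions):
    # The rolling context is why geometry behind you can vanish: once a door
    # falls out of this deque, the model has no record that it ever existed.
    context = deque(maxlen=CONTEXT_FRAMES)
    context.append(np.zeros(FRAME_SHAPE, dtype=np.float32))  # blank first frame

    for action in actions:
        frame = model.sample_next_frame(list(context), action)
        context.append(frame)
        yield frame  # this is what gets drawn to the screen each tick

if __name__ == "__main__":
    model = DiffusionWorldModel()
    for i, frame in enumerate(play(model, ["forward"] * 32)):
        print(f"frame {i}: shape={frame.shape}")
```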
It’s a fascinating concept, and it’ll be interesting to see if any developers start to weave diffusion world models into their projects.