The Star Trek holodeck is one of the most alluring of sci-fi technologies: you give a few verbal instructions to a computer and boom, you’re on a street in 1940s San Francisco, or wherever you want to be. We may never have holograms you can touch, but the part where a computer can generate any 3D scene it’s asked for is currently being worked on by a small studio in London.
At the Game Developers Conference in San Francisco on Wednesday, Anything World CEO Gordon Midwood asked me what I wanted to see. I said I wanted to see a donkey, and a few seconds later a donkey walked around the screen in front of us. Sure, it kind of ran like a horse, and yes, all it did was meander through a field, but those are just details. The software lived up to its basic promise: I asked for a donkey and a donkey appeared.
For the next demonstration, Midwood took his hands away from the keyboard. “Let’s make an underwater world and add 100 sharks and a dolphin,” he said into a microphone. A few seconds later, I watched a dolphin show up at the wrong party: 100 swimming sharks.
Developers looking to use Anything World as a game development or prototyping tool will incorporate it into an engine like Unity, but as Midwood showed, it can also produce scenes, objects, and creatures on the fly. It was the coolest thing I saw on the GDC show floor, and others have already noticed its potential. Roblox is investigating a deal with the company, and Ubisoft is already using the software for prototyping and for a collaborative project called Rabbids Playground.
How it works
With so much blockchain stuff haunting GDC, the sight of an older tech buzzword was comforting: Anything World uses machine learning algorithms developed in part during a University of London research project that lasted more than a year. Basically, the team has developed automated methods to teach a system to analyze 3D models from sources like Sketchfab and to classify, segment, rank, and (where it makes sense) animate them in ways that look right to people. At present, the system can pull from more than 500,000 models.
Of course, sometimes Anything World gets it wrong: The software once thought a table was a quadruped, and another time it mistook the top of a pineapple for the legs of a spider, which was "scary," Midwood says.
It's still early days (at least compared to Star Trek: The Next Generation, which is set in the 2360s), but even at this rather rough stage it's nice to see an automated learning system linking the 3D models it's been given to what it 'knows' about animal locomotion – I felt strangely proud of my trotting donkey, as if I were somehow responsible for giving it life just by asking for it.
For non-developers, Midwood thinks Anything World has potential in super-accessible game creation tools, or simply as a handy utility. For example, you could use it to create on-the-fly green screen sets while streaming, or treat it like a holodeck computer: put on a VR headset and request a scene to relax in.
Meta (the company formerly known as Facebook) demonstrated something similar last month, but without animated creatures. In response, Anything World released a parody demo. Interpreting what people want at the natural language level is arguably one of the end goals of all software, so it’s no surprise that there’s competition in the “make 3D things appear by asking for it” industry. However, Anything World’s technology looks stronger than Meta’s at the moment. It’s also a fairly small company, with six machine learning experts and nine other people in technical roles working on the tool.
In the future, Anything World plans to release versions with higher-fidelity models and animations – an Unreal Engine version is coming, with plans to use Epic's Quixel models – as well as its own consumer application. At the moment it is available for use with Unity.
Anything World is a long way from a Star Trek computer's understanding of the physical world—I doubt it knows anything about 1940s San Francisco—but just because donkeys walk a bit like horses today doesn't mean they will tomorrow. Midwood won't promise me a holodeck just yet, but he's confident that the system's ability to label and animate 3D models will only get more detailed and complex.