Spatial Computing is coming — it just needs a lot more time in the oven
OK, we may have all gotten a little too excited about spatial computing in 2024. I’ll certainly admit that I did. Whenever I catch a glimpse of the future of computing, I can’t help but get hyped — it’s why I love this job.
But when the cold light of reality hits, you start to realize that it’s not all it’s cooked up to be. And you can see it in everything that’s happening. Meta has stuck rigidly to talking about having fun on its Quest 3 and Quest 3S headsets with games and binge-watching. Apple Vision Pro production has been cut, and the Cupertino crew is reportedly working with Sony to bring PSVR 2 controller support for (yep, you guessed it) games.
So what is it going to take to make spatial computing a thing? Well, as the headline suggests, a lot more time is needed for this to cook. I still believe in the vision of spatial computing, but with a lot of companies trying to twist what it is, let’s make something clear up top.
What actually “is” spatial computing?
Everybody seems to be throwing this term around — from grand visions to small gains. Let me sum it all up in one simple sentence.
Spatial computing means being able to get things done in a mixed reality space. That means no tethering to any other devices, and having the app ecosystem and on-board performance to be productive.
When you keep that in mind, you realize we’re nowhere near that yet — not just in a technological sense, but also because people just aren’t that interested in working this way yet. And, probably unintentionally, Google perfectly explained why in its recent Android XR video reveal.
…Did you hear it? At the moment, devices that are capable of doing this are “episodic products.” It’s just too much of a pain to don a headset to get stuff done versus opening up a laptop to work.
Pair that with the sketchy reliability of typing in thin air and of hand tracking, plus the oversimplified UI that gesture control demands compared to the vastly more nuanced, detailed and complex interactions you can have with a computer, and it can feel like spatial computing is trying to fix a problem no one had.
The “killer app”
In response, you’re seeing headset makers talk about spatial computing in a slightly different way. Now it’s more about projecting your computer screen onto a larger virtual monitor in augmented reality.
Don’t get me wrong, the experience is great. But it’s not the full thing. Plus, if you were to do this using Apple hardware, you could easily spend $5,000 for the whole effect. I’d call this version 0.5 of spatial computing, and the most cost-effective way to get it would be to pick up a pair of the best AR glasses, like the Xreal One.
What CES 2025 can bring
And the explosive rise of smart glasses has made one thing clear — people want mixed reality hardware in something the size of a pair of specs. I predicted that an AR glasses revolution was coming to CES 2024, and I’ll happily admit I was only half right.
The truth is we are moving towards this. There is display technology that doesn’t require giant glass prisms in front of your eyes, Meta Orion showed device-free computing is possible, and Xreal’s Air 2 Ultra demonstrated hand tracking within a pair of specs.
But these all rely on workarounds that keep the computational power out of the actual glasses. We’re still a few years off, but we will definitely be seeing glimpses of the next steps at CES 2025.