Unreal, Semantic Mapping and Augmented Reality
Pondering what the future of AR might look like, Unreal style
Unreal 5 is cool. Way cool. First, if you haven’t seen it, have a look.
Sure, it seems like it’s mostly tailored to creating big $100M game titles. But I have this theory that it also provides a sneak peek at what the future of AR will look like:
Once you get past how stunning the graphics will look on your next-generation PlayStation, the Unreal 5 demo reveals that it’s solving a few insanely difficult problems.
And how they SOLVE those problems has massive implications for augmented reality.
Adapting Digital Animations
There was a micro-discussion on Twitter about how Unreal 5 provides hints of how AR could work:
(It was part of a larger thread on the pricing models for Unity vs Unreal).
I’ve been thinking more about this. And I ran across this example of character animations generated by a model trained using TensorFlow:
"We present a deep learning framework to interactively synthesize such animations in high quality, both from unstructured motion data and without any manual labeling," states the abstract. "We introduce the concept of local motion phases, and show our system being able to produce various motion skills, such as ball dribbling and professional maneuvers in basketball plays, shooting, catching, avoidance, multiple locomotion modes as well as different character and object interactions, all generated under a unified framework."
Semantic Understanding
Now imagine the above, but where the basketball player isn’t just responding to other virtual objects (the ball, the other player). Imagine it responding to physical objects.
You start to see this in what Unity is doing with MARS. The Wallace and Gromit AR experience leverages these capabilities:
Lead creative and studio manager Will Humphrey told Unity: “Unity MARS has been the toolkit that has allowed us to realize a new horizon, a shift in the potential of immersive experiences by enabling them to become truly dynamic. Put simply, Unity MARS is adding intelligence to AR.”
The next Holy Grail for spatial computing is the ability to semantically map physical space. Not just to tag “this is a table,” but to tag everything, and to understand the semantic relationships between those objects (“this is a pedestrian, this is a crosswalk”).
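What would such a semantic map even look like to a program? Here’s a toy sketch in Python: objects with semantic labels and positions, plus typed relationships between them. Every class name and relation here is a hypothetical illustration, not any shipping API.

```python
# Toy sketch of a semantic map: labeled objects plus typed relationships.
# All names here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    label: str       # semantic class, e.g. "pedestrian"
    position: tuple  # world-space position (x, y, z) in meters

@dataclass
class SemanticMap:
    objects: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (subject, predicate, object)

    def add(self, name, label, position):
        self.objects[name] = SceneObject(label, position)

    def relate(self, subject, predicate, obj):
        self.relations.append((subject, predicate, obj))

scene = SemanticMap()
scene.add("ped1", "pedestrian", (2.0, 0.0, 5.0))
scene.add("cross1", "crosswalk", (2.5, 0.0, 6.0))
scene.relate("ped1", "approaching", "cross1")  # the relationship, not just the tags
```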
Once we can map spaces and clearly understand the objects within them, we will be able to do…well, some really, really cool things. Power up your digital objects with the type of AI seen in the basketball video above and they become really, really smart.
And then tap into systems like what Unreal offers for animations:
The Unreal team showed off another feature that I could see being adapted for AR: programmatic animation. It may seem like a tiny thing, but watch how the character puts her hand on the door.
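Under the hood, contact animation like that is usually driven by inverse kinematics: given a target point (say, a door handle), solve backwards for the joint angles that put the hand there. Here’s a minimal two-bone IK solver using the law of cosines. It’s the textbook technique, not Unreal’s actual implementation.

```python
# Minimal 2D two-bone IK (law of cosines): given shoulder position, arm
# segment lengths, and a target, solve for shoulder and elbow angles.
# A generic illustration, not Unreal's implementation.
import math

def two_bone_ik(shoulder, target, upper_len, lower_len):
    dx, dy = target[0] - shoulder[0], target[1] - shoulder[1]
    # Clamp distance to the arm's reach (and away from zero) so acos stays valid.
    dist = min(max(math.hypot(dx, dy), 1e-6), upper_len + lower_len - 1e-6)

    # Law of cosines gives the interior angles of the shoulder-elbow-wrist triangle.
    cos_elbow = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow_angle = math.acos(max(-1.0, min(1.0, cos_elbow)))

    cos_shoulder = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder_offset = math.acos(max(-1.0, min(1.0, cos_shoulder)))

    base_angle = math.atan2(dy, dx)
    return base_angle + shoulder_offset, elbow_angle  # shoulder angle, elbow interior angle

# Example: reach for a door handle 0.7 m away with a 0.4 m + 0.4 m arm.
print(two_bone_ik((0.0, 0.0), (0.7, 0.1), 0.4, 0.4))
```

Production systems layer blending, constraints, and full-body solvers on top, but the core math really is this small.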
Reality and AR will merge, and it will be really, crazy-level amazing.