Where Will All The Scans Go?
Everything is being scanned. It's a foundation for AR. But a lot of scans don't seem to have a home.
Am I missing something? Where can I put my scans? I don’t mean just uploading a scan to SketchFab or wherever, where it’s treated as a generic 3D asset. I mean my own little “mini site”, purpose-built for treating the scan as a representation of reality.
With millions getting access to LiDAR on the next generation of iPhones, anyone will be able to create pretty amazing, rich, 3D “photos” of reality.
But where are the easy-to-use/no-code platforms for collecting scans, annotating them, mapping them to locations (if needed), sharing them, and creating time-lapse views?
In Toronto, there’s some amazing graffiti. What if I want to scan it…not just photograph it, but scan the whole alley it’s in? And let’s say I get a group of my friends to join in? Where can we store those scans and ‘pin’ them to a map?
Or say an environmental group wants to scan the banks of a local lake to track erosion, updating the scans each season? Or a historical society wants to scan local buildings before they’re replaced by condos (hey, I live in Toronto; that’s what’s happening)?
I’m not talking here about capturing anchors or point clouds: I’m talking about richer 3D “photos” of the world around us.
The scans I collect could be of places or they could be of objects. But they’d leverage the fascination with, and the deep value in, capturing reality in three dimensions.
Or maybe I’m missing some key tool? Or platform? I’d love to hear what they are and how they’re being used.
Scan Everything
Everything is being scanned:
From avocados to Amsterdam, from your living room to the local park, it has become easier than ever to take 3D ‘snapshots’ of physical things.
This has profound implications for the next major shift in computing: to one in which spatial awareness, machine learning and new devices combine to radically change, well, everything.
Scans will be used to localize our phones, cars and glasses so that augmented reality (AR) content can be precisely placed. These point cloud scans are being uploaded to “AR Clouds” and the objects in them recognized by AI, annotated and cross-indexed.
They will also be used to delete and edit reality.
From Point Clouds to Photogrammetry
Matt Miesnieks says it’s all the same thing: 3D reconstruction.
But one of the words I would have added to Matt’s response was “resolution”. Because there’s a difference between the kinds of large-scale point clouds being collected by Pixel8:
And the detailed photogrammetry you can find when you look for “scans” on SketchFab:
The AR Cloud(s)
A bunch of companies are building AR Clouds. Niantic, Google, Apple, even Tesla. Smaller companies are creating discrete clouds and companies like Pixel8 (noted above) are building toolsets for collaborative clouds.
How they do that will often depend on the use case. Drive a Tesla and you’ll know where its focus lies: on the road ahead (although your car is collecting a ton of additional and often peripheral data which is being uploaded to the ‘cloud’). Reality is edited because of the availability of this cloud: the driver first gets the benefit of being able to see the road ahead, and then… speed limit signs.
But what happens to all of those “richer” scans?
Well, first, they’re being pulled into game engines. Just check out Quixel: their Megascans library is…insane. And it’s used in everything from architectural renders to movies to major games.
But the ground is shifting: scans are no longer the province of individuals or organizations with fancy cameras or gear. Anyone can do a scan, especially if you have an iPad with LiDAR. And this will become even more true when Apple throws a LiDAR on your phone.
So where will all these scans go?
And so, we need platforms for crowd-sourced scans that provide a no-code way for groups to collect, annotate, share, and (where needed) map scans to physical locations.
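To make the idea concrete, here’s a minimal sketch of what the core data model of such a platform might look like: scans with optional map pins and annotations, grouped into a series that can be played back as a time-lapse. Everything here is hypothetical — the class names, fields, and methods are illustrative assumptions, not a real product or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical data model for a crowd-sourced scan platform.
# All names and fields are illustrative assumptions.

@dataclass
class Scan:
    title: str
    file_url: str                           # e.g. a glTF/USDZ asset
    captured_at: datetime
    lat: Optional[float] = None             # optional map pin
    lon: Optional[float] = None
    annotations: list = field(default_factory=list)

@dataclass
class ScanSeries:
    """A group of scans of the same place, ordered for time-lapse viewing."""
    name: str
    scans: list = field(default_factory=list)

    def add(self, scan: Scan) -> None:
        # Keep the series in chronological order for playback.
        self.scans.append(scan)
        self.scans.sort(key=lambda s: s.captured_at)

    def timeline(self) -> list:
        return [f"{s.captured_at.date()}: {s.title}" for s in self.scans]

# Usage: an erosion-tracking series updated each season.
series = ScanSeries("Lake bank - east shore")
series.add(Scan("Autumn survey", "scans/autumn.glb",
                datetime(2020, 10, 1, tzinfo=timezone.utc), 43.65, -79.38))
series.add(Scan("Spring survey", "scans/spring.glb",
                datetime(2020, 4, 1, tzinfo=timezone.utc), 43.65, -79.38))
print(series.timeline())  # chronological, earliest first
```

The point of the sketch is how little structure is actually needed: a scan, a pin, and a timestamp are enough to support mapping, sharing, and seasonal time-lapse views.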
In the meantime, let’s go scan something.