6 min read

The Amazing Future of 3D Mapping and Contextual Search

by Jack Boland   |   Apr 15, 2014

It’s Sunday in 2017, and you’re navigating a giant, over-lit Kroger trying to complete this week’s grocery list. As you pass by the vague “International” aisle, your phone buzzes with a notification: “You read an enchilada recipe yesterday. Some of the ingredients you need are down that aisle, on the left.”

Part helpful, part creepy, this is the probable future of consumer/product interaction made possible through perceptual computing.

Perceptual computing is a broad and burgeoning field. For now, we’ll discuss it in terms of a machine’s ability to parse and understand physical objects.

Many of us are familiar with perceptual computing. It’s seen a huge boom in popularity thanks to consumer-facing products like the Xbox Kinect/Xbox One. Keeping pace, the tech industry’s heaviest hitters have been pushing development, with Intel funding their Perceptual Computing division and Apple buying out 3D imaging firm PrimeSense. Even Amazon is jumping into the ring, adding infrared scanning and 3D positioning to their upcoming smartphone, according to Boy Genius Report.

Advanced 3D mapping has emerged as the next evolutionary phase. This technology is in the process of a critical transition from bulky, expensive, and stationary to cheap, mobile, and simple enough for consumer use. The next wave of innovation is bringing impressive 3D imaging to phones and tablets.

The race is on to build a customer-ready experience, and a few companies are ahead of the pack.

Who are the big players?

Google ATAP Project Tango

The most recent star in spatial computing, Project Tango, came out of Google’s Advanced Technology and Projects (ATAP) group.

Project Tango is building a smartphone that reads spaces in real time, converting data points into three-dimensional models.

What makes Tango revolutionary is its processing chip. The Android phone houses the Myriad 1 visual processor from Movidius. Imaging processors typically use a lot of power, but the Myriad 1 was built with battery life in mind, resulting in a lean, efficient, and powerful processor. This allows for “always-on” scanning.

Google has asked developers to build apps with Tango’s dev kit and test the limits of what the tech can accomplish. While Mountain View isn’t making any product push yet, I have no doubt that Google’s sights are set on the marketing opportunities that 3D mapping presents.

Given Google's place as a pioneer in smartglasses and perceptual computing, the company is sure to use 3D mapping to create game-changing immersive experiences.

Occipital Structure Sensor

In October 2013, Occipital raised over $1.2 million on Kickstarter for their Structure Sensor. Strap the mind-blowing Structure Sensor to your iPad and capture accurate mesh maps of spaces and structures.

The Structure has its own battery and processor, so you won’t burn up your device’s power trying to capture images.

Right now, the Structure supports four demo apps: Fetch, Ball Physics, Room Capture, and Object Scanner. Let’s focus on the latter two.

Room Capture snags an interior map and creates a model file complete with measurements. Object Scanner does the same for individual objects.

Combine the two, and you have a tool that’s great for shopping for furniture. Even better, this opens the door for brick-and-mortar stores to provide customers with incredible virtual shopping experiences. But we’ll get to that…
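To make that concrete, here’s a rough sketch of the furniture math. This isn’t Occipital’s SDK, just generic geometry over vertices you might export from an object scan and a doorway width you might pull from a room scan; the numbers are stand-ins.

```python
# Hypothetical sketch: checking whether a scanned object fits a scanned space.
# Assumes mesh vertices (in meters) exported from an Object Scanner-style capture
# and a doorway width measured from a Room Capture-style model.
import numpy as np

def bounding_box_dims(vertices: np.ndarray) -> np.ndarray:
    """Axis-aligned width/depth/height of a scanned object from an (N, 3) vertex array."""
    return vertices.max(axis=0) - vertices.min(axis=0)

def fits_through(object_vertices: np.ndarray, doorway_width_m: float) -> bool:
    """Rough check: does the object's smallest dimension clear the doorway width?"""
    return float(np.sort(bounding_box_dims(object_vertices))[0]) <= doorway_width_m

# Stand-in data: a couch roughly 2.1 m x 0.9 m x 0.8 m
couch = np.random.rand(5000, 3) * np.array([2.1, 0.9, 0.8])
print(fits_through(couch, doorway_width_m=0.86))  # True: the 0.8 m side clears the door
```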

Mobile sensors like the Structure could open the world up for 3D exploration that surpasses Google Street View's wildest aspirations. Occipital put this idea to the test by demoing the Structure Sensor at Ripley's Believe It or Not.

LazeeEye

LazeeEye is still funding its Kickstarter campaign. It also looks a bit like a laser-guided toothbrush.

Still, the LazeeEye is making some enticing promises against the competition.

Available 3D scanners don’t come cheap, and they certainly don’t attach to your phone. The aforementioned Structure Sensor is an immodest $350 and only attaches to new iPads. Backers can get the LazeeEye beta for as low as $50 unassembled.

LazeeEye isn’t reinventing the smartphone, either. Project Tango is progressive, but it would require you to buy a new phone. LazeeEye employs existing hardware and should work with a variety of devices.

Matterport

Matterport is most worth mentioning for the company’s connection to Project Tango, which uses Matterport’s software stack.

Though the actual scanning camera is large and costly, it generates extremely high-quality, realistic interior models within minutes.

The company is mobile- and retail-friendly. Their site even features interactive demos of potential uses in retail, construction, hospitality, and other industries.

Take a look at Matterport's impressive demo, in which they model TechCrunch HQ.

Applying the Technology

3D mapping in mobile devices is at least as significant as accelerometers and geo-location. This tech presents an array of exciting possibilities for everyday consumers.

Gaming, virtual reality, facial recognition, 3D printing – it's obvious how affordable, mobile modeling could revolutionize each of these industries.

What really excites me, though, is the potential for deeper human-technology interaction in search and marketing.

Contextual Search

By granting phones spatial awareness, we’re edging closer to giving them the ability to understand the context of our queries and voice commands. Accurate interpretation of context is the Holy Grail for search engines.

Search engines want to give users relevant results. Right now, algorithms attempt to return highly specific information based not just on your query, but also on your previous searches, browsing history, and geo-location. In the near future, search may be able to take into account your proximity to specific items, not just brick-and-mortar location points. Based on your previous visits, your contacts, or your calendar appointments, search may return specific directions through buildings, not just to street addresses.
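As a thought experiment, here’s what proximity-as-a-signal could look like. This is not how Google (or anyone) actually ranks results; the positions, weights, and scoring below are invented purely for illustration.

```python
# Hypothetical sketch: blending ordinary text relevance with indoor proximity.
from dataclasses import dataclass
from math import dist

@dataclass
class IndexedItem:
    name: str
    position: tuple          # (x, y) in meters on a store's 3D-derived floor map
    text_relevance: float    # 0..1 score from the usual query/history signals

def contextual_score(item: IndexedItem, user_position: tuple,
                     proximity_weight: float = 0.3) -> float:
    """Boost items physically close to the user; the boost decays with distance."""
    proximity = 1.0 / (1.0 + dist(item.position, user_position))
    return (1 - proximity_weight) * item.text_relevance + proximity_weight * proximity

# "enchilada ingredients" while standing near the International aisle
results = [
    IndexedItem("tortillas, aisle 7", (2.0, 1.5), 0.9),
    IndexedItem("tortillas, warehouse listing", (40.0, 12.0), 0.9),
]
results.sort(key=lambda i: contextual_score(i, user_position=(1.0, 1.0)), reverse=True)
print([r.name for r in results])  # the nearby shelf ranks first
```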

However, this would require a vast new index of shape, color, and product data, as well as an updated algo. I don't see Google slowing down on their data collection any time soon, though.

Precision Marketing

Re-imagine the grocery store scenario. Based on your digital grocery list, your phone pulls a 3D map of the store and creates a path of least resistance to each product. Along the way, you get targeted suggestions and offers for products on your list, or recipe suggestions for things you pick up.
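Here’s a back-of-the-napkin sketch of that “path of least resistance,” assuming the store’s 3D map has already been flattened into per-item coordinates. The layout, item positions, and greedy nearest-neighbor routing are purely illustrative.

```python
# Hypothetical sketch: ordering a grocery list into a walking route with a greedy
# nearest-neighbor pass over item coordinates taken from a scanned floor plan.
from math import dist

item_positions = {            # (x, y) in meters, made up for this example
    "tortillas": (12.0, 4.0),
    "enchilada sauce": (13.0, 5.5),
    "cheddar": (30.0, 2.0),
    "onions": (3.0, 8.0),
}

def shopping_route(grocery_list, positions, start=(0.0, 0.0)):
    """Greedy route: always walk to the closest remaining item next."""
    remaining, route, here = set(grocery_list), [], start
    while remaining:
        nearest = min(remaining, key=lambda item: dist(here, positions[item]))
        route.append(nearest)
        here = positions[nearest]
        remaining.remove(nearest)
    return route

print(shopping_route(["cheddar", "tortillas", "onions", "enchilada sauce"], item_positions))
# ['onions', 'tortillas', 'enchilada sauce', 'cheddar']
```

A greedy ordering isn’t optimal in general (that’s the traveling salesman problem), but it’s the kind of cheap heuristic a phone could run instantly on a store-sized map.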

Producers and shops could handily scan their promoted items and tag them to notify targeted shoppers nearby. One app, FlyBy, is already using Project Tango's tech to accomplish something similar, though from a social angle.
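The tagging half of that could be as simple as a geofence check against the shopper’s list. The radius, positions, and offer data below are entirely made up; this just illustrates the shape of the idea.

```python
# Hypothetical sketch: nudging opted-in shoppers when a promoted item on their
# list is a few meters away on the store's scanned map.
from math import dist

PROMOTED_ITEMS = {"enchilada sauce": {"position": (13.0, 5.5), "offer": "2 for $3"}}

def proximity_offers(shopper_position, shopper_list, radius_m=5.0):
    """Return offers for promoted items on the shopper's list within the trigger radius."""
    return [
        f"{name}: {info['offer']}"
        for name, info in PROMOTED_ITEMS.items()
        if name in shopper_list and dist(shopper_position, info["position"]) <= radius_m
    ]

print(proximity_offers((12.0, 4.0), {"tortillas", "enchilada sauce"}))
# ['enchilada sauce: 2 for $3']
```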

Yes, this is insanely invasive. It's also not possible with current camera systems if your phone is in your pocket. But as more people adopt wearable tech, or we find ways to make our devices smaller and more discreet, it's likely that users will become more comfortable with marketing as part of the human-technology experience.

A lot needs to happen before this not-so-distant future is possible. That being said, the right people (read: tech companies with infinite money) are in the driver's seat. All the pieces are in place for computer vision to explode, and I couldn't be more excited.


Jack Boland