How augmented reality is mapped and made varies, and as the technology matures the techniques will keep changing over time and from device to device. Because of the promise it holds for the future, we are looking at "Inside-Out Tracking," which is being pioneered by the augmented reality and virtual reality industries alike, two industries that have far more in common than they have differences. This form of tracking space in real time may be one of the keys that catalyzes 3D media and computing toward ubiquity.

Augmented Reality allows us to engage our specific surroundings in whatever form they take. The world is our canvas. Mapping relies on simultaneous localization and mapping (SLAM), more commonly associated with robotics, which constructs a map of an unknown environment while simultaneously (hence the name) tracking the agent's location within it. Headsets that map "inside-out" are self-contained, meaning the computational problem is solved on the same device the user mounts on their head. This is different from Vive's Lighthouse base stations and Oculus's external cameras, which track and map from what one must consider "outside-in." This accomplishment of mapping through AR connects us to the physical world without hardwiring or digitizing it.

I always felt that the barrier standing in the way of my bottle of beer telling me its temperature, its OUE (ounces until empty), or all the information I might find on the brand's website, simply a tap-of-the-label away, was that the bottle was not wired. There are no electronic components in my bottle of beer: no nodes, no copper, no semiconductors or metalloids of any type, nada. And until there is computational ability and an IP address connected directly to my brew, I will not have these instant insights. The way around this comes from robotics and computer vision: sensors such as ultrasound, Light Detection and Ranging (LiDAR), and cameras feed equations that (for the most part) resolve what looks like a chicken-and-egg problem, since you need a map to locate yourself and you need to know where you are to build that map. As the map evolves, the agent continues to locate itself and further map the area.
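To make that loop concrete, here is a toy sketch in Python of an agent refining its own position and its map of a few landmarks at the same time. The noise levels, blend factors, and landmark positions are all made up for illustration; this is nowhere near a production tracker, just the shape of the idea.

```python
import numpy as np

np.random.seed(0)

true_landmarks = np.array([[2.0, 1.0], [4.0, -1.0], [6.0, 2.0]])  # unknown to the agent
true_pose = np.array([0.0, 0.0])

est_pose = np.array([0.0, 0.0])   # the agent's belief about where it is
est_map = {}                      # the map it builds as it goes

for step in range(5):
    # 1. Move: predict the new pose from noisy odometry.
    motion = np.array([1.0, 0.2])
    true_pose = true_pose + motion
    est_pose = est_pose + motion + np.random.normal(0, 0.05, 2)

    # 2. Sense: observe each landmark relative to the agent, with noise.
    for i, lm in enumerate(true_landmarks):
        relative = lm - true_pose + np.random.normal(0, 0.05, 2)
        observed_world = est_pose + relative   # where the landmark appears to be

        if i not in est_map:
            est_map[i] = observed_world        # first sighting: add it to the map
        else:
            # 3. Update: blend the map toward the new observation, and nudge the
            #    pose so it stays consistent with the map already built.
            innovation = observed_world - est_map[i]
            est_map[i] = est_map[i] + 0.5 * innovation
            est_pose = est_pose - 0.2 * innovation

print("estimated pose:", est_pose, "true pose:", true_pose)
for i, lm in est_map.items():
    print(f"landmark {i}: estimated {lm}, true {true_landmarks[i]}")
```

Every pass through the loop uses the current map to correct the pose and the current pose to correct the map, which is really all the "simultaneous" in SLAM promises.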

 

LIDAR – Light Detection and Ranging

Inside-out tracking is probably already in your home. If you have an optical mouse (one that uses a little red light instead of that tiny rubber ball that would eventually collect a bunch of dust and need to be cleaned), then you are already familiar with this technology!
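If you're curious what that little red light is actually doing, here's a rough Python sketch of the idea: compare two consecutive snapshots of the surface and find the pixel shift that lines them up best. The "desk texture" is invented, and a real mouse does this in dedicated silicon with sub-pixel precision, so treat this as the flavor of the trick rather than the real thing.

```python
import numpy as np

def estimate_shift(prev_frame, next_frame, max_shift=3):
    """Return the (dx, dy) displacement of next_frame relative to prev_frame."""
    best_shift, best_score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Shift the old frame by (dx, dy) and see how well it matches the new one.
            candidate = np.roll(np.roll(prev_frame, -dy, axis=0), -dx, axis=1)
            score = np.sum((candidate - next_frame) ** 2)
            if score < best_score:
                best_score, best_shift = score, (dx, dy)
    return best_shift

rng = np.random.default_rng(1)
surface = rng.random((32, 32))        # a patch of desk texture
frame_a = surface[4:20, 4:20]         # what the sensor saw a moment ago
frame_b = surface[6:22, 5:21]         # what it sees now: moved 2 down, 1 right

print(estimate_shift(frame_a, frame_b))   # -> (1, 2)
```

Do that thousands of times per second and you have a sensor that tracks its own motion purely by looking at the world beneath it, which is inside-out tracking in miniature.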

One of the best methods to date is known as Dense Tracking and Mapping in Real-Time (DTAM), research out of the Department of Computing at Imperial College London and the kind of work that feeds into headsets like Microsoft's HoloLens. The system is designed for real-time 3D reconstruction and camera tracking, and it relies on dense, every-pixel methods rather than on sparse feature tracking. Detailed, textured depth maps are estimated at selected keyframes from a hand-held RGB camera, and a globally spatially regularized energy functional is minimized within a non-convex optimization framework. If you're interested in the technical details of this method, the research is available in full from the Department of Computing, Imperial College London.
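To give a feel for what "dense, every-pixel" and "spatially regularized energy" mean, here is a rough numpy sketch of that style of energy function. The cost volume, weights, and image size are invented for illustration; this is not the paper's actual pipeline, just the general shape of a data term plus a smoothness term evaluated for every pixel.

```python
import numpy as np

def dense_energy(depth, cost_volume, depth_samples, lam=0.5):
    """Evaluate a data-term-plus-smoothness energy for a candidate depth map.

    depth         : (H, W) candidate depth for every pixel
    cost_volume   : (H, W, D) photometric error of each pixel at D sampled depths
    depth_samples : (D,) the depth each cost-volume slice corresponds to
    """
    # Data term: how photometrically consistent each pixel's depth is,
    # looked up from the nearest sampled depth in the cost volume.
    idx = np.abs(depth[..., None] - depth_samples).argmin(axis=-1)
    data_term = np.take_along_axis(cost_volume, idx[..., None], axis=-1).sum()

    # Regularization term: penalize depth changing sharply between neighbours,
    # which is what makes the solution "globally spatially regularized".
    grad_y = np.abs(np.diff(depth, axis=0)).sum()
    grad_x = np.abs(np.diff(depth, axis=1)).sum()

    return lam * data_term + grad_x + grad_y

# Tiny fabricated example: a 4x4 image with 8 depth hypotheses per pixel.
rng = np.random.default_rng(0)
H, W, D = 4, 4, 8
cost_volume = rng.random((H, W, D))           # a real system fills this by
depth_samples = np.linspace(0.5, 4.0, D)      # reprojecting many video frames
candidate = np.full((H, W), 2.0)

print(dense_energy(candidate, cost_volume, depth_samples))
```

The real system searches over depth maps to drive an energy like this down for every pixel at once, which is why it reconstructs dense surfaces rather than a sparse scattering of feature points.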

 

 

Light Detection and Ranging is a process that obtains information about objects or areas from a distance; in other words, through remote sensing. While remote sensing is traditionally known as the collection of data (either passive or active) by satellites or aircraft detecting energy reflected from the earth, LiDAR is a remote-sensing method that uses pulsed laser light to measure variable distances. The light pulses are combined with other data and computed to form a three-dimensional model of shapes and layout, including surface characteristics. It should be noted, however, that for the purposes of augmenting the reality of our local environment, surface details such as texture are not yet understood by current head-mounted displays. If you were to look at a visual of the computed environment, it would look as if someone threw a giant, heavy net over the room; the way that net settles under gravity onto everything beneath it is the way the computer sees the surroundings.
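As a back-of-the-envelope illustration (with invented numbers), a single LiDAR return becomes one knot in that net roughly like this: the round-trip time of the pulse gives distance, and the direction the pulse was fired in turns that distance into a point in space.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_return_to_point(round_trip_s, azimuth_deg, elevation_deg):
    distance = C * round_trip_s / 2.0          # the pulse travels out and back
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return x, y, z

# A pulse that came back after ~33 nanoseconds hit something about 5 m away.
print(lidar_return_to_point(33e-9, azimuth_deg=30.0, elevation_deg=5.0))
```

Millions of these points, swept across the room, are what settle into that net-like model of the surroundings.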

Further, fusing data from the cameras and inertial sensors of head-mounted displays (or eyewear) estimates the structure of the surroundings. Studies show that two camera sensors are better than one, but improving the accuracy of the inertial sensors has proved to partially compensate for the loss of a camera. This makes mapping of un-ventured environments manageable, since no prior information about the surroundings' structure is required.
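Here is a deliberately simplified sketch of why that fusion helps, reduced to one axis with made-up numbers and a basic complementary-filter blend rather than anything a real headset ships: the inertial sensor updates fast but drifts, the camera updates slowly but stays anchored to the world, and blending the two keeps the position estimate honest.

```python
def fuse(imu_position, camera_position, camera_trust=0.3):
    """Blend the drifting IMU estimate toward the camera's fix."""
    return (1 - camera_trust) * imu_position + camera_trust * camera_position

position = 0.0          # estimated head position along one axis, metres
velocity = 0.2          # metres per second, from integrating the accelerometer
dt = 0.01               # the IMU runs at 100 Hz

for step in range(1, 101):
    position += velocity * dt + 0.001          # integrate the IMU (with a little drift)
    if step % 10 == 0:                         # a camera fix arrives at 10 Hz
        camera_position = 0.2 * step * dt      # what the camera reports (ground truth here)
        position = fuse(position, camera_position)

# Without the camera corrections the estimate would have drifted to 0.300 m.
print(f"estimated position after 1 s: {position:.3f} m (true: 0.200 m)")
```

Real systems use far more sophisticated filters, but the trade is the same: each sensor covers for the other's weakness.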

To summarize and simplify, inside-out tracking is very much like the process of photogrammetry, where an object is viewed from multiple angles and software can then determine its shape. You can try this at home with mobile applications like 123D Catch or Scann3D. AR mapping, though, is done in real time, simultaneously tracking the device's position and the surrounding environment. The mathematics is fairly complicated, and considering our demand for instant gratification these days, the mapping process's speed might fall below our expectations. If you know which room you will be augmenting, you can do a walkthrough to collect data on the terrain, which will quicken your device's processing and make the fusion of digital objects into your physical world more seamless (though far from perfect).
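The geometric core of that photogrammetry comparison is triangulation: two views of the same feature pin down where it sits in space. Here is a minimal sketch, with invented camera positions and a single hypothetical feature point, of intersecting two viewing rays.

```python
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Least-squares intersection of two rays of the form origin + t * direction."""
    dir_a = dir_a / np.linalg.norm(dir_a)
    dir_b = dir_b / np.linalg.norm(dir_b)
    # Solve for the travel distances t_a, t_b that bring the two rays closest together.
    A = np.column_stack([dir_a, -dir_b])
    t = np.linalg.lstsq(A, origin_b - origin_a, rcond=None)[0]
    p_a = origin_a + t[0] * dir_a
    p_b = origin_b + t[1] * dir_b
    return (p_a + p_b) / 2          # midpoint of the closest approach

cam_a = np.array([0.0, 0.0, 0.0])
cam_b = np.array([1.0, 0.0, 0.0])           # second view, one metre to the right
feature = np.array([0.4, 0.3, 2.0])         # the corner of a table, say

point = triangulate(cam_a, feature - cam_a, cam_b, feature - cam_b)
print(point)                                 # -> close to [0.4, 0.3, 2.0]
```

Do this for many features across many frames, while also solving for where the cameras themselves were, and you are back at the SLAM problem described above.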

 

Scann3D

 

Additionally, keep an eye out for Google's Tango. This is an AR project that could flip the phone game upside down. Normally when we think of Google mobile, Android comes to mind, but the info-tech titan is of course far more involved, starting with a continued go at the hardware game and the release of the Pixel, which is optimized for Daydream VR. The first phone to launch Tango-equipped, however, is the Lenovo Phab 2 Pro, quickly followed by Asus with the ZenFone 3. Nonetheless, Project Tango will get its own post next week as we further explore AR's mapping abilities and its impact on, well, everything.