At tonight’s TriUXPA meeting, Mona Singh did a great job of giving us some perspective on the emerging technologies collectively called “augmented reality”. She showed us some apps on her smartphone, giving us small examples of the information already out there that augments the view through a phone’s camera. The picture below shows her pointing her phone at a container of ketchup in a demonstration of how companies are using “AR” to give consumers more information about their products. The other part of the picture shows the group gathered for her presentation. Thanks to TEKSystems for hosting the event.
But really what we’re talking about here is an augmented display, where additional information is shown on the screen of a device (or the windshield of a car, or glasses worn by the user). So we’re not augmenting reality, but augmenting the view of reality. For me, an example of something that could more accurately be called augmented reality is my cup that changes color when you put a cold drink in it – the cup itself changes color, not just my view of it through a device or a special pair of glasses. Or the road signs along the highway that tell you what’s at the next exit – the road is real, and the signs are real and augment the real road. But it’s probably too late to change the use of that phrase. What I would call it is “augmented display”, but that sounds too wimpy. So how about “augmented recognition”, so we can still use the acronym “AR”? What Mona taught us is that half the challenge of this technology is recognizing the buildings on the street, or where the road is, or whose face is in view. The technology then augments what it recognizes with more digital information about it.
That being said, she shared a number of apps that are readily available today, and mentioned several others.
Mona clarified for us that Google Glass, at least at this point in its development, is really not augmented reality, just a more intimate display, placing information in a small rectangle in the upper right corner of your field of view. It can be voice-activated to take a picture or video, or gesture-activated (tapping a finger) to take some action, but it does not display information about the items in your view per se.
Another challenge, besides recognizing the things in your view and then displaying information on or around them, is converging that information with the items themselves. While head-up display (HUD) technology has been around for decades, the trend seems to be toward making it accessible on a range of devices in a range of different contexts.
I can think of a few further challenges for this technology. One is what I’d call “shared augmentation”: while I’m driving, I’d like the augmented information to appear not on my view of the windshield but on my passenger’s view, since she has volunteered to be my navigator. While I pilot the craft in reality, she can help interpret the mass of data about which exit to take based on which stores, mountain vistas, or fueling stations are there. Certainly another challenge will be filtering the information to show only what I’m most interested in. Just as Netflix is beginning to learn what movies I like and Pandora knows what music I like, my AR device should learn what information I most want to see in a given context. Perhaps there is even a high-level (or meta) challenge here: augmenting the augmentation. If information is augmenting my view and I want to know where it is coming from (a commercial, governmental, or non-profit source, for example), can I flick a switch and see the information about the information?
There is more to write, but I’ll post this for now and add to it later. It was a great presentation, and a great group enjoyed the discussion. One of the members recommended watching “Sight” on YouTube for a taste of what lies ahead.