Let’s talk about augmented reality and why it’s such a big deal … and so completely awesome.
Did you know that AR contact lenses are in the pipeline? I mean, for real? The developer Innovega received funding from the Defense Advanced Research Projects Agency, and prototypes have already been demonstrated to the U.S. military. The company claims on its website that it is now developing a version tailored for consumers. Here is an article about it from earlier this year if you don’t believe me.
The times are definitely changing, though I think I’ll focus on the more accessible aspects of AR today. Mind you, I will be sticking HUD lenses into my eyeballs at the first sensible opportunity. I promise to let you know how that goes. I expect it to be something like this:
Dramatization: Mind not actually being blown by a fully immersive augmented environment.
Before we set off, I’d like you to take a minute to check out your smartphone. If you’re not already using it to read this blog post, go ahead and haul it out for a look-see.
What if I told you that the device in your hand might well serve as your personal portal to an entirely new future of technological connectivity? Think about how home computers changed things in the 1980s and how the advent of the World Wide Web changed everything even more in the 1990s.
If that sounds like a silly thing to say, let’s take a moment to appreciate how much that plastic rectangle in your hand is already capable of doing. Does it give you access to cloud-based information storage systems, such as Google Drive or Dropbox? Can you record video, maybe even 4K video? Take high-resolution photos? Can your phone identify you using biometric recognition patterns? Maybe your bank account is even fingerprint-accessible through an app?
Also, are you laughing at me right now, because duh, of course it can do all those things? It’s not called a smartphone for nothing. Chuckle on, friends, because if you had said 10 years ago that your phone could do all that, people would have given you the blankest of stares.
Image source: http://money.cnn.com.
Horror of horrors, phones in 2007 were being released with “Social Networking Software” and “GPS.”
If someone went back in time to warn them about Tinder, heads would surely explode.
But it is easy to forget that the term “cloud computing” was barely on the radar of our everyday e-jargon a decade ago. Nowadays, my iPhone SE can offload photogrammetry to a cloud server and stitch together a three-dimensional model of your head, and our beloved talkboxes now ship with onboard AI systems to, you know, calculate the lighting coefficient for our selfies.
“I’m sorry Dave, I’m afraid I can’t share that on Facebook.”
So yes, our extremely powerful cellphones are becoming more powerful every other week. I bring this up because this fact is hugely important when examining the future of AR. Powerful devices are absolutely required to harness the full benefit of AR technology, yet most people still have very limited interest in using special hardware, and that hardware is also a little too clunky to carry around with you. Here is where the importance of the supersmart smartphone becomes evident.
According to this demographic survey, 92 percent of Americans between the ages of 18 and 29 have a smartphone. That’s basically an entire generation. And since smartphones are rolling out with machine learning and AI-powered multicore processing chips, depth-sensing cameras and other AR-friendly features, the audience for mobile AR content is vast and growing rapidly.
Why then hasn’t AR (or VR, for that matter) seen mass-scale mobile adoption yet? If you want to know what I think (and I’m guessing you probably do, since you’ve already read this far), it’s the lack of genuine, functional connectivity, though I also think it’s only a matter of time before that wall comes down.
What I mean is that even though we’ve seen some breakthrough moments (I’m looking at you, Pokémon Go), most easily accessible AR applications floating around right now are, at best, helpful. At worst, they are buggy and gimmicky. Most importantly, they are almost all single-user experiences.
I point my phone at a thing, and it tells me what it’s capable of telling me. Then some cool special effects and 3-D objects might pop up. Some critters are running around on my screen. A notice is telling me to turn left at the next intersection.
It’s fun, it’s even informative, but it’s just me. I’m all alone.
While it’s true that I could also bring others into my AR experience to a certain extent by pushing some of the omnipresent “share” buttons and forwarding a snapshot off into social media space, I only get to show off. I’m taking a selfie of my experience rather than bringing you into it.
Enter the AR cloud, where everything you know about mobile interaction will start changing.
What the heck is an AR cloud, though? I am superglad you asked.
To provide a loose explanation, I would say that an AR cloud is a way of organizing spatial data about the world so that it is accessible to multiple users, in different locations, on different devices. That sounds a little dry, though, and it doesn’t really capture how intriguing the idea is, so let’s expand the concept a little.
Right now, mobile AR largely works in a closed, device-dependent way. Your phone is “localizing” its position in one way or another, using data drawn from “local” sources such as its camera, gyroscope, GPS signals and other more esoteric methods. It uses this information to create a “map” or “point cloud” of the surrounding area and then uses this map to project three-dimensional images and/or informational overlays into your environment, which you can see and potentially interact with.
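If it helps to see that flow spelled out, here’s a rough sketch in TypeScript of what a device-local AR session is conceptually doing. To be clear, none of these names come from a real SDK (ARKit and ARCore have their own APIs); `SensorFrame`, `LocalPointCloud` and `DeviceAnchor` are hypothetical stand-ins I’m using purely for illustration.

```typescript
// A conceptual sketch only -- these types are hypothetical, not from ARKit, ARCore or any real SDK.

interface SensorFrame {
  cameraFeatures: [number, number, number][]; // 3-D feature points picked out of the camera feed
  gyro: { pitch: number; roll: number; yaw: number };
  gps?: { lat: number; lon: number };
}

interface DeviceAnchor {
  id: string;
  position: [number, number, number]; // meters, in *this* device's local frame
}

class LocalPointCloud {
  private points: [number, number, number][] = [];
  private anchors: DeviceAnchor[] = [];

  // "Localizing": fold each new sensor frame into the device-local map.
  integrate(frame: SensorFrame): void {
    this.points.push(...frame.cameraFeatures);
  }

  // Place a virtual object relative to a map that only this phone knows about.
  placeAnchor(id: string, position: [number, number, number]): DeviceAnchor {
    const anchor: DeviceAnchor = { id, position };
    this.anchors.push(anchor);
    return anchor;
  }

  render(): void {
    for (const a of this.anchors) {
      console.log(`Drawing "${a.id}" at local coordinates ${a.position.join(", ")}`);
    }
  }
}

// My phone builds its own private map and drops a cube into it.
const myMap = new LocalPointCloud();
myMap.integrate({
  cameraFeatures: [[0.1, 0.2, 1.5]],
  gyro: { pitch: 0, roll: 0, yaw: 90 },
});
myMap.placeAnchor("my-3d-cube", [0, 0, 1]);
myMap.render();
// Your phone, running the same code, would build a *different* map --
// nothing ties my coordinates to yours.
```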
The problem is that other devices can’t share what you see. This means that although it’s real enough to you, the user, it’s not “real” to anyone else. The maps are local to your device and the software on that device. There is no straightforward “universal localizer” that can align your “map” with someone else’s, and certainly not across platforms. My 3-D cube exists only in my augmented reality, on my personal iPhone, not in yours.
So what we end up with is kind of like if Google searches could only be performed using an Internet search headset, and if you wanted to share a link with someone, you could only take a picture of what you saw on the site, but the receiver wouldn’t be able to go to the site and look around for themselves. This would be a very awkward way to share things on the Internet, and it feels rather awkward in AR for the same reasons.
An AR cloud would let us augment not only our own reality but the reality of others. With an AR cloud, you could use your desktop Windows computer to place a three-dimensional model of an IKEA desk in your grandparents’ living room, then virtually assemble it while they follow along on their iPhone. My 3-D cube exists in your space and my space at the same time; we can interact with it together.
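To contrast that with the device-local sketch from earlier, here’s the same idea reworked around a shared anchor store. Again, `ARCloudService` and `SharedAnchor` are hypothetical names I made up for illustration, and a real AR cloud would need far more precise positioning than raw GPS coordinates, but the core idea is the same: the anchor lives in a frame of reference that every device can resolve.

```typescript
// Hypothetical sketch of a shared "AR cloud" anchor store -- not a real service or API.

interface SharedAnchor {
  id: string;
  // Expressed in a *common* frame of reference (here, naively, latitude/longitude/altitude),
  // so any device that can localize against that frame sees the same object.
  lat: number;
  lon: number;
  altitudeMeters: number;
  model: string; // e.g. a URL to the 3-D asset to render
}

class ARCloudService {
  private anchors = new Map<string, SharedAnchor>();

  publish(anchor: SharedAnchor): void {
    this.anchors.set(anchor.id, anchor);
  }

  resolve(id: string): SharedAnchor | undefined {
    return this.anchors.get(id);
  }
}

// I place the IKEA desk from my Windows desktop...
const cloud = new ARCloudService();
cloud.publish({
  id: "grandparents-ikea-desk",
  lat: 45.5231, // placeholder coordinates
  lon: -122.6765,
  altitudeMeters: 30,
  model: "https://example.com/models/desk.glb", // placeholder asset URL
});

// ...and my grandparents' iPhone resolves the very same anchor.
const desk = cloud.resolve("grandparents-ikea-desk");
if (desk) {
  console.log(`Rendering ${desk.model} at ${desk.lat}, ${desk.lon} for every device that looks`);
}
```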
I hate to use Pokémon Go as an example, but imagine if there were only one Pikachu in Central Park on Tuesday, and anyone on any device with the Pokémon app could be chasing it around. Admittedly, this would probably lead to an escalating amount of Pokémon-related violence, so it’s probably not the best example, but in terms of immersion, it’s not hard to imagine what multiuser AR might be capable of.
We aren’t all the way there yet, but as extremely powerful mobile devices and a tangible desire for deeper global connectivity keep pushing the envelope, we are barreling toward this technological apex. It’s already happening as I type this. There are startups like Bent Image Lab in Portland and Escher Reality in Boston that are actively developing these types of AR cloud systems.
I’m not even going to get started on what Apple and Google and Microsoft are up to, but it’s safe to say that there’s a lot of funding and interest.
The takeaway here is that AR platforms haven’t reached widespread adoption yet, so if you’re not incorporating the AR phenomenon into your business strategy, don’t stress; you still have time. But it’s not a bad idea to start educating yourself now. Otherwise, you might wake up and discover that every cubic inch of virtual space is already occupied.