AR > VR

Ed Mengel
Jun 10, 2016

There’s a next wave of computer interfaces coming, and many are wondering which will win the reality wars: augmented reality (AR) or virtual reality (VR). I’ll admit I’ve been in the VR camp most of my life, from waiting with bated breath for the Sega VR headset designed by IDEO that never came, to trying to find the nearest arcade with Dactyl Nightmare. I even married the only girl in the world who had a Virtual Boy growing up. I’ll also admit that Google Cardboard is game changing: a $20 accessory that turns a device everyone already owns into a platform for immersive experiences is revolutionary. It’s taking “geek chic” mainstream. But AR is going to win the war for ubiquity, and I’ll explain why.

Note: I’m using the term AR here to include Mixed Reality (MR). Some refer to MR as beyond AR, but I see it as simply living up to the full vision of AR.

First, some definitions. VR is the experience of wrapping the user in a virtual world, essentially trying to convince them that they are actually present in a fabricated illusion. AR takes reality and “augments” it with additional information and virtual objects. These objects could be screens, text and numbers, glowing balls of light, or even characters.

So why will AR win? A few reasons…

1. It’s less work for the developer when they don’t have to convince the user

Creating a virtual world is tough. Making it believable is tougher. Not having to create a horizon, walls, floors, ceilings, or a sky saves the developer work, but more importantly, you don’t have to convince the user that this reality is real. If the user feels the virtual world isn’t behaving the way it should, they will reject the reality (source: The Matrix). Not having to convince them leaves more time for detail on our virtual objects (like the glowing balls of light above).

2. The interface is more natural

Just as above, the VR developer needs to render a virtual representation of the user’s hands, tools, and anything else they might use in this reality, and it has to look convincing. That’s one more thing to worry about, and even a slight delay can have disastrous results for usability. More importantly, we already know how to interact with the real world. We’ve been doing it most of our lives, and it comes naturally. It’s a delightful skeuomorphism, in that it isn’t one at all. I love watching startups like Osmo build user interfaces out of the real world. It’s so easy a child can use it! Combine that with 3D printing and imagine what playing with blocks or Legos will look like in 10 years: one child builds the castle, and another draws the dragon that spells its demise.

3. It’s integrated

By integrating with reality, you can find more use cases, because you can attach to things people do every day instead of having to create new worlds and use cases to do things in. Linking to reality can also have a multiplier effect on the experience. Take the example of Night Terrors, an AR game you play in your house, where your phone is the only way to see the monsters chasing you. I can’t think of a better way to get nightmares than an app spending a few hours convincing me ghosts are attacking me in my own home. The relatability adds something. If I watch Friday the 13th, it’s OK, because I’m never staying at Camp Crystal Lake. If I play Doom, I’m fine, because I’m never going to Mars (probably).

4. It’s NOT immersive

Sure, some people, particularly hardcore gamers, love an immersive experience, but most don’t. That’s one of the reasons mobile gaming has surpassed every prior platform. I don’t have to commit to buying a gaming machine. I don’t have to commit to sitting on my couch. I don’t have to commit to a long gaming session. I don’t even have to commit to waiting for my PC or PS4 to boot! I can be sitting in the doctor’s office or in line at the DMV. The casualness makes it OK for non-gamers to participate, and that’s most of the people in the world.

5. Locality is implied, but can be circumvented

How often do you wish your devices were smarter about where you are? Sure, great progress has been made: “Hey Siri, remind me to call my wife when I leave work,” app suggestions for apps common at this location or at this time of day, or your Mac remembering the monitor configuration from the last time you connected those specific external monitors.

If we take it further, I can have one set of screens and open apps at work and another at home. I can leave virtual objects and tools at work, and they will be right where I left them when I get back. We often perform the same task at the same location, so having a return to that location remind you where you left off can be very helpful. Even better, developers can build UIs that let you break the rules of locality: a way to switch workspaces while sitting at your desk, “Hey Cortana, open up the app I was using at work here in my living room,” “Alexa, play that song I was listening to at work,” or, my personal favorite, hold up your hand palm out and the virtual remote shakes and flies into it.
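To make that idea concrete, here’s a minimal sketch of location-keyed workspace state with an escape hatch for breaking locality. This is purely illustrative Python with invented names (`WorkspaceStore`, `restore_for`, `restore_named`); it isn’t any real AR platform’s API.

```python
import math

class WorkspaceStore:
    """Hypothetical store mapping named locations to saved workspace state."""

    def __init__(self, radius_m=50.0):
        self.radius_m = radius_m  # how close counts as "the same place"
        self.saved = {}           # name -> (lat, lon, workspace)

    def save(self, name, lat, lon, workspace):
        self.saved[name] = (lat, lon, workspace)

    def restore_for(self, lat, lon):
        """Implied locality: restore whatever was left at the current position."""
        for name, (slat, slon, ws) in self.saved.items():
            if self._distance_m(lat, lon, slat, slon) <= self.radius_m:
                return ws
        return None

    def restore_named(self, name):
        """Circumvented locality: pull up a named workspace from anywhere."""
        entry = self.saved.get(name)
        return entry[2] if entry else None

    @staticmethod
    def _distance_m(lat1, lon1, lat2, lon2):
        # Equirectangular approximation; fine at room-to-city scales.
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return math.hypot(x, y) * 6_371_000
```

Walking into the office restores the office workspace automatically; “open the app I was using at work” is just `restore_named("work")` from anywhere.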

6. It’s social

Most experiences are social in nature and may involve people collocated with you as well as those who are not. It’s hard to have a shared experience if you can’t see the other participants. Sure, we could create virtual avatars in the new world, but that adds overhead and lessens the experience (see point 1). Natural, collaborative manipulation of both physical and virtual objects opens up large caches of use cases. As the expertise you need to get something done grows deeper and more diverse, the best tools will be the most social ones. Even better, you can still watch a movie on the big screen with someone, and it’s a shared experience instead of each viewer being in their own world.

Yeah, so VR’s gonna flop, huh?

Probably not; there’s a strong enough market for immersive games and movies that VR will carve out a niche. Personally, I think Sega’s Night Trap would have been way better in VR! How about another remake?

So if AR becomes ubiquitous, which platform will reign supreme?

It’s too early to call, but I’m keeping my eye on Magic Leap, Microsoft HoloLens, and Meta. I think the really interesting war is the one over the user interface: will it be Myo, Leap Motion, Google’s new Wiimote, natural language processing (NLP), or maybe we’ll just jump ahead to using our minds? Honestly, I think the winner will be the one that properly combines multiple methods of input. If I point and ask my wife, “Who’s that standing over there?”, she can detect the direction my finger is pointing, understand my question, and even do facial recognition on the area I’m pointing to. Until UIs get that natural, there will be a lot of room for improvement.
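The pointing example can be sketched as a toy multimodal resolver: one channel parses the spoken question, the other resolves the gesture by finding the known target closest to the pointing ray. Everything here is a simplified 2D assumption with made-up names (`resolve_pointing`, `answer`), not any real gesture or speech API.

```python
import math

def resolve_pointing(origin, direction, targets, max_angle_deg=15.0):
    """Pick the named target whose bearing best matches the pointing ray.

    origin, direction: 2D (x, y) tuples; targets: name -> (x, y).
    Returns None if nothing lies within max_angle_deg of the ray.
    """
    def angle_to(pos):
        vx, vy = pos[0] - origin[0], pos[1] - origin[1]
        dot = vx * direction[0] + vy * direction[1]
        mag = math.hypot(vx, vy) * math.hypot(*direction)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

    best = min(targets, key=lambda name: angle_to(targets[name]))
    return best if angle_to(targets[best]) <= max_angle_deg else None

def answer(question, origin, direction, targets):
    """Fuse speech + gesture: a 'who' question is resolved by the finger."""
    if "who" in question.lower():
        return resolve_pointing(origin, direction, targets)
    return None  # other intents would route to other handlers
```

Neither channel alone can answer “Who’s that standing over there?”: the speech gives the intent, the gesture gives the referent, and only their combination produces a name.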

Disclaimer: The ideas posted in this article are my own and not representative of my employer.
