The view from the top of the Empire State Building isn’t the same as looking at a miniature Empire State Building sitting on top of your kitchen table. When it comes to virtual and augmented realities, the difference is in the experience.
It’s not surprising that virtual and augmented realities are so often perceived as synonymous. Both rely on physical movement and promise an enhanced experience. But the two differ in user experience, technical challenges, and opportunities, and those differences are worth a closer look.
What are virtual reality (VR) and augmented reality (AR)?
VR is a completely immersive experience where users wear headsets through which they see only virtual environments. In other words, the actual physical environments around users disappear. Virtual environments can be created through 360-degree video, where viewers turn their heads to see from any angle, or via computer-generated (CG) content.
AR is an enhanced experience where users layer CG content over their current physical environment. This content can take a variety of forms: images, animations, video, sound, GPS information, and more.
VR: Full effect with full immersion
Without question, VR offers the most dramatic experience. Virtual environments have already emerged in a variety of use cases, within multiple industries.
Training. Virtual environments allow for a more complete, realistic training experience. They offer risk-free simulation and training situations, allowing workers to learn and practice appropriate responses and procedures without endangering themselves or others (e.g. firefighters, surgeons, etc.).
Retail. To help invigorate retail experiences, retailers use VR to transform and personalize customer engagement. Apparel brand Outdoor Voices, for instance, lets customers try on its gear while virtually surrounded by wilderness.
Education. In a nationwide survey of teachers, 93% felt students would be motivated by VR experiences, and another 83% perceived VR in the classroom as a useful teaching aid. Potential applications include virtual field trips, 3D modeling, and exploring subjects such as human anatomy in 3D.
For VR, the utility is undeniable, the potential is palpable, and the clouds around the future are clearing.
The VR lens: Cardboard, Daydream, and beyond
Mobile app development in VR started with Google Cardboard, which is available for both iOS and Android. Users slip a smartphone into the simple Cardboard headset and, using an app, view 3D imagery through its lenses. Although Cardboard lacks the sophistication of more advanced VR headsets, at just $7 it has given millions of users a low-cost entry point to VR.
The next generation of mobile VR was Google’s Daydream, which adds a handheld controller to accompany the headset. Daydream offers a richer, more active VR experience, but the downside is its lack of iOS support. Still relatively affordable at around $50, Daydream has attracted hundreds of thousands of users.
Other headsets on the market include the Oculus Go, Oculus Rift, HTC Vive, and Pimax, along with Microsoft’s HoloLens on the mixed-reality side. These systems start around $200 and run upwards of $3,000.
So far, VR has gravitated toward low-fidelity environments, due to the limited graphics processing power of mobile devices. This will not always be the case. As software techniques like Google’s Seurat combine with evolving hardware to produce higher-quality imagery, VR equipment, and the VR landscape in general, will shift.
The future for VR is untethered
Gaming companies like Remotr and Paperspace let players fire up a game on a powerful gaming PC in the cloud and stream it to their homes. Why can’t VR work the same way? With a demanding rendering requirement of 60 to 90 frames per second, untethering VR hasn’t been an easy problem to solve.
CG environments are typically rendered in real time, which has meant tethering the VR headset to a powerful computer to handle the interactivity. Fortunately, advances in streaming hold the promise of freedom: as streaming viability increases, visuals can be rendered on a remote computer and streamed to a mobile device.
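To see why streamed VR is hard, it helps to put numbers on the 60-to-90-frames-per-second requirement above. The sketch below is a back-of-the-envelope budget; the stereo resolution and codec efficiency figures are illustrative assumptions, not measurements of any particular headset or encoder.

```python
def frame_budget_ms(fps: int) -> float:
    """Total time available per frame, in milliseconds.

    The entire render-encode-transmit-decode pipeline must fit
    inside this budget for streamed VR to feel responsive.
    """
    return 1000.0 / fps


def min_stream_bitrate_mbps(width: int, height: int, fps: int,
                            bits_per_pixel: float = 0.1) -> float:
    """Rough compressed-stream bitrate estimate.

    bits_per_pixel is an assumed codec efficiency, not a spec value.
    """
    return width * height * fps * bits_per_pixel / 1e6


# At 90 fps, roughly 11 ms per frame; at 60 fps, roughly 17 ms:
print(f"{frame_budget_ms(90):.1f} ms per frame at 90 fps")
print(f"{frame_budget_ms(60):.1f} ms per frame at 60 fps")

# An assumed 2160x1200 stereo frame streamed at 90 fps:
print(f"{min_stream_bitrate_mbps(2160, 1200, 90):.0f} Mbps estimated")
```

Even under these generous assumptions, the network round trip has to fit inside a budget of a few milliseconds per frame, which is why untethered, cloud-rendered VR has lagged behind cloud gaming.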
Display progress means expanding views and killing motion sickness
Phones have already achieved what Apple refers to as “retina” resolution, where the user can’t distinguish individual pixels on the display. However, VR splits the phone screen in half (one half per eye) and uses lenses to magnify the image. This results in pixelated image quality in phone-based VR.
Higher-density screens will improve the visual quality of mobile VR, though phones will strain to drive all of those additional, individually indistinguishable pixels. Expanding the field of view toward 200 degrees will also help reduce motion sickness and the sensation of viewing the world through scuba goggles.
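The pixelation problem described above comes down to angular pixel density: splitting the screen per eye and stretching each half across a wide field of view. The sketch below makes that concrete; the screen width and field-of-view figures are illustrative assumptions.

```python
def pixels_per_degree(h_pixels: int, fov_degrees: float) -> float:
    """Horizontal pixel density as seen by one eye."""
    return h_pixels / fov_degrees


# An assumed 2560-pixel-wide phone gives each eye 1280 pixels.
# Spread over an assumed ~100-degree headset field of view:
headset_ppd = pixels_per_degree(2560 // 2, 100)

# The same phone held at arm's length covers maybe 20 degrees of view:
handheld_ppd = pixels_per_degree(2560, 20)

print(f"In headset: {headset_ppd:.1f} px/deg")
print(f"Hand-held:  {handheld_ppd:.1f} px/deg")
```

Under these assumptions the headset delivers roughly a tenth of the hand-held angular density, which is why a "retina" phone can still look coarse through lenses, and why wider fields of view demand even more pixels.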
Moving toward mixed reality through environmental awareness
The precise location of a chest of drawers is difficult to remember when at the foot of a mountain. VR has faced the persistent challenge of the viewer not being able to see the physical boundaries and obstacles in their actual environment.
New sensors allow VR experiences to integrate environmental awareness, rendering real walls and furniture into the virtual environment. These sensors can also recognize props, like a flashlight, in the virtual world, letting the user cast light or bash zombies as circumstances demand. Prop-enhanced VR gaming currently requires very specialized settings and hardware, but it is poised to migrate to mobile in a more flexible, adaptive form.
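One simple form of the environmental awareness described above is a proximity warning: sensed obstacles are stored as bounding boxes in room coordinates, and the app alerts the player before a collision. The data structures and warning radius below are illustrative assumptions, not any headset's actual API.

```python
from dataclasses import dataclass


@dataclass
class Box:
    """A sensed obstacle, as a bounding box on the floor plane (metres)."""
    x_min: float
    x_max: float
    z_min: float
    z_max: float

    def distance_to(self, x: float, z: float) -> float:
        """Distance from a point to this box on the floor plane."""
        dx = max(self.x_min - x, 0.0, x - self.x_max)
        dz = max(self.z_min - z, 0.0, z - self.z_max)
        return (dx * dx + dz * dz) ** 0.5


def obstacles_nearby(boxes: list, x: float, z: float,
                     warn_radius: float = 0.5) -> list:
    """Return the obstacles within warn_radius of the player."""
    return [b for b in boxes if b.distance_to(x, z) <= warn_radius]


room = [Box(2.0, 3.0, 0.0, 1.0)]         # a sensed chest of drawers
print(obstacles_nearby(room, 1.8, 0.5))  # 0.2 m away: warn
print(obstacles_nearby(room, 0.0, 0.0))  # 2.0 m away: no warning
```

A real system would render the offending furniture into the virtual scene rather than just flag it, but the underlying bookkeeping is the same.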
AR: The layer cake of visual technologies
Companies of all shapes and sizes are embracing AR for a wide range of business-driving functions. AR does not require specialized hardware, making it a lighter, less cost-prohibitive tactic for brands to test.
AR has demonstrated effectiveness in simplifying and enhancing:
- Supply chain management processes
- How-to assembly tasks
- Store aisle marketing simulations
- Online sales tactics
With its ability to layer images on and pull data from actual environments, AR is able to, in many ways, give us the best of both worlds.
Opening doors for dynamic development
Both Apple and Google are eager to give app developers a boost in building augmented reality experiences compatible with all kinds of mobile devices. Apple’s ARKit and Google’s ARCore give developers powerful new tools to create AR experiences. These developer frameworks have caused a surge of interest in AR, with a range of apps from games to architectural measuring tools to “see what this car would look like in your garage” apps to educational experiences.
Marker recognition promotes customization
Among other improvements, Apple’s ARKit 1.5 adds “marker recognition” to AR apps: the ability to respond to predefined images, or “markers,” in the real-world environment. Marker recognition can identify products on the shelf, turning stores into personalized, interactive environments. Customers can point their phones at a food item to view its nutritional information.
Going one step further, customized visual filters can be applied. A user can ask, “I have a gluten sensitivity; which of these cereals can I eat?” and suitable items will be flagged. Similarly, when scanning the shelves at a bookstore, books with favorable Goodreads reviews could appear highlighted or flashing.
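The shelf-filter idea above splits into two parts: marker recognition identifies which product is in view (not shown here), and a simple dietary filter decides which products to highlight. The product catalog and tag vocabulary below are hypothetical examples.

```python
# Hypothetical product catalog, keyed by the marker each product's
# packaging would be registered under.
products = {
    "oat-crunch":  {"contains": {"gluten", "oats"}},
    "corn-flakes": {"contains": {"corn"}},
    "rice-puffs":  {"contains": {"rice"}},
}


def safe_for(catalog: dict, avoid: set) -> list:
    """Return product ids whose ingredients avoid the given allergens."""
    return sorted(pid for pid, info in catalog.items()
                  if not (info["contains"] & avoid))


# "I have a gluten sensitivity; which of these cereals can I eat?"
print(safe_for(products, {"gluten"}))  # ['corn-flakes', 'rice-puffs']
```

In a live AR app, the recognized marker would look up the product in a catalog like this one, and the matching items would be overlaid with a highlight in the camera view.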
Mapping a more in-depth environment
Historically, AR experiences superimposed CG content over real environments. Current hardware advances are reforming this model, opening the door to a host of next-level interactions.
Google’s Project Tango proved the value of mobile depth sensing, allowing phones to map environments in 3D and feed that information to AR apps. (Apple uses similar technology for Face ID on the iPhone X.) ARCore now incorporates those same mapping functions. Depth sensing will allow virtual objects to interact with the real world in richer, more meaningful ways: characters in an AR game could convincingly hide behind furniture, jump onto tabletops, and so on.
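At the pixel level, "hiding behind furniture" is a depth test: draw the virtual object only where it is closer to the camera than the sensed real surface. The toy sketch below shows the idea on a single row of pixels; the depth values are illustrative, where a real depth map would come from the device’s sensors.

```python
def composite(real_depth: float, virtual_depth: float,
              virtual_pixel: str, real_pixel: str) -> str:
    """Per-pixel depth test: show the virtual pixel only if the
    virtual object sits in front of the real-world surface."""
    return virtual_pixel if virtual_depth < real_depth else real_pixel


# A character standing 2 m away, partly behind a table edge at 1.5 m:
row_real_depth = [3.0, 3.0, 1.5, 1.5]  # metres to real surfaces
row = [composite(d, 2.0, "C", ".") for d in row_real_depth]
print("".join(row))  # prints "CC..": the table occludes the character
```

Real renderers do the same comparison per pixel on the GPU, but the logic is exactly this: without a depth map, the virtual character would always be pasted on top of the scene.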
VR or AR: our reality is changing
Imagine one app providing nutritional information on a bag of popcorn, while another alerts you to a friend entering the store, and a third adds a gold coin to your cart as part of a treasure hunt game. And then everything in the store resets before your eyes as you go into virtual mode. It’s coming.
Despite their technical and experiential differences, both VR and AR are positioned to become major players in the coming years. Companies like Magic Leap promise comfortably fitting glasses that link to our phones and embed content seamlessly into the environment around us. In short, hardware advances will make both AR and VR brighter, better, and more applicable.
From a business perspective, VR and AR will become valuable additions to brand toolkits worldwide, advancing business goals at scale while engaging customers in ways that were never before possible.