As we enter 2017, one thing seems clear in the UX world: Augmented Reality (AR) is one of the most exciting players in the game. A number of recent developments point to just how much AR is making its presence felt. From Snapchat’s highly popular face-swapping feature and Waverly Labs’ smart earpiece language translator to Pokémon Go and the emergence of brick-and-mortar VR playlabs like Jump Into The Light and Samsung 837, AR is already a major mind-altering, and market-altering, technology change agent.
So what is AR and what makes it so vital for the UX world to understand it? Well, first we need to be clear about what AR is and is not. For something to qualify as AR, it actually needs to meet three criteria:
- It must respond in context to new external information and compensate for changes in the user’s environment;
- It must interpret gestures and actions in real time, with little or no overt command input from users;
- It must integrate with the user in such a way that it doesn’t restrict their movements.
In other words, AR gives users an opportunity to engage with reality through an enhancing interface that responds dynamically to external inputs. This makes it distinct from Virtual Reality (VR), which is an isolating experience that simply displays an altered reality to users, one composed almost entirely of pre-fabricated elements. It is also different from holograms, which don’t respond to real-world inputs and remain static no matter what happens around them.
So why is it so important for those working in UX design to start understanding and implementing AR? If AR is going to truly matter for UX practitioners, it has to offer some unique and appealing benefits to both the developer and the consumer. Fortunately, there are plenty. To name just three:
Huge Interaction Cost Savings
Because AR interfaces don’t require commands from the user but instead accomplish tasks using contextual information collected by the computer, they decrease the interaction costs of performing a task. A great example of such an AR interface has in fact become quite common: the automobile parking-assistance system. Rather than having the driver issue commands as the car moves, the AR software responds to changing environmental factors (the car’s relationship to its surroundings) and feeds information to the driver, dramatically cutting the interaction cost of getting the car into a tight spot without hitting the light pole.
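The sense-and-respond cycle behind a parking-assistance system can be sketched in a few lines. Everything below is hypothetical — the sensor fields, thresholds, and guidance rule are illustrative, not drawn from any real vehicle system — but it shows the key point: the driver issues no commands, and the interface simply reacts to the changing context.

```python
# A minimal sketch of a contextual parking-assistance loop.
# All sensor values and thresholds are hypothetical.

def steering_guidance(distance_left_m, distance_right_m):
    """Suggest a steering correction from side clearance readings.
    Positive means 'steer right', negative means 'steer left'."""
    return distance_left_m - distance_right_m

def proximity_alert(distance_rear_m):
    """Map rear clearance to an escalating alert level."""
    if distance_rear_m < 0.3:
        return "STOP"
    if distance_rear_m < 0.8:
        return "warning"
    return "clear"

# Simulated frames of environmental input; in a real system these
# would stream continuously from the car's sensors.
frames = [
    {"left": 0.9, "right": 0.5, "rear": 1.5},
    {"left": 0.7, "right": 0.6, "rear": 0.6},
    {"left": 0.6, "right": 0.6, "rear": 0.25},
]

for frame in frames:
    correction = steering_guidance(frame["left"], frame["right"])
    alert = proximity_alert(frame["rear"])
    print(f"steer={correction:+.1f}  rear={alert}")
```

The interaction cost savings come from exactly this structure: the user's only job is to react to the output, not to operate the interface.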
Major Reductions in the User’s Mental Expenditures
Consider a mechanic whose job is to inspect complex pieces of machinery – an airplane or a sanitation system – to make sure all the parts are in order and none are past their service life. If the mechanic is holding a tablet, he or she has to not only learn the software but also remember the names of all the parts, where they are filed in the system, and how to accurately record and research them. But if the mechanic wears a HoloLens, the entire history of the machine – its service records, its part details, its overall workflow structure, and so on – appears right before his or her eyes, with no need to learn or remember how to access that information. The reduction in mental expenditure – not to mention the opportunities that come from having both hands free – allows the mechanic to notice and quickly respond to more details, improving the likelihood of catching a small but possibly critical defect.
Minimizing of Inefficient Attention Switches
Many tasks require a user to move from one source of information to another in order to complete them. For instance, a surgeon often has to reference a variety of body measurements to assess the safest route for the scalpel. AR can make this task more efficient by minimizing the number of times the surgeon needs to switch attention from one monitor or source to another. Within an AR interface, different sources of information can be combined, updated alongside external conditions, and presented as a single consolidated view compiled according to scripted task parameters. The savings in time and mental energy, along with the benefits of a more effectively organized information space, can be pivotal in assuring successful outcomes.
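The idea of a "data conglomerate compiled according to scripted task parameters" can be sketched concretely: several live readings are merged into one view, with anything outside task-defined bounds flagged for attention. The field names and ranges below are purely illustrative, not from any real medical device.

```python
# Sketch: merging several data sources into one consolidated view,
# so the user never has to switch attention between monitors.
# Field names and ranges are hypothetical.

def consolidate(sources, task_parameters):
    """Combine readings and flag any outside task-defined bounds."""
    view = {}
    for name, value in sources.items():
        low, high = task_parameters[name]
        view[name] = {"value": value, "in_range": low <= value <= high}
    return view

# Hypothetical readings from three separate monitors.
sources = {"heart_rate": 72, "blood_pressure_sys": 150, "spo2": 97}

# The "scripted task parameters": acceptable ranges for this procedure.
task_parameters = {
    "heart_rate": (50, 110),
    "blood_pressure_sys": (90, 140),
    "spo2": (92, 100),
}

view = consolidate(sources, task_parameters)
flagged = [name for name, v in view.items() if not v["in_range"]]
print(flagged)  # only the out-of-range reading demands attention
```

The attention-switching saving is the point: instead of scanning three displays, the user attends to one view in which only the out-of-range reading is surfaced.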
UX design is, at its heart, the process of creating perfect, performable usability, but of course it’s also about a lot more. It’s about maximizing the user’s efficiency and productivity. It’s about marrying the needs and economic demands of businesses with the whims and desires of their customers. And it can even be about giving people a magical experience that makes them feel happy, free, and immersed in something amazing. With that in mind, the importance of AR for UX can’t be overstated. The goal for UX designers is to know what AR capabilities are out there, what’s on the AR horizon, and how they can benefit UX.
We’d love to hear from you on the topic of AR and UX!
- What other benefits that AR brings to UX can you think of?
- What design and industry roadblocks do you see standing in the way of implementing AR into UX?
- What’s the most exciting next-gen AR app you see on the horizon in the context of web portal development?