2016 has been a remarkable year that’s brought continued growth and awareness to the worlds of Augmented, Virtual and Mixed Reality. Set to become a $165 billion industry by 2020, this fast-moving digital phenomenon still leaves many newcomers asking a common question: what’s the difference between these technologies, and how will they impact the digital world as I currently know it?
Virtual Reality (VR) is a digital environment that shuts out the real world. VR is able to transport the user. In other words, it can bring us someplace else. Through closed visors or goggles, VR blocks out the room and puts our presence elsewhere. VR allows users to do many things: watch a 3D experience, attend an exclusive concert or sporting event, ride a roller coaster, play games, create art projects and, in some cases, hang out with friends in virtual worlds. So what does this have to do with Mixed Reality? VR is a gateway for much of the more advanced technology that will be used in Mixed Reality.
Augmented Reality (AR) places digital content on top of the physical world you see around you. AR works by adding 2D or 3D layered content on top of real-world objects or locations, allowing users to unlock additional relevant information and turning the physical world around them into digital media.
Some things you can currently do within an AR experience:
- Play games
- Watch videos
- Connect to websites
- Listen to music
- Enter a sweepstakes
Currently, most AR is experienced on mobile devices and tablets and is integrated into mobile apps, but that is quickly changing. Mixed Reality, itself a form of Augmented Reality, has a much higher degree of complexity and is far more realistic.
A hybrid of AR and VR, Mixed Reality (MR) is far more advanced than Virtual Reality because it combines several types of technology, including sensors, advanced optics and next-gen computing power. All of this technology bundled into a single device gives users the ability to overlay augmented holographic digital content onto their real-time space, creating scenarios that are unbelievably realistic and mind-blowing.
Mixed Reality works by scanning your physical environment and creating a 3D map of your surroundings, so the device knows exactly where and how to place digital content into that space realistically, while allowing you to interact with it using gestures. Unlike Virtual Reality, where the user is immersed in a totally different world, Mixed Reality invites digital content into your real-time surroundings and lets you interact with it.
The use of transparent lenses, spatial sound and an understanding of your physical environment will allow holograms to look, sound and behave like real objects that are able to interact with the environment around them and also with each other.
The future scope of Mixed Reality:
Mixed Reality will not be just another advanced gaming console for playing the latest version of Halo or Madden NFL. Instead, it will add a whole new world of interactions, apps, games and experiences we have yet to imagine. The world around you will become an entirely new canvas on which to play, learn, communicate and interact.
The scope for this technology is endless: sports, music, television, art, fashion, business, education, medicine, interior design, retail, construction, real estate and virtually everything we do will be affected. During its first evolution, Mixed Reality will replace your mobile device, then slowly start to replace all remaining devices, including TVs, laptops and tablets. All of the content you consume on those devices will soon be readily available within your new world of Mixed Reality, through a single pair of lenses.
The key will be to develop smaller, affordable devices that can ensure massive amounts of data are transferred without slowing down or limiting the experience. For this, 5G networks will play a vital role, and they’re right around the corner. Additionally, off-loading compute functions from the device to the cloud will free up significant bulk in the lens or headset. Ultimately, we need to get to the point where these capabilities become seamless and affordable. We’re getting very close, and are likely to see big improvements in 2017.