The language wars of 2024: Apple vs reality


Tomorrow, February 2, Apple will begin delivering its long-awaited mixed reality (MR) headset, the Vision Pro, and I am confident it will be a remarkable product that wows consumers and invigorates the industry.

You might think my opening sentence would please Apple, but if I were a developer with a new app for the Vision Pro, three of those words would violate Apple’s recent language guidelines — mixed reality headset.

Instead, Apple calls the device a spatial computer, which is a thoughtful term justified by the product’s advanced features. That said, the developer requirements from Apple contain a language restriction that has been raising eyebrows: “Refer to your app as a spatial computing app. Don’t describe your app experience as augmented reality (AR), virtual reality (VR), extended reality (XR) or mixed reality (MR).”

I appreciate the need for disciplined branding, but I worry that Apple is going too far by actively working to suppress language that has a long and rich history. I say this as someone who began working in this field when the phrase virtual reality had just emerged and before augmented reality, mixed reality or extended reality had been coined.


This means I’ve lived through a handful of frustrating shifts in the language of immersive technology over the last 30-plus years. The biggest headscratcher was about a decade ago when extended reality was adopted as an overarching term in the field. To me, the word “extended” is a weak and vague modifier and I’d prefer spatial computing to replace its usage.

Still, I can’t help but worry that Apple is going too far when it pushes to suppress longer-standing language like AR, VR and MR. Maybe I’m just nostalgic, but when I first started working in the field, the phrase virtual reality was about as hip as it got in the world of technology. I was a young researcher conducting VR experiments at NASA and the photo below was a large poster in the lab where I was working. To me, it was a deeply inspiring image, capturing both the present and future of the field.

NASA photo of a “virtual reality” experience circa 1992

The fact is, the human experience depicted in the photo above has been called virtual reality for almost 40 years. If you are an app developer for the Vision Pro and you create a fully simulated immersive experience for the user, is it really such a problem to describe it as virtual reality? After all, the VR headset shown in the photo above is now in the Smithsonian. This is our history and culture, and it should not be branded away by any corporation.

Of course, the Apple Vision Pro is orders of magnitude more sophisticated than the NASA headset shown in the photo above, not just because it’s higher fidelity, but because it adds entirely new capabilities. The most significant capability is the power of the Vision Pro to seamlessly combine the real world with spatially projected virtual content to create a single unified experience — a single perceptual reality.

This is called augmented reality or mixed reality depending on the capabilities (which I will get to below) and both phrases have a long history in academia, government research labs and industry.

Spatial alignment of real and virtual realms

So, what’s the difference between AR and MR?  

This is probably the most misunderstood schism in the world of immersive technologies, so it’s worth taking a quick trip back in time to explain how today’s divide came to be.

For most of my career, only one phrase was needed, augmented reality, but its definition has been diluted over the years as marketeers pushed simpler and simpler systems to fall under the banner, confusing the public. I suspect the pendulum will swing back in the future, but for the next five to ten years, both phrases are helpful.

As background, I began working on merging real and virtual environments in 1991 before the field had language to describe such a combined experience. My focus back then was to explore the basic requirements needed to create a unified perceptual reality of the physical and virtual. I called this pursuit “design for perception” (admittedly not very catchy) and found that the real and virtual realms needed to be spatially aligned in full 3D with sufficient precision that the flaws are beyond the limits of human perception.

In addition, both realms needed to be simultaneously interactive — for example, the user needs to be able to reach out and engage naturally with both real and virtual at the same time, creating the illusion that the virtual content is an authentic part of the physical surroundings. And, finally, the real and virtual need to engage each other, because without that consistency, the illusion is lost. If you grab a virtual book and place it on a real table and it falls through, it’s not perceived as a unified reality and suspension of disbelief is gone.
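The book-on-a-table failure can be made concrete with a toy model. The sketch below is plain Python of my own devising — the surface representation and function names are illustrative assumptions, not any headset vendor’s API — showing the minimal logic needed so that a released virtual object comes to rest on a detected real surface instead of falling through it:

```python
# Toy model of the "real and virtual must engage each other" requirement:
# a virtual object released in mid-air should settle on the highest real
# surface beneath it (e.g. a table top reconstructed by the headset),
# falling to the floor only when nothing is under it.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class RealSurface:
    """A horizontal real-world surface detected by the headset."""
    height: float   # y-coordinate of the surface top, in meters
    x_min: float    # horizontal extent of the surface
    x_max: float

    def contains(self, x: float) -> bool:
        return self.x_min <= x <= self.x_max

def settle(y_start: float, x: float, surfaces: list[RealSurface],
           floor: float = 0.0) -> float:
    """Resting height of a virtual object released at (x, y_start)."""
    candidates = [s.height for s in surfaces
                  if s.contains(x) and s.height <= y_start]
    return max(candidates, default=floor)

# A virtual book released 1.2 m up over a 0.75 m-high table rests on the
# table; released past the table's edge, it falls all the way to the floor.
table = RealSurface(height=0.75, x_min=0.0, x_max=1.0)
print(settle(1.2, 0.5, [table]))  # → 0.75 (rests on the table)
print(settle(1.2, 2.0, [table]))  # → 0.0  (off the edge, hits the floor)
```

A real system would of course do this with full 3D meshes and a physics engine, but the principle is the same: virtual objects must query the reconstructed real geometry, or the unified reality breaks.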

Google Glass wasn’t AR

Because no language existed back then, I referred to merging the real and the virtual as creating “spatially registered perceptual overlays.” Also, not very catchy. Fortunately, the phrase augmented reality was coined at Boeing soon after and quickly took off. I liked this language a lot.

After all, the phrase clearly describes the objective of the technology: To add virtual content to a real environment that is so naturally integrated, the two worlds merge together in your mind, becoming a single reality. And, for almost 20 years, that’s what the phrase augmented reality meant (while simpler devices that merely embellished or annotated your field of view were called head-up displays).     

Then in 2013 Google Glass happened. I liked that product and I believe it was ahead of its time. Unfortunately, the media incorrectly referred to it as augmented reality. It was not. It didn’t enable virtual content to be placed into the real world in a way that was immersive, spatially registered or interactive.  Instead, it was what we now call smart glasses, which is a useful technology and will become even more useful as AI becomes integrated into this class of product — but it wasn’t AR.

Google Glass (2013) — Wikimedia Commons

Still, the phrase augmented reality got watered down during the 2010s, not just because of Google Glass but because smartphone makers used the phrase to describe simple visual overlays — even though they were not immersive and lacked 3D registration with the real world or interactivity between the real and the virtual. This was before LiDAR and other 3D scanning technologies were added to phones, enabling spatial registration and interactivity.

I’m sure I wasn’t the only one who was frustrated by the language being watered down. I imagine the team at Microsoft working on the HoloLens, the first commercial product to enable true AR, was equally annoyed. In fact, I speculate that this is why Microsoft, upon launching their innovative HoloLens product, focused their marketing language on the phrase mixed reality.

The phrase had been around since 1993, but it was with the HoloLens launch, a product also ahead of its time, that MR really took off. It basically came to mean genuine AR.

And so, we now have two terms that describe different levels of augmenting a user’s surroundings with spatially registered virtual content. To help clarify the difference between AR, MR and VR, we can look at definitions that were published in 2022 by the U.S. Government Accountability Office (GAO). 

I have to assume the GAO cares about the differences between these phrases to clarify whether government contracts are paying for VR, AR or MR devices. To address this, the GAO put out a public document that featured this simple image to summarize the differences.

AR vs MR vs VR as described by the U.S. GAO (GAO-22-105541)

It’s worth noting that the difference between AR and MR has nothing to do with the hardware and everything to do with the experience. I say that because many people incorrectly believe that AR hardware refers to glasses with transparent screens you can peer through and MR hardware refers to headsets that use “passthrough cameras” to capture the real world and display it to the user on internal screens.

This is not true. I say that as someone who used passthrough cameras in the first system I built for the U.S. Air Force back in 1992 (the Virtual Fixtures platform). I made that design choice because it allowed me to register the 3D coordinate systems for the real and the virtual with higher precision, not because it changed the user experience. And besides, simple phone-based AR also uses cameras, so that is not the differentiator.

Apple Vision Pro is an MR headset

This leads me back to the Apple Vision Pro — it’s an MR headset, not because it uses passthrough cameras, but because it enables users to experience the real world merged with interactive virtual content that is spatially registered with a user’s natural surroundings with precision, creating one unified reality.  And because mixed reality is the superset technology, the Vision Pro can also provide simpler augmented reality experiences and fully simulated virtual reality experiences. 

And for all three (VR, AR and MR), I fully expect the Vision Pro to amaze consumers with immersive experiences of a quality that far exceeds any device that has ever been built at any price. It’s a true achievement.  

The Vision Pro also enables other capabilities that are entirely unique, including a spatial operating system (visionOS) that breaks exciting new ground by relying on a user’s gaze direction for input. In other words, I agree that the Vision Pro is not only an MR headset, but also a spatial computer and, frankly, a work of art. I also believe that spatial computing is a great overarching term for AR, MR and VR experiences.

My only recommendation is that Apple not be too heavy-handed in suppressing the common language of the field. After all, I am old enough to remember Apple’s biggest product launch, the famous “1984” Super Bowl ad that unveiled the Mac. It featured a runner throwing a massive hammer to shatter an Orwellian future where Big Brother controls society by replacing accepted language with “newspeak” and enforcing it with “thought police.”

From that perspective, I hope apps on the Apple Vision Pro will soon be able to reference VR, AR and MR experiences for users. That is, if 2+2 still equals 4.

Louis Rosenberg founded Immersion Corp and Unanimous AI and developed the first mixed reality system at Air Force Research Laboratory. His new book, Our Next Reality, is available from Hachette.

