February 27, 2021 - Many in the market see AR/mixed reality as the 4th wave of computing after mobile phones. Tech giants have been ploughing considerable investment into AR/mixed reality over the past few years in a bid to position themselves for the next platform shift. There is a healthy level of skepticism for this category given the myriad of hurdles that have to be overcome — in technology (hardware, optics, user interface), comfort and content. (The use of AR and mixed reality has become interchangeable even though there are nuanced differences, but for this article I’ll just refer to this space as AR.)
What are some of the key technical/hardware challenges that need to be overcome? The biggest challenges lie in optics, where the hurdles are greater in AR than in VR.
(1) Contraptions to project images using lenses have been around for decades. However, to focus an image, these systems require a distance between the lenses and the display; they cannot be easily shrunk down. Microsoft’s HoloLens uses diffractive optics where the lens is infused with millions of tiny structures to overcome this constraint. However, because light is bouncing off millions of structures, it can lead to poor color uniformity.
(2) Compared to VR, AR displays and optics are positioned at a corner (so as not to block your vision of the real world). The light from this display is then inefficiently routed to the eye, and there is tremendous loss of light in this process.
In addition, contrast is required to see superimposed images. If AR is to be used outdoors in the day, where bright light is also coming in from the external environment, very high-luminance displays will be needed (at least 1 million nits at the source). (A nit is a measure of luminance, i.e. the brightness of the light reaching your eye. A typical computer monitor is ~100–200 nits.)
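The arithmetic connecting "readable outdoors" to "a million nits at the source" can be sketched in a few lines. The efficiency figure below is an illustrative assumption, not a measured value for any specific device; it simply shows how a tiny optical efficiency inflates the required source luminance:

```python
# Back-of-envelope: source luminance needed to compete with daylight.
# Both numbers are illustrative assumptions, not specs of any real device.

target_at_eye_nits = 1_000   # assumed: roughly legible overlay against daylight
optical_efficiency = 0.001   # assumed: ~0.1% of display light survives the
                             # corner-mounted waveguide path to reach the eye

required_display_nits = target_at_eye_nits / optical_efficiency
print(f"Required source luminance: {required_display_nits:,.0f} nits")
# With these assumptions, the display itself must emit ~1,000,000 nits.
```

If the waveguide loses 99.9% of the light, even a modest 1,000-nit target at the eye forces a million-nit emitter, which is why conventional display technologies struggle here.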
(3) AR is true see-through; combining virtual images with the real world requires creating an opaque/semi-opaque pixel that is also capable of full transparency when needed. This is a lot more difficult than in VR, where we can simply occlude the surroundings and project the virtual images.
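A minimal compositing sketch makes the difference concrete (intensity values here are arbitrary normalized numbers, purely for illustration). In VR-style video passthrough, a pixel is standard alpha blending and can fully replace the world; an optical see-through combiner without dedicated occlusion hardware can only add light on top of the scene:

```python
# Why occlusion is hard in optical see-through (illustrative model only).

def passthrough_pixel(real, virtual, alpha):
    """Video passthrough (VR-style): alpha compositing, can fully occlude."""
    return alpha * virtual + (1 - alpha) * real

def optical_pixel(real, virtual):
    """Optical see-through without occlusion hardware: light is purely additive."""
    return real + virtual

real_light, virtual_light = 0.75, 0.5   # arbitrary normalized intensities

# Passthrough can show the virtual pixel alone (alpha = 1 hides the world):
print(passthrough_pixel(real_light, virtual_light, alpha=1.0))  # prints 0.5

# Optical see-through cannot: the bright real background always leaks through,
# washing out dark virtual content (you cannot add "black" light):
print(optical_pixel(real_light, virtual_light))  # prints 1.25
```

This is the core of the opaque-pixel problem: subtracting real-world light per pixel requires extra machinery (e.g. a transparency-switchable layer), not just a brighter display.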
(4) As pixels get smaller (in a bid for higher resolution), diffraction becomes more pronounced. This shows up as color separation in images.
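The effect follows from the grating equation (sin θ = λ/d at normal incidence, first order): as the pitch d of the pixel structure shrinks toward the wavelength of light, the diffraction angle grows and, crucially, differs per color. The pitch values below are illustrative, not taken from any real panel:

```python
import math

# First-order grating equation: sin(theta) = wavelength / pitch.
# Smaller pitch -> larger angle, and red diverges more than blue,
# which is what appears as color separation.

def diffraction_angle_deg(wavelength_nm, pitch_nm):
    """First-order diffraction angle in degrees, normal incidence."""
    return math.degrees(math.asin(wavelength_nm / pitch_nm))

for pitch in (10_000, 2_000):   # illustrative pixel pitches: 10 um vs 2 um
    blue = diffraction_angle_deg(450, pitch)   # blue ~450 nm
    red = diffraction_angle_deg(650, pitch)    # red ~650 nm
    print(f"pitch {pitch} nm: blue {blue:.2f} deg, red {red:.2f} deg, "
          f"spread {red - blue:.2f} deg")
```

Shrinking the pitch fivefold widens the red–blue angular spread severalfold, so the color fringes that were negligible at coarse resolutions become visible as pixels shrink.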
(5) There is an apparent tradeoff between field-of-view and resolution. In HoloLens 2, Microsoft used diffractive pupil replication (to enlarge the eye box so that the user continues to see the image even as their eye moves) and managed to achieve a FOV of 52 degrees, up from 34 degrees in the first HoloLens, while keeping resolution the same. However, as seen in the graphic, there was increased flicker and the real resolution of HoloLens 2 declined, leading to inferior image quality.
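The underlying tradeoff is easy to quantify in pixels per degree (PPD), the usual measure of angular sharpness (the human eye resolves roughly 60 PPD). With a fixed pixel count, spreading those pixels over a wider FOV dilutes sharpness. The pixel count below is an assumed round number for illustration, not the actual HoloLens panel spec; only the 34- and 52-degree FOV figures come from the article:

```python
# Fixed pixel budget spread over a wider field of view (illustrative numbers).

def pixels_per_degree(horizontal_pixels, fov_degrees):
    return horizontal_pixels / fov_degrees

display_px = 1440   # assumed horizontal pixel count, same panel in both cases

narrow = pixels_per_degree(display_px, 34)   # HoloLens 1 class FOV
wide = pixels_per_degree(display_px, 52)     # HoloLens 2 class FOV

print(f"34 deg FOV: {narrow:.1f} PPD")   # ~42.4 PPD
print(f"52 deg FOV: {wide:.1f} PPD")     # ~27.7 PPD
```

Holding the pixel count fixed, the 52-degree design gives up roughly a third of its angular resolution, which is why clever optical tricks to widen FOV tend to cost perceived image quality somewhere else.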
(6) For mass consumers to adopt this, battery life needs to be on the order of a day. The Microsoft HoloLens 2 only has a battery life of 2–3 hours right now, and users have to carry a separate battery pack. One way around this is to adopt a tethered solution, where the device shares battery and computing power with a laptop/PC. But that also constricts use cases and adoption.
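The battery math is simple but unforgiving: runtime in hours is battery energy (Wh) divided by average draw (W). The figures below are assumptions chosen to be in the ballpark of the 2–3 hour runtime cited above, not published specs:

```python
# Rough battery-life arithmetic (all numbers are illustrative assumptions).

def runtime_hours(battery_wh, avg_draw_w):
    return battery_wh / avg_draw_w

battery_wh = 16.0       # assumed: roughly a large-phone-class battery
headset_draw_w = 6.0    # assumed: displays + SoC + sensors + radios

print(f"{runtime_hours(battery_wh, headset_draw_w):.1f} h")   # 2.7 h

# To reach an all-day ~12 h on the same battery, average draw must fall to:
print(f"{battery_wh / 12:.2f} W")   # 1.33 W
```

Under these assumptions, hitting all-day battery life without a bigger (heavier, hotter) pack means cutting average power by roughly 4–5x, which is why efficiency shows up in nearly every other item on this list.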
(7) Comfort — what about eye relief (can bespectacled folks comfortably wear it too)? Does it have high optical efficiency, which will influence battery size, weight and heat?
(8) Safety is an additional issue. When real and virtual worlds blend, it can cause distractions and make travel more hazardous. This is why, when I see AR windshields being touted by the likes of Panasonic at this year’s CES (not a wearable and thus closer to being production-ready), it does make me wonder if the potential risks could outweigh the benefits. I mean, what’s to stop an adversary from inserting a faux deer and causing the driver to inadvertently swerve?
I’ve only scratched the surface of the litany of challenges AR has to overcome for mass consumption. Nonetheless, the likes of Facebook and Apple are in this to control what they see as potentially the next platform after mobile phones. For Facebook, it is like an arms race to avoid being beholden to yet another platform owner (which is what they are experiencing now with mobile ads via iOS). For Apple, they aren’t in this to sell 100m/200m units. If AR is the next platform technology after mobile phones, they want to be selling to a billion users, and they would want to cream off value not just off the top of the funnel (i.e. the App Store) but through Apple-native applications.

Therefore, the bar is set very high for wearables that will need to have mass consumer appeal. All the technical issues need to be resolved, coupled with comfort, long battery life, efficiency, safety, privacy, and killer applications that will incent a change from using a smartphone, which is already highly performant. (Currently there is no such killer app.) There needs to be computer vision to track what the user is focusing on, and potentially even audio AI to screen out noise and funnel the right sound back to the user. On top of this, it has got to look good and eventually be priced right. Magic Leap tried to single-handedly build a consumer AR ecosystem, raised billions of dollars and eventually failed. Hence all the healthy and necessary skepticism.
However, AR has been in play for enterprise users and for a long time in the military. Enterprise users don’t have to satisfy a whole bunch of tradeoffs at the same time, e.g. they can have wearables that prioritize comfort over looks. There are many use cases in healthcare, first responders, engineering, retail etc. that have taken off. Perhaps wisely so, Microsoft focused their AR efforts solely on the enterprise space. Rather than seeing AR as a goal in and of itself, startups/companies that utilize immersive technology to bring true value are the ones that will see the greatest ROI in the near term.
Some recent defense applications are listed below, since this space is of interest to us and because AR solutions continue to see strong demand even as defense ministries around the world trim their training budgets: