Apple Vision Pro First Impressions: More ‘Why’ than ‘Whoa’

It’s an astonishingly technologically advanced headset — nothing less, but also nothing more

Eshu Marneedi
27 min read · Jun 7, 2023
The Apple Vision Pro. Image: Apple

For nearly 5 years, we’ve known that Apple would make a mixed-reality AR/VR headset. We just didn’t know when it would come to fruition and what it would entail. The years-old questions were answered on June 5th, when Apple unveiled its first AR/VR headset at WWDC 2023. Apple calls it ‘Apple Vision Pro’, a $3,500 hunk of electronics the company bills as “game-changing” and “revolutionary.” I’m not as confident as Apple seems to be about this product. A few weeks ago, I wrote a column titled “I Don’t Want an Apple Reality Pro Headset” outlining some table-stakes points that I thought — and still think — are vital for a mixed-reality device to nail, along with some concerns about Apple’s implementation. Here’s some of what I had to say on April 25, and how well it aged:

The truth is, VR is dumb technology — or at least, I think so. Why are we so focused on having a product that is essentially just an iPad strapped to your face? What is the utility of having an edge-to-edge screen that, in reality, just takes you further away from actual reality and that makes it harder to connect with the real world? Am I missing something?

The UI of visionOS. Image: Apple

Apple somewhat answered these questions. The Vision Pro runs visionOS (or is it xrOS? Even Apple can’t get the names straight), essentially a fork of iPadOS that blends in with the real world. The truth is, visionOS is crazy impressive — I can’t deny that at all. Apple stated during its State of the Union address that each SwiftUI scene (all apps run in the SwiftUI lifecycle) runs as a window on Vision Pro. The windows can be resized and moved around independently by the user, similar to Stage Manager on iPad, but with a twist. Because reality is infinite, window space is effectively infinite, and windows can live in your real space, respecting depth. This is where the blend of AR and VR comes in — this is the augmentation of a virtual space. You don’t live in a virtual space; the virtual space lives in your space — crazy stuff.

The way you control these windows is the most impressive part of the headset, in my opinion — Vision Pro has no controllers whatsoever, instead relying wholly on cameras and optical sensors to track your hand movements. It’s akin to what Meta does, but Apple’s version seems to work much better and more reliably. For one, you don’t have to reach out and touch virtual objects with your hands the way you would on a Meta Quest — you can simply pinch and drag with your hands in your lap and the headset will pick up your movements. It’s really neat technology powered by “on-device machine learning,” and I must admit that it makes the experience seem much more seamless. That isn’t even the best part, though — the Vision Pro uses a ‘Focus Engine’ (it’s not actually called that; Apple just seems to refer to it as eye tracking) similar to tvOS that tracks what your eyes look at and highlights (selects) items for you. I’ll get more into the technical details of how this works later, but as the Vision Pro’s sensor array monitors your eye movement, it intelligently focuses the things you’re looking at. You simply look at an app to select it and pinch your fingers together to open it. Want to dictate in a text field? Just look at it and start talking. As soon as you stop looking at it, the field submits. It’s an absolutely mind-blowing demo, and it sets Apple apart from every other headset.
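For developers, the look-then-pinch model appears to come mostly for free: from what Apple has shown, the system draws the gaze highlight and delivers a pinch as an ordinary tap, so app code never touches raw eye data. Here is a minimal sketch of what that might look like in SwiftUI; the views, labels, and layout are placeholders of my own, not anything from an Apple demo:

```swift
import SwiftUI

// Minimal sketch: on visionOS, the system handles the gaze highlight and a
// pinch arrives as a plain tap; the app never sees raw eye-tracking data.
struct LauncherView: View {
    var body: some View {
        HStack(spacing: 24) {
            // Standard controls get the look-to-highlight treatment automatically.
            Button("Open Photos") {
                print("Pinched while looking at this button")
            }

            // A custom view opts in to the same gaze highlight explicitly.
            Image(systemName: "envelope")
                .padding()
                .glassBackgroundEffect() // visionOS material for floating UI
                .hoverEffect()           // highlight when the user looks at it
                .onTapGesture {          // fired by a pinch on visionOS
                    print("Pinched the envelope")
                }
        }
    }
}
```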

The way the user interface is designed is also shockingly delightful. Apps made for Vision Pro have a layer of translucency that lets you see behind the windows. This allows the experience to feel much more natural, almost as if you’re interacting with infinitely large monitors that live in your space. The concept and execution are both fancy, technologically advanced, and plausible. I have confidence that Apple will be able to pull this off successfully. The experience looks and feels much better than solid windows or objects would make it — it feels realistic. Even the way you authenticate yourself is well thought out and designed — Apple calls its iris-scanning feature Optic ID, and while the name will never grow on me, it’s well-implemented and seems weirdly futuristic. Authentication shouldn’t be a pain on a headset like this, and from the demos Apple has published, it seems fast, secure, and reliable. This is the best way to prove you’re you on a wearable product you strap to your face. Apple has done a lot to ensure that the experience feels futuristic and reliable — the work shows. That’s an important detail.

However groundbreaking or intuitive the software may be, though, it doesn’t answer my initial question: Why are we so focused on having a product that is essentially just an iPad strapped to your face? I mean, sure, there are no bounds to how large windows can be, and thus working on Vision Pro is roomier, but that’s pretty much it — there’s no headlining “whoa” moment here. This doesn’t enable anything that wasn’t possible before. It’s a very good implementation of ideas that have been out for years — nothing truly revolutionary. Most of the tech we saw was just assisting Apple’s vision (no pun intended) — it didn’t add new functionality that’s jaw-droppingly unique. This keynote made me ask the question “Why?” more times than it should’ve — why is Apple gambling so hard on this experience? What makes it so much better than just using a Mac? It doesn’t matter how good the software is when you can’t fix the major issues that come with VR. However much reality you bring in, it’s still far less real than actual reality — the things your eyes see when you don’t have a headset on.

FaceTime in VR. Image: Apple

I actually feel quite conflicted about Apple’s presentation. Throughout the keynote, Apple demoed people looking like dorks, moving their hands around to manipulate these fake windows in space, and it felt like a Meta presentation. It was so… un-Apple-like. I pointed out in my column from a month ago that the future of MR (that’s mixed reality, for the uninitiated) isn’t meetings — no matter how good of a job Apple does in that regard — and my point still stands. You attend meetings with the headset on, and your avatar is an AI-generated version of yourself synthesized from depth data and images the headset takes of you. When you initially set it up, the headset prompts you to hold it out like you would when taking a selfie on a phone so that it can scan your face using the sensor array (which I’ll get to). It then takes all of that detail and makes a fake-looking version of you that moves around, mirroring your facial expressions and movements using the same Memoji technology we already have. This is dumb — it’s over-engineering something that doesn’t need to exist in the first place. We can do meetings with traditional webcams already, and they feel as realistic as, if not more realistic than, the floating tiles we saw during the keynote. The faces don’t seem real because, well, they aren’t real. They’re fake. Fake faces don’t look real. Anyone who uses the headset to take FaceTime calls the way Apple’s pushing here is going to look weird. Is that really the experience Apple wants to provide for people who spend $3,500 on a next-generation computing platform?

What they’ve demoed here is essentially a metaverse — a bunch of floating, non-existent translucent tiles that nobody else can see. Apple uses the term “spatial computing” to describe this world they’ve created, and that really rubs me the wrong way — Apple is trying to sell people on this experience of being able to do things in a virtual world, and the truth is, that isn’t a very good experience. They’re not trying to be revolutionary; they’re merely trying to beat the competition. No matter how good the software experience of actually using the headset is, you can’t ‘break the laws of physics’, or in other words, do something that VR can’t. It’s a really good implementation of AR and VR software — I’d argue it’s the best, knowing how much it impressed the press when they tried it on — but it’s just that, an implementation of VR and AR. The WWDC 2023 keynote didn’t feel like an Apple keynote because Apple typically focuses on reality rather than on la-la-land demos meant to sell customers on unnecessary technology. Before Apple apologists come at me for this take, saying it’ll age terribly in 10 years, hear me out: the tech is cool, the software is cool, and the implementation is cool — but this isn’t the right product for it all to be in. The future is AR glasses or contact lenses; those products have real practicality. That’s ambient computing, not spatial computing. Spatial computing doesn’t matter, regardless of how good of an experience Apple can provide. Moving along:

Apple should never focus on having a product that simply shows off what they can do — they never do that; they never make something half-baked for the sake of saying it exists. If they plan on shipping this product like this, Apple enthusiasts who genuinely care about the Apple touch aren’t going to bother with it.

The Vision Pro battery. Image: AppleInsider

I retract these statements — Apple proved me wrong! Technologically speaking, the Apple Vision Pro headset is a marvel of engineering — except for the battery pack. We’ll get to that. The front panel of the headset is made entirely of glass and works as a lens for the cameras and sensors. Speaking of cameras and sensors, there are quite a few of them — 2 cameras and an array of gyroscopes, accelerometers, and depth sensors, along with flood illuminators and a LiDAR sensor for operating in low-light conditions. The infuriating thing is, we don’t know exactly what these sensors are — this is speculation from what we’ve seen so far on Apple’s website. Regardless, Apple insinuates that this array of sensors will provide a significantly more realistic experience when wearing the headset, and I take their word for it. The sensor array also lets you take what Apple calls “spatial photos and videos,” which take the depth data from the sensors and the video and… put them together. You can then play back spatial videos on the headset with that depth information retained, so the media feels… real? Look, I don’t see the appeal of this — it seems mostly like a gimmicky feature that was only implemented for the tech demo. In fact, it also seems weirdly dystopian, inhumane, and off-putting. I digress.

A person wearing a Vision Pro. Image: Apple

The glass also doubles as a display for the headset’s flagship feature, something Apple calls EyeSight. The problem with most VR headsets is that they make others in the room feel… uncomfortable. I mean, sure, you in the headset can see them, but they can’t really see you. We humans are visual creatures and use our eyes to communicate a lot of information, so eye contact is important. Apple has “solved” this problem by projecting your eyes to the outside world using internally mounted cameras and displaying them on the glass panel for everyone to see. The effect is… freaky — it doesn’t look quite natural, and it seems off-putting. I’m not sure how I feel about it. In fact, this is one of my biggest problems with the headset — you look like a giant a-hole when you’re wearing it. Sure, if you’re rich it’s trendy, but if you’re normal, it really doesn’t look fashionable. When you’re using an app, the display masks your eyes with colored orbs, depending on how immersed you are, to indicate to others that you aren’t available. Using the Digital Crown mounted on the top of Vision Pro, you choose how much of your surroundings you’d like to take in through the cameras. The more immersed you are, the more colored orbs other people see (shown in the picture above: full immersion). When you’re not immersed in anything, other people can see your eyes in all of their beauty. This seems like a wildly over-engineered solution to a simple problem and is the only part of this headset that screams Apple. This is the Apple-est thing I’ve seen during this entire WWDC keynote, and while I don’t think it’s particularly helpful at solving the problem, I do think it’s a cool tech demo that highlights the sheer capability of this headset. Meta could never do this.

Coming back to that eye-tracking focus-engine stuff I discussed earlier: the headset has what is essentially an apparatus of LEDs and infrared cameras that work together to track your eye movement. The system projects invisible light patterns onto your eyes and tracks their every move so they can act as a pointer for the UI. This is technologically advanced beyond belief and puts Meta and HTC and every other VR headset maker to shame. Your eyes act as the main pointer for this headset — it’s almost like Neuralink brain implants… but practical. Apple touts this as feeling like magic, and it’s believable knowing how much work was put into the tech here. Apple won in the tech department — this is an incredibly impressive demonstration of the technology in this headset. Apple did AR/VR the best.

The displays inside the headset are equally impressive. Apple says they’re both higher resolution than 4K (it didn’t give us a field-of-view measurement or an exact resolution, so take that for what you will) while being the size of a postage stamp. That analogy might be impressive, but here’s an even better way to frame it — Apple says 64 Vision Pro pixels fit in the space of a single iPhone pixel. That’s an incredibly sharp display that will without a doubt serve up the most immersive VR experience on the market. The panel itself is a custom micro-OLED display with “23 million combined pixels,” which is hefty. That’s not a cheap panel, and from what I’ve heard, it stacks up as one of the most high-resolution and immersive VR experiences ever, if not the most. I can’t help but repeat that point — Apple nailed the technology here, and this is the most technologically advanced headset on the market. I can’t overstate how much this blows me away.

Audio-wise, Apple touted Spatial Audio support a ton during the keynote. The headset, much to my delight, doesn’t require a pair of AirPods Pro to function — it instead comes with two built-in speakers that sit close enough to your ears that others can’t hear what you’re listening to at respectable volumes. Apple says it uses “audio ray tracing” (that sounds like a buzzword) to analyze the acoustic properties of the room you’re in and deliver the “best audio experience.” The thing is, Apple is betting on the audio experience of Vision Pro to add to the immersion. They’re encouraging developers to build sound effects in rather than using haptics (which aren’t supported on the headset), to try new audio-processing techniques, and to work with Spatial Audio more. A huge part of this headset rides on the audio experience, and I’m intrigued to see if and how Apple deals with it. Does the OS rely on audio cues? Who knows? But the hardware is there, and it’s impressive enough — especially knowing how good Apple is at making headphones.

Two chips power this experience — the M2 (the same M2 found in the MacBook Air and iPad Pro) and a new SoC, the R1. The M2 does the self-explanatory things — it handles the software. The more interesting SoC is the R1, which works together with the M2 and all of the crazy XR (I hate that term as much as Jason Snell does) hardware. The R1 processes input from the cameras, sensors, and microphones and streams that information to the displays, ensuring low latency. It’s impressive — Apple wants the least latency possible, and it seems to work, with touted numbers of only 12 milliseconds of lag. This is a crucial part of the headset, and I’m glad Apple put extra engineering time into it. Lag is what makes other AR/VR products feel unnatural — if there’s even the slightest bit of it, it’s noticeable. The sad thing is that we don’t know exactly what’s inside that fancy R1 chip — but I’m willing to bet it’s something in the vein of the H2: a special chip designed for a special product. Apple is fantastic at chip design, and both SoCs seem capable enough to run all of this tech. But that’s only half the story — the Vision Pro has active cooling, for some reason. The more you think about it, though, the active cooling makes sense — it’s not the SoC that gets hot, it’s the display and sensor array. The Vision Pro has a collection of Mac Pro-esque holes at the bottom to take air in and push exhaust out, and Apple promises that the system stays cool and quiet — two crucial properties of a VR headset. If that thing heats up while it’s on my face, I’ll want to take it off immediately. I hope Apple has solved this problem.

All of this impressive, substantial technology comes back to my main point — sure, it’s ridiculously powerful and revolutionary, but… it doesn’t do anything in the long run? You know what’s better than a super high-resolution display showing everyone your eyes? Just sitting in front of a computer screen so that people know you’re busy. You know what’s better than heinously over-engineered speakers that deliver Atmos-quality audio in real time? AirPods Pro. You know what’s better than over-engineering an SoC that delivers 12-millisecond response times to the outside world? Just not wearing a headset at all. This is trying to solve problems that simply wouldn’t exist if Apple had never entered this market at all. The hardware story of this headset is the most Apple-like of all the things we’ve seen this week — they’ve invented problems for themselves. It’s the best headset, but it’s not the best idea. It’s class-leading, but… why? That question keeps popping up: why? None of this is to undermine the technological achievements here — the Apple engineers who worked on this product have every right to be proud of their accomplishments. This product is simply unlike anything else on the market — it’s revolutionary… in headset land. In the grand scheme of things, it’s just a VR headset.

Speaking of “why,” let’s ask that question one more time in context: why does this product have a battery separate from the headset itself? That’s a rhetorical question; I know exactly why: weight. Reporters have already commented on the weight of Vision Pro, describing it as “heavier” and having “a premium build.” I understand why they didn’t put the battery inside the headset — there’s already way too much going on in there to fit a large enough cell. So, what did they do to solve this issue? Pull a Jony Ive and enclose the battery in a chic aluminum chassis, then tout the headset as having ‘all-day battery life’ while plugged in. Yeah, I’d hope it lasts all day when it’s plugged in. Absolutely revolutionary, innit? I kid, but only mostly — this headset costs $3,500, and they couldn’t even take the engineering time to figure out how to stuff a battery into the thing? And it’s not even like the battery is any good — it only powers the Vision Pro for 2 hours, not even long enough for a whole movie. The battery situation is the only thing (hardware-wise — I’ll get into the software in just a moment) that makes me feel this headset was rushed. Everything else is meticulously well-designed — but the battery seems like an afterthought. Here’s hoping they sell larger battery capacities as add-ons or something — that’s table-stakes stuff. I continue:

Apple can’t and shouldn’t count on 3rd party app developers to make a value proposition for this headset… I’m really worried about what Apple’s going to do here — how will the home screen be unique? How will launching apps and games work? How would widgets be displayed? These are all important questions that nobody’s really asking… This isn’t an “ambient” device like iPhone, Apple TV, or Apple Watch — it’s a device used with intention. When it’s picked up, it’s meant to be used, not just worn, providing helpful information throughout the day. How is Apple going to use this device to its maximum potential and what value is it going to deliver?

The visionOS Xcode simulator. Image: Apple

This is, in my opinion, the biggest issue with the Apple Vision Pro headset — the developer software story. The truth is, there isn’t one — or at least, not yet. As I said during the visionOS portion of these first impressions, the UI itself is massively impressive. It’s beyond anything any other MR headset rocks. Hats off to Apple’s Human Interface team — they killed it. The software coupled with the hardware makes for a phenomenally mind-blowing headset experience paralleled by no other device on the market. In Apple’s words, it truly is the most technologically advanced piece of consumer technology ever made. But, unlike the Macintosh in 1984, the iPod in 2001, the iPhone in 2007, or the MacBook Air in 2008, Apple Vision Pro has no objective. It’s not swinging for a milestone; it doesn’t leave users with a compelling reason to spend $3,500 on a piece of technology. For many, what was demoed on Monday is nothing but a meme. For others, it’s a slice of what the future of consumer technology holds. But outside of that vocal 1%, the Vision Pro doesn’t mean anything to people.

When the Macintosh was unveiled, its high price was excused because it re-introduced and popularized the graphical user interface and, more importantly, the mouse. Both of these inventions changed the world of computing forever. Macintosh System Software introduced features that simply weren’t possible with any other technology that existed at the time. These inventions opened up computing — they established the idea of pointing and clicking on things, an idea that would be replicated by touch and now hand gestures years later. There was a use for these technologies — people didn’t have to play guessing games to figure out what the big deal was. When Steve Jobs demoed the iPhone in 2007, he scrolled through a list by just swiping his finger on the screen. The crowd’s reaction to that demo still gives me goosebumps to this day — that was groundbreaking. People immediately knew how it would be used, how it would influence their lives, and how it was going to change consumer technology. They imagined use cases for that one gesture — flicking through emails, scrolling through web pages, zooming in on a map, and so on. Things that were previously never thought possible on a mobile device. Whenever Apple has invented or popularized something, they have always given us an innovative use case for it that nobody else had thought of. When Steve Jobs unveiled the iPod, the selling point that eventually intrigued the masses wasn’t that the Click Wheel was a new, innovative method of input that nobody had invented before, but rather that people could fit 1,000 of their favorite songs in their pocket. When Jobs pulled the MacBook Air out of a manila envelope in 2008, his premise wasn’t that this was the thinnest laptop in the world — it was that people wouldn’t have to suffer with abysmally small batteries and abominable specs if they wanted a thin-and-light machine they could take anywhere. The Apple of Steve Jobs wasn’t focused on showing the world how capable Apple was — sure, they touted themselves as being the best at everything all the time, but that wasn’t the reason they existed. They wanted to make products that changed people’s lives, products people didn’t even realize they wanted, and products that invented things nobody had ever thought about. Apple delivered experiences.

Apple’s WWDC 2023 ‘One More Thing’ lacks this idea. Apple was too focused on showing the world that it made the best, most game-changing headset ever, rather than showing the world why anyone needs a VR headset in their life. Apple didn’t show us a single thing that screamed “this is revolutionary” to the masses. Nothing felt like it would change my life. We nerds get so tied up in how this is the best, most advanced headset in the world; how Meta is doomed because Apple has made a categorically superior product that feels like magic when you’re using it. All of this is true, but the Vision Pro doesn’t have a killer premise. Apple is betting almost entirely on developer support for this headset. They’ve put tons of work into the UI and the APIs for developers to use, but they’ve done no work making a use case for the product. There’s no killer app or idea or implementation; it’s just… a headset with a bunch of APIs. It’s up to developers to decide what to do with it. This is evident in the fact that Apple got Disney’s Bob Iger on stage to demo Disney+ on the headset and confirm that it’ll be available on day 1. This is a departure from the old Apple’s insistence on making products as if there were no competition. This move (the Bob Iger one, that is) ticks me off for one main reason: it shows that Apple is intent on proving to the world that this is the best headset, and nothing else. That part is completely true — press and developers have described the headset as feeling “magical” and “inventive,” and I don’t doubt them. Once again, Apple has nailed the software UI and hardware capabilities of this product — I can’t stress that enough. But they don’t have a reason for working this hard on Apple Vision Pro. And that’s a sad reality — the fact that Apple is betting on developers to make the killer apps and experiences people will want to dish out $3,500 for. If Apple itself is showing a clear lack of enthusiasm for this product at its own Worldwide Developers Conference, I’m not willing to say that developers will be enthusiastic about it either. No matter how cool and magical this headset is, it doesn’t have a point in existing — yet. It doesn’t deliver value, it doesn’t feel like a must-have, and it doesn’t do anything other Apple products don’t do — yet.

The different types of objects developers can make for visionOS. Image: Apple

Apple has thought out the entire lifecycle of building apps for Vision Pro using SwiftUI. After all, this is the most important part of the story — it’s why I presume they unveiled the Vision Pro 10 months in advance of its anticipated release. They want developers to start thinking of novel, innovative ideas so that consumers will be interested in the headset. Whether that comes to fruition is… arguable, but regardless, they’ve made the developer experience slick and easy to understand. Apple says there are 3 main types of ‘things’ developers can create for Vision Pro, or more specifically, visionOS — windows, volumes, and spaces. I’ll try to explain each of these in non-developer terms (with a rough sketch of how they’re declared below), but I’m planning a write-up on bringing my citation creation app, Citations, to the headset later this summer, once the SDK drops later this month (more on that in a bit) — stay tuned.
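Here’s that sketch: a hypothetical visionOS app declaring one of each scene type. The identifiers and placeholder views are mine, not Apple’s, but the scene types and modifiers are the ones Apple walked through in its sessions.

```swift
import SwiftUI

// Sketch: one visionOS app declaring the three kinds of scenes Apple
// described: a window, a volume, and a space. Ids and views are placeholders.
@main
struct ExampleVisionApp: App {
    var body: some Scene {
        // 1. A window: the familiar 2D pane users can place and resize anywhere.
        WindowGroup(id: "main") {
            Text("Hello, visionOS")
        }

        // 2. A volume: a bounded 3D region for objects users can walk around.
        WindowGroup(id: "globe") {
            Text("A 3D globe would render here")
        }
        .windowStyle(.volumetric)

        // 3. A space: the app takes over the surroundings to whatever degree
        //    the user dials in with the Digital Crown.
        ImmersiveSpace(id: "planetarium") {
            Text("Immersive content would render here")
        }
    }
}
```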

Windows, like the name suggests, are panes that users can place and resize wherever and however they’d like. They work like Stage Manager tiles — you put them where you want, move them how you want, and they stay there for you to work with. They begin as a 2D experience enhanced by interactivity — a concept Apple discussed a ton during its State of the Union address and in sessions. Controls light up and make noises when looked at and interacted with. The controls are the same between platforms — pickers, drop-downs, radio buttons, regular buttons, text fields, etc. are all the same and should compile seamlessly between platforms without any code changes. If you use Apple’s native SDK, things should work as expected. Additionally, Apple has introduced a new type of control specific to visionOS called an “ornament.” Ornaments are little bits of UI that sit outside the main window — tabs, for example. Tab bars, as opposed to lying at the bottom of a window, sit on the side of a window, floating in mid-air. When a user looks at a tab bar, the icons expand to show both the icon and the title. The user can then look at the item they want and select it with a pinch. When they stop looking at the tab bar, it recedes. Developers can also make their own ornaments and place them wherever they’d like with a simple modifier (see the sketch below). This is a big UI change, and I’m excited to see how developers take advantage of it. The idea is that you won’t have to stuff everything into a single window — things can live where they’d like to, because once again, reality is infinite. Developers can also use RealityKit, ARKit, and SceneKit in tandem with SwiftUI and the new Reality Composer Pro to add depth to their windows. Take an astronomy app, for example: while all the text is 2D and lies within a window, a developer can animate a 3D replica of the Earth that pops out of the window and that a user can manipulate independently of the window. It’s neat. Windows will undoubtedly be how most apps get started on visionOS — it just makes sense. Calendar apps, notes apps, messaging apps, etc. don’t need a 3D environment to work in, and windows seem well-optimized for the OS. Apps not optimized for visionOS run in windows by default — this is the primary way apps will look and feel on Vision Pro.
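For what it’s worth, the ornament API is a single SwiftUI modifier. Here’s a hedged sketch of attaching a custom ornament to a window; the playback controls, the window content, and the bottom-of-the-scene anchor are placeholders I chose, not anything from an Apple demo:

```swift
import SwiftUI

// Sketch: a plain window with a custom ornament floating just outside its
// bottom edge. The content and controls are placeholders.
struct PlayerWindow: View {
    var body: some View {
        Text("Now Playing")
            .font(.largeTitle)
            .padding(80)
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack(spacing: 16) {
                    Button("Rewind") { }
                    Button("Play") { }
                    Button("Skip") { }
                }
                .padding()
                .glassBackgroundEffect() // ornaments float, so they get their own glass
            }
    }
}
```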

Volumes are where developers can start to make real 3D experiences. Volumes are scenes, similar to windows, in which a developer can place a 3D object that sits wherever the user wants. Take that 3D replica of the Earth, for example. A developer can launch a volume of the Earth when a user taps a button in the app’s main window. The window then moves to the side, and a larger Earth replica shows up in the middle of the room. The user can move it around, resize it, and interact with it. It’s essentially a part of an app that can do whatever you’d like in 3D — think AR. Volumes are created with an app called Reality Composer Pro and a framework called RealityKit, but Apple also touted Unity support. I don’t reckon volumes will take off in the long run — they might be cool for short-lived experiences and other gimmicks, but they aren’t super useful for most applications.
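To make that concrete, here’s a hedged sketch of the Earth example under my own assumptions: a volumetric window whose content is a RealityKit model, opened from a button in the main window. The “Earth” asset name and the window identifier are placeholders.

```swift
import SwiftUI
import RealityKit

// Sketch: the content of a volume — a 3D model users can walk around.
// Assumes the app declares WindowGroup(id: "earthVolume") { EarthVolume() }
// with .windowStyle(.volumetric), as in the earlier scene-types sketch.
struct EarthVolume: View {
    var body: some View {
        Model3D(named: "Earth") { model in   // "Earth" is a placeholder asset
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView()
        }
    }
}

// A button in the app's regular 2D window that opens the volume.
struct EarthButton: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show the Earth") {
            openWindow(id: "earthVolume")
        }
    }
}
```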

The most VR-ish of the ‘things’ (I don’t know what to call these; Apple just calls this system a “spectrum of immersion”) is called a space. By default, windows and volumes lie in the default, shared space (which the OS also lies within), but developers can create their own spaces and position objects and scenes within them however they please. This is what any other company would call a VR environment — it’s what you’d expect from VR. A user chooses how much immersion they’d like using the Digital Crown at the top of the Vision Pro, and based on that, an app will either show or hide content. Apple is betting on this being the most popular way to build “spatial computing” experiences for the headset; they’ve put a lot of effort into the APIs for building spaces with RealityKit. Games and other wildly immersive experiences will be built entirely out of spaces. Here’s the kicker, though — spaces are just SwiftUI scenes at the end of the day, so developers will have to build a window that launches a space. An app can’t live entirely within a space, so even the launch screen of an app shows up as a window. This is different from what Meta and other VR makers have done — you don’t just open an app and get transported into another world. You have to choose to open a new world, and choose how much immersion you want, each time, on a per-app basis. This is where the mixed-reality part of the headset comes into play — Apple really doesn’t want this to feel like a VR headset. They haven’t even called it one. I like this way of thinking, and I hope developers respect it. Spaces can have controls in the form of ornaments, also built in SwiftUI. Looking down at an ornament within a space reveals controls like tilt, pan, and so on — developers can make custom controls depending on their app, of course. Spaces can also take maximum advantage of Atmos sound design — Apple has a whole session encouraging developers to use Atmos to make their apps more immersive. This is the VR part of this MR headset, and it’s very obvious. It’s also the part developers need to work on the most, because right now, spaces simply seem like a portal for games. There’s nothing truly revolutionary about them.
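Here’s a hedged sketch of that shape: a window whose button opens an immersive space, with the immersion level left to the Digital Crown. The identifiers and the placeholder sphere are my own stand-ins, not anything Apple showed.

```swift
import SwiftUI
import RealityKit

// Sketch: a space has to be opened from a window; an app can't live
// entirely inside one. Ids and the placeholder sphere are stand-ins.
@main
struct TheaterApp: App {
    @State private var immersion: ImmersionStyle = .mixed

    var body: some Scene {
        // The launch window that opens the space.
        WindowGroup(id: "launcher") {
            TheaterLauncher()
        }

        ImmersiveSpace(id: "theater") {
            RealityView { content in
                // Placeholder content: a sphere floating a couple of meters away.
                let sphere = ModelEntity(
                    mesh: .generateSphere(radius: 0.3),
                    materials: [SimpleMaterial(color: .blue, isMetallic: false)]
                )
                sphere.position = [0, 1.5, -2]   // meters; negative z is in front of the user
                content.add(sphere)
            }
        }
        // The Digital Crown moves the user between passthrough and full immersion.
        .immersionStyle(selection: $immersion, in: .mixed, .full)
    }
}

struct TheaterLauncher: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter the theater") {
            Task { _ = await openImmersiveSpace(id: "theater") }
        }
    }
}
```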

All apps, regardless of whether they’re built with windows, volumes, or spaces, retain the visionOS look and feel, just like Apple’s own apps do — unless, of course, they’re simply iPhone/iPad apps, which will look like iPhone/iPad apps without any of the interactivity or other design elements that make visionOS apps special. All of these developer APIs feel well thought out, but they’re not ready yet. Developers can’t try their apps out and get to work building for visionOS until later this month, when the SDK launches. That’s a huge bummer, and it comes back around to my original point — there isn’t a developer story for this headset at all. The tech is there, the UI is there, the frameworks are there. But there aren’t any ideas, there’s no SDK, there’s nothing important. Apple hasn’t told developers what they should build to enrich people’s lives, hasn’t given them the tools to play with the headset, and hasn’t told them anything about the limitations they might run into when developing for it. We’re going in blind here, and that isn’t a good look for Apple. They shouldn’t have announced this product so far in advance — it’s doing nobody any good. In fact, this launch was so rushed that they flat-out call visionOS “xrOS” in all of the sessions because they hadn’t settled on a name when the sessions were recorded. Sure, that’s hilarious, but it also says a lot. Apple has no plan here — they’re betting solely on developers’ ambition and creativity to drive sales of this product and accelerate mass-market appeal. There is no killer experience for this headset — developers will have to make it, and that’s a tough sell. Until then, it’s going to be memed for days. The software is there; the ambition and ideas aren’t. Apple itself isn’t using the crazy technology it has developed to its fullest potential — how will developers? Why should developers? This headset costs $3,500 — sure, it’s impressive technology, but normal people aren’t buying this. It’s just other developers and rich enthusiasts. There isn’t an incentive to make a killer app for Vision Pro. I’m not seeing the vision.

I ended my thoughts in April with this sentence:

I’ll remain skeptical until Apple shows off how they think people are going to use this product and who they’ve developed it for.

Someone wearing a Vision Pro. Image: Apple

I’m still skeptical. Apple hasn’t developed this product for anyone yet — it’s betting on developers to do that. It all comes back to this central point: Apple has built the most beautiful, advanced, capable, magical, and mind-blowing mixed-reality headset anyone has ever built. They won — they made the best product. There’s not a single doubt in my mind that as soon as I put on this headset, I will be astonished by its capabilities and by what a marvel of engineering it is. I don’t want to undermine the point that Apple has done something truly incredible here — they’ve made a revolutionary, ground-breaking, innovative product. A first-of-its-kind product that’s leagues ahead of what any other company could dream of doing. The hardware is mind-numbingly impressive; the software is well-designed, intentional, and innovative; and the developer APIs are well thought out and well conceived. This is, without a doubt, the next generation of Apple platforms, and we all have a right to be excited about owning this piece of history and the future. They’ve very evidently put in the work to make this product insane, and I commend them for it. This has been one of the most exciting WWDCs in years — we finally get to see the future of Apple as a company, and the future of the consumer technology sector in its entirety.

But there’s a large caveat. One that can’t and shouldn’t be ignored. The opening keynote of WWDC 2023 is that iPhone keynote all over again. It’s the same ‘One More Thing’, and it’s the same unveiling of Apple’s next-generation computing experience. This is Apple’s next stand-alone product — it’s not an Apple Watch, it’s not an iPad. This is akin to the iPod, the iPhone, and the Macintosh. But the keynote sure didn’t feel like it. Instead of showing the entire world why it should care about a VR headset and why $3,500 is the price you should pay for this magical experience, Apple focused solely on the fact that it did this better than everyone else. Apple’s focus was that this is Apple’s version of everyone else’s inferior products. That’s not what consumers care about, that’s not what developers care about, and that’s not what made Apple famous. Apple is known for making things more popular — there’s a reason for that. Apple tells people why they should care. They show people why what they’ve built is groundbreaking. 1,000 songs in your pocket. An iPod, a phone, an internet communicator. The computer for the bemused, confused, and intimidated. And now… “Welcome to the era of spatial computing.” It just doesn’t sound right. Apple failed to make VR mainstream this week, and that’s disappointing for all of us. For now, the $3,500 headset is simply a meme that will haunt us all for the next n months. Apple Vision Pro is the best, most advanced piece of consumer technology you’ll be able to buy for $3,500. But it isn’t for the faint of heart, it isn’t for consumers, and it isn’t anything more than just a crazy-powerful VR headset.

You all know I’m going to buy one just to own a slice of the future.
