Apple unveiled several new augmented reality tools and technologies for software vendors during its annual WWDC conference this week. These technologies could be crucial if Apple actually releases an augmented reality headset or glasses in the coming years.
Apple has never confirmed plans to release augmented reality hardware, but it could reportedly announce a headset as soon as this year. Facebook, Snap and Microsoft are also working on devices that can understand the world around them and display information in front of the user.
To succeed with an augmented reality device, Apple must come up with strong reasons for people to use it – and it comes down to useful software, just as apps like Maps, Mail, YouTube and the mobile Safari browser helped spur the adoption of the original iPhone. Getting developers on board to build augmented reality software now increases the chance that one or more “killer apps” will be available at launch.
Apple did not spend much time on augmented reality during Monday’s WWDC keynote, but several updates announced during the conference’s more technical sessions show that it remains an important long-term initiative for Apple. CEO Tim Cook has said that AR is the “next big thing.”
“From a high level, this year, and perhaps even next year’s WWDC event, will be a calm before an Apple innovation storm,” Loup Ventures founder and longtime Apple analyst Gene Munster wrote in an email this week. “At the moment, Apple’s intense ongoing development is related to new product categories around augmented reality and portability.”
What Apple announced
During this week’s conference, Apple briefed developers on rapidly improving tools that can create 3D models, use the device’s camera to understand hand gestures and body poses, and add quick AR experiences to the web, as well as an Apple-backed standard for 3D content and a new audio technology that works like surround sound for music and other audio.
Here are some of the AR announcements Apple made this week, and how they pave the way for the company’s bigger ambitions:
Object Capture. Apple introduced application programming interfaces, or software tools, that allow apps to create 3D models. 3D models are crucial for AR, because they are what the software places in the real world. If an app does not have a detailed 3D file for a shoe, it cannot use Apple’s computer vision software to place that shoe on a table.
Object Capture is not an app. Instead, it is a technology that allows a camera, such as the iPhone’s, to take multiple photos of an object and then stitch them together into a 3D model that can be used in software within minutes. Previously, detailed object scanning required precise and expensive camera rigs.
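The stitching step is exposed to developers through RealityKit’s PhotogrammetrySession API on macOS. A minimal sketch, assuming a folder of overlapping photos of the object already exists (the file paths here are hypothetical):

```swift
import Foundation
import RealityKit

// Folder of overlapping photos of the object (hypothetical path).
let input = URL(fileURLWithPath: "/tmp/ShoePhotos", isDirectory: true)
// Destination for the generated 3D model.
let output = URL(fileURLWithPath: "/tmp/shoe.usdz")

// A PhotogrammetrySession stitches the photos into a textured mesh.
let session = try PhotogrammetrySession(input: input)

Task {
    // Stream progress and results as the model is reconstructed.
    for try await update in session.outputs {
        switch update {
        case .processingComplete:
            print("Model written to \(output.path)")
        case .requestProgress(_, let fraction):
            print("Progress: \(fraction)")
        default:
            break
        }
    }
}

// Ask for a medium-detail .usdz file, suitable for AR Quick Look.
try session.process(requests: [.modelFile(url: output, detail: .medium)])
```

The `detail` parameter trades fidelity against file size; `.medium` is a reasonable starting point for models meant to be viewed in AR on a phone.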
Eventually, third-party developers such as Unity, maker of a leading AR engine, will include it in their software. For now, it will probably be used most heavily in e-commerce.
RealityKit 2. Object Capture is just one part of a significant update to RealityKit, a set of software tools for creating AR experiences. Beyond Object Capture, RealityKit 2 includes many small enhancements to make app makers’ lives easier, including improved rendering options, a way to organize photos and other assets, and new tools for building player-controlled characters in augmented reality scenes.
Apple’s new city navigation feature in Apple Maps.
ARKit 5. ARKit is another set of software tools for creating AR experiences, but it is more focused on figuring out where digital objects can be placed in the real world. This is the fifth major version of the software since it first came out in 2017.
This year, it includes something called “location anchors,” which lets software developers program AR experiences tied to specific map locations in London, New York, Los Angeles, San Francisco and a few other US cities. In a developer video session, Apple said it uses the tool to create AR direction overlays in Apple Maps – a potentially useful scenario for a head-mounted AR device.
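Location anchors surface in ARKit as the ARGeoAnchor class. A minimal sketch of how an app might pin AR content to a street corner, assuming it runs on a supported device in one of the supported cities (the coordinates are an arbitrary San Francisco example):

```swift
import ARKit
import CoreLocation

// Geographic tracking only works in certain cities; check availability first.
ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else { return }

    let session = ARSession()
    session.run(ARGeoTrackingConfiguration())

    // Pin an AR anchor to a real-world latitude/longitude.
    let coordinate = CLLocationCoordinate2D(latitude: 37.7749,
                                            longitude: -122.4194)
    let anchor = ARGeoAnchor(coordinate: coordinate)
    session.add(anchor: anchor)
}
```

Once the anchor is tracked, the app can attach virtual content – such as a floating direction arrow – to that fixed geographic point.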
AI to understand hands, people and faces. While Apple’s machine learning and artificial intelligence tools are not directly part of augmented reality, they represent capabilities that will be important for a computer interface that works in 3D space. Apple’s Vision framework can be called by apps to detect people and faces through the iPhone’s camera. Apple’s computer vision software can now also identify text inside images, such as the words on signs, and search for objects inside images – such as a dog or a friend.
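The text-in-images capability is exposed through the Vision framework’s VNRecognizeTextRequest. A minimal sketch, assuming the app already has a CGImage from the camera or photo library:

```swift
import Vision

// Find readable text (e.g. the words on a sign) in a photo.
func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // Each observation carries ranked candidate transcriptions;
    // take the top candidate for each piece of detected text.
    return (request.results ?? []).compactMap {
        $0.topCandidates(1).first?.string
    }
}
```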
Combined with Apple’s other tools, these capabilities could be used to apply effects similar to Snap’s filters. One session at this year’s WWDC even covers how the software can identify how a hand is posed or moving, laying the groundwork for advanced hand gestures – a big part of the interface on current AR headsets such as Microsoft’s HoloLens.
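Hand-pose detection comes from Vision’s VNDetectHumanHandPoseRequest. A minimal sketch of one gesture it enables – treating two nearly touching fingertips as a “pinch” – where the confidence and distance thresholds are illustrative assumptions, not Apple-recommended values:

```swift
import Foundation
import Vision

// Detect hand joints in a single camera frame (the CGImage is assumed
// to come from the app's capture pipeline).
func detectPinch(in cgImage: CGImage) throws -> Bool {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    guard let hand = request.results?.first else { return false }

    // Vision reports each joint as a normalized point with a confidence.
    let thumb = try hand.recognizedPoint(.thumbTip)
    let index = try hand.recognizedPoint(.indexTip)
    guard thumb.confidence > 0.3, index.confidence > 0.3 else { return false }

    // Treat fingertips closer than a small threshold as a "pinch".
    let distance = hypot(thumb.location.x - index.location.x,
                         thumb.location.y - index.location.y)
    return distance < 0.05
}
```

Run per frame, a check like this is the kind of building block a hands-free headset interface would need in place of taps and swipes.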