Complete Abridged Guide to the History, Present, and Future of Augmented Reality - Part 2

By: Dan Jira | Published: 03/12/2021

Creating immersive AR experiences requires a solid set of tools, and luckily for us, AR development has improved with each tech generation. AR is more accessible now than it has ever been, and there are quite a few tools available to anyone who wants to create an AR experience for themselves.

App-based AR development for mobile devices saw one of its first public introductions with Project Tango back in 2014. Nowadays, mobile AR is hard to imagine living without. From Snapchat to Amazon, almost every mobile phone in use today can take advantage of this once-futuristic technology. Today we will feature two of the biggest platforms in the business and take a brief look at their history and capabilities.

Apple's ARKit
When Apple first announced that it was working on a platform that would let developers create augmented reality experiences for iOS devices, it changed the world. Well, okay, it was not that influential… yet; however, I can almost guarantee it had a few people at Google worried. Apple had created a working augmented reality development platform for everyone before Google had a chance to release its own version, even though Google had started working on mobile augmented reality three years earlier with Project Tango. With this announcement, Apple gave its developers a new dimension… or, more accurately, half a dimension. What ARKit did was take the standard 2D and 3D AR experience and let users manipulate an object's position through something Apple called “world tracking”. This was huge, and it meant a couple of different things.

Through world tracking, Apple allowed developers, and users, to take an object in AR and ‘pin’ it into space. For developers, this meant they could create ‘stationary’ objects in AR instead of just placing them on a flat or 3D plane in front of the camera. For users, this meant you could preview a piece of furniture in your house at the location and size that piece of furniture would actually occupy. For Google, however, this meant something entirely different: Apple had figured out a way to accurately measure distance with a single camera. This was huge. One of Project Tango’s biggest holdups was that it needed multiple cameras to accurately measure distance; doing it with a single camera is far harder. How hard? It is a task even the human brain has difficulty with. Doing it correctly and accurately was, and still is, an impressive feat.
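As a rough illustration of that pinning behavior, the sketch below shows how an ARKit app might anchor a point one metre in front of the camera. The function name and the `sceneView` parameter are assumptions for the example; the ARKit calls themselves (`ARWorldTrackingConfiguration`, `ARAnchor`, `session.add(anchor:)`) are the framework's real API.

```swift
import ARKit

// Minimal sketch: pin a virtual object one metre in front of the
// camera using ARKit world tracking. Assumes it is called from a
// view controller that owns an ARSCNView.
func pinObjectInFront(of sceneView: ARSCNView) {
    // Run a world-tracking session so ARKit can hold the anchor
    // fixed in real-world space as the device moves.
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal]
    sceneView.session.run(configuration)

    guard let frame = sceneView.session.currentFrame else { return }

    // Build a transform 1 m ahead of the current camera pose.
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -1.0
    let transform = frame.camera.transform * translation

    // Adding an ARAnchor 'pins' that pose; ARKit refines it as
    // its understanding of the world improves.
    let anchor = ARAnchor(transform: transform)
    sceneView.session.add(anchor: anchor)
}
```

Rendering content at the anchor is then handled in the view's delegate callbacks, which are omitted here for brevity.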

A few years on from the initial release, Apple has just recently released ARKit 4, which takes the original platform and adds an extended feature suite that lets developers create ever more interesting and immersive experiences. Highlights from Apple’s developer page include a brand-new depth API that works with some of Apple’s newer premium products, such as the iPhone 12 Pro and Pro Max (the full list is available on Apple’s website). Location Anchors have also been introduced, which make “world tracking” true to its name: developers can take 3D objects and “pin” them not just in a room on screen, but at a location anywhere in the world. Using specific coordinates for latitude, longitude, and altitude, this technology lets developers not only place objects within the world, but also treat real-world objects, such as buildings and monuments, as AR targets. A number of other features are new to ARKit 4, and the full list is available on Apple’s website.
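A minimal sketch of what a location anchor might look like in code, assuming a device and region where ARKit 4 geo tracking is available; the coordinate values and function name here are placeholders for illustration, not values from the article:

```swift
import ARKit
import CoreLocation

// Sketch: pin AR content at a real-world latitude, longitude,
// and altitude using an ARKit 4 location anchor.
func addLocationAnchor(to session: ARSession) {
    // Geo tracking only works on certain devices and in certain
    // regions, so check support before running the configuration.
    guard ARGeoTrackingConfiguration.isSupported else { return }
    session.run(ARGeoTrackingConfiguration())

    // Placeholder coordinates; altitude is in metres.
    let coordinate = CLLocationCoordinate2D(latitude: 37.3349,
                                            longitude: -122.0090)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate, altitude: 10.0)
    session.add(anchor: geoAnchor)
}
```

ARKit can also derive the altitude itself if the coordinate-only `ARGeoAnchor(coordinate:)` initializer is used instead.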

For those interested in trying out ARKit for themselves, just know that this option comes at a cost. Specifically, two costs: new developers looking to create AR experiences with ARKit need an Apple Developer Program membership, which will set them back $99 per year, and they need Apple hardware (a Mac running Xcode and an ARKit-compatible iOS device) to build and test on. Once those are covered, however, the ARKit platform itself is free to use.

Google's ARCore
First launched in March of 2018, ARCore was Google’s answer to Apple’s ARKit, which had been introduced less than a year earlier in June of 2017. However, this was not Google’s first foray into mobile augmented reality. Back in February of 2014, Google launched Project Tango through its in-house skunkworks team, the Advanced Technology and Projects group (ATAP). Project Tango was essentially a prototype smartphone with 3D computer vision technology similar to Microsoft’s Xbox Kinect. In fact, the similarities between the two companies’ hardware solutions (dual cameras a few centimeters apart to allow for 3D “stereo vision”) are more than surface level: Johnny Lee, the Technical Program Lead at ATAP during the Project Tango effort, was a former Microsoft employee and a core contributor to Kinect technology before bringing his skills to Google. Although only 200 units of the prototype smartphone were made available to developers, and the device never reached full production status, the ideas and software created for the project heavily influenced what would become ARCore almost four years later.

ARCore focuses more on software than the original Project Tango did, mostly thanks to newer developments in hardware. For example, Project Tango’s prototype smartphone required a bespoke co-processor just to run the computer vision software, whereas modern smartphone processors are powerful enough to handle it without extra hardware. This benefit alone allows ARCore to operate on most Android devices running Android 7.0 (Nougat) and newer, as well as on supported devices running iOS 11.0 and newer. ARCore’s availability on both iOS and Android makes it an ideal choice for developers who want to create more accessible experiences.

Compared with ARKit, ARCore offers developers more freedom to create AR experiences in whatever program they are most comfortable with: it natively supports not only Android, but also Unity, Unreal, and iOS development. Google’s offering is friendlier to new developers, as well as to anyone interested in learning about or creating AR experiences for themselves. With no developer fee at all, the magic of AR creation is available to everyone.

If you are interested in learning more about either of these platforms, I highly suggest taking a look at their documentation, linked below:

Stay tuned for Part 3: Tools of the Trade - Web Based AR!