A simple intro to ARKit


With the evolution of the smartphone, the iPhone in particular, have come major improvements in cameras, gaming capabilities, and processing power. It seems fitting that these advances have merged into an industry-shaping technology: augmented reality.

The application possibilities are vast. With AR being used in gaming, as well as in apps built for industries such as real estate, it presents an opportunity for both developers and users. Users can see and interact with objects in their environment that are not really there, and developers can get in on a new and fast-growing technology, taking on exciting work while pushing what is possible for users.

Apple’s fantastic ARKit framework puts augmented reality development well within reach — even for developers with modest experience. We’ll be exploring the easy, and pretty neat, process for getting started. Feel free to download the completed project at the end. It can serve as a nice little springboard for something extremely exciting!

A little about ARKit

The point of ARKit is to let iOS developers easily place digitally produced objects in the real world, ready for interaction. It does this with Visual Inertial Odometry, or VIO. With VIO, Core Motion data and the camera work together so the device understands its movement and position in an environment. ARKit finds planes, such as floors and tables, and feature points on which objects can be placed. It can also estimate the scene’s lighting, so virtual objects can be shaded to blend convincingly with their surroundings.

Quite a bit of processing power goes into ARKit, so it is limited to devices with an A9 or newer processor (A9, A10, and A11 at the time of writing). Also, a physical device will be needed during development, as the camera is a critical component: it’s our window to the real world.
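If you’d like to guard against unsupported hardware at runtime, ARKit offers a support check. Here’s a minimal sketch, with the fallback message being our own:

if ARWorldTrackingConfiguration.isSupported {
    // Safe to start an AR session on this device
} else {
    // Devices without an A9-or-newer chip end up here
    print("ARKit world tracking is not supported on this device.")
}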

Create a New Project in Xcode

To get started, let’s create a new project in Xcode and choose a Single View Application. To keep our code descriptive, let’s delete the ViewController that Xcode provides and create a new UIViewController subclass named ARViewController.

As in the image above, open up the storyboard, select the View Controller Scene, click the Identity Inspector, and change the class to ARViewController.

The next step is to drag an ARKit SceneKit View onto ARViewController. Drag the corners of the new view so that it covers the entire ARViewController view and set the constraints as in the image below.

Once the ARKit SceneKit View is pinned into place, open the Assistant editor while still in the storyboard. Create an outlet from the ARKit SceneKit View to ARViewController, so we can make the fun stuff happen.
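At this point, ARViewController should look something like the sketch below. It assumes the outlet was named sceneKitView, which matches the code later in this article:

import UIKit
import ARKit

class ARViewController: UIViewController {

    // Outlet connected to the ARKit SceneKit View in the storyboard
    @IBOutlet weak var sceneKitView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}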

One bit of housekeeping prior to entering ARViewController: we need permission to use the device’s camera. Open the Info.plist file, click the plus button to add a new key, and start typing Privacy. From the list, select Privacy - Camera Usage Description. For the value, type a convincing phrase that will encourage users to let the app use their camera. The permission alert will now be shown when the app is opened for the first time.
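Behind the scenes, that entry is stored under the key NSCameraUsageDescription. Opening Info.plist as source code, it should look roughly like this (the description string is only an example):

<key>NSCameraUsageDescription</key>
<string>This app uses the camera to place virtual objects in your surroundings.</string>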

In the ViewController

Now, we’re off to ARViewController. First, we will import ARKit. Then, we will need to add viewWillAppear() right under viewDidLoad().

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Track the device's position and orientation in the real world
    let config = ARWorldTrackingConfiguration()
    sceneKitView.session.run(config)
}

In viewWillAppear(), we run the ARSCNView’s session with an ARWorldTrackingConfiguration, which tracks the device’s position and orientation relative to the real world so that virtual content stays put as the user moves around it.
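One thing worth knowing: plane detection is off by default. If you want the session to report horizontal planes such as floors and tables, you can enable it on the configuration before running the session:

let config = ARWorldTrackingConfiguration()
// Ask ARKit to detect horizontal planes (floors, tables)
config.planeDetection = .horizontal
sceneKitView.session.run(config)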

To ensure that the session can take a rest when it isn’t needed, create viewWillDisappear() and pause the session.

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // Pause the session while the view is off screen
    sceneKitView.session.pause()
}

Let’s add some Anchors

The sample project comes with a funny little red anchor asset, and we’re going to put it into the real world. In the image below, we’ll create the dropAnchor() function. This is where we create the object and place it in our view. In our case, we’ll add a bunch of anchors to help illustrate the spatial and positional power of ARKit.

As per the comments in the image below, there are four parts to the process. First, there is the image asset. We show the image on an SCNPlane, which is a flat, one-sided plane. The SCNPlane occupies an SCNNode, which determines placement in the world. Finally, we add the SCNNode to an SCNScene and assign that SCNScene to our ARSCNView’s scene. To create the extra anchors, we simply loop, creating a series of nodes with different x values. Positioning is done with an SCNVector3, a “representation of a three-component vector,” as per Apple. X, Y, and Z are used, with negative Z extending away from the camera, so Z controls depth, or distance from the observer.

func dropAnchor() {
    // Measurements in SceneKit are in meters
    let image = UIImage(named: "anchor")
    // One-sided plane that shows the image
    let plane = SCNPlane(width: 0.4, height: 0.4)
    plane.firstMaterial!.diffuse.contents = image
    let scene = SCNScene()
    for i in -20..<20 {
        // Node that positions the plane in the world
        let node = SCNNode()
        node.geometry = plane
        // Spaced 1 meter apart on the x axis, 1 meter in front of the user
        node.position = SCNVector3(Double(i), 0, -1.0)
        scene.rootNode.addChildNode(node)
    }
    // Set the scene on sceneKitView
    sceneKitView.scene = scene
}

It is important to note that the numeric size and location values in dropAnchor() are in meters. Changing them around has quite an effect and can be pretty entertaining.

The last step is to call dropAnchor() in viewDidLoad(). Give it a spin on a physical device, and the objects will appear in the real-world view. Maybe have a walkabout and see some anchors!
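For completeness, that call looks something like this:

override func viewDidLoad() {
    super.viewDidLoad()
    // Build the anchor scene as soon as the view loads
    dropAnchor()
}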

In Conclusion

ARKit has a low barrier to entry, and we’ve just crossed it. Experimentation and exploration are the next steps, and they will yield some exciting results. Adding gesture recognizers, for example, is a path toward interacting with the objects in the view and with the view itself; a small sketch follows below. One consistent result, though, comes from putting the build in someone’s hands and watching them grin. AR delights users, so it’s a good opportunity to take advantage and have some fun.
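As a starting point for that experimentation, here is a minimal sketch of tap handling inside ARViewController. The handleTap method is our own hypothetical handler; the hit test uses SceneKit’s standard SCNView API to find nodes under the touch:

// In viewDidLoad(), register a tap recognizer on the AR view
let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
sceneKitView.addGestureRecognizer(tap)

// Hypothetical handler: remove an anchor when it is tapped
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: sceneKitView)
    // SceneKit hit testing returns the nodes under the touch point
    if let hit = sceneKitView.hitTest(location, options: nil).first {
        hit.node.removeFromParentNode()
    }
}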

Here’s the final project.

