ARKit on iOS is a multifaceted package that simplifies the task of building an AR experience. But with the latest updates, it has broken out of its mould to take its place at the top of AR development.
The chief differentiator is its World Tracking — which positions virtual 3D objects in the real world. It tracks and measures distances with greater precision than ever before — especially with its enhanced depth perception and people occlusion (people in the frame can realistically pass in front of virtual objects and hide them from view).
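Opting into those frame semantics is only a couple of lines. Here is a minimal sketch (the choice of person segmentation with depth is our assumption, but the availability check is the standard ARKit pattern):

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()

// People occlusion needs an A12 chip or newer, so check support before opting in.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
```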
A big part of quality AR is how convincingly virtual objects blend into the real world. RealityKit is an Apple framework that makes this possible.
It’s built on top of ARKit and is preferred by our developers for 3 chief reasons:
But (this is the last but, we promise) to integrate RealityKit into an app, we need to understand how ARView and ARSession work.
ARView is the view in RealityKit that lets a user interact with an AR experience. It constructs a Scene from the positions of virtual objects and overlays them on the real world in the view.
ARSession is the brain of your AR experience. The session contains all the configuration settings one needs to position objects in a 3D environment.
It works with ARView to keep track of all the virtual objects in space along with the captured feature points in the real world — becoming the bridge between the real world and virtual space.
ARSessions are inside ARViews by default. We can configure how they work by creating a tracking configuration object along with a few other options.
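As a small sketch of that bridge (the observer type is purely illustrative), we can grab the session that already lives inside an ARView and listen to its per-frame updates:

```swift
import ARKit
import RealityKit

// Hypothetical observer type. ARSessionDelegate is the standard way to be
// notified as ARKit refines its understanding of the world.
final class SessionObserver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // The camera transform is the device's position and orientation
        // in world space, updated every frame.
        print("Camera transform:", frame.camera.transform)
    }
}

let arView = ARView(frame: .zero)
let observer = SessionObserver()

// The session already exists inside the view; we only attach a delegate to it.
arView.session.delegate = observer
```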
This invites the question:
How does ARKit even know where to position a virtual object?
ARKit supports different types of tracking configurations based on what you desire, and since we wanted to explore real world measurements, we went with ARWorldTrackingConfiguration — which is a configuration that triangulates the iOS device’s position and orientation. This enables it to augment the user’s environment with virtual objects.
Now, we can create an instance of ARWorldTrackingConfiguration to make sure it’s configured with the right options and then pass it on to the session.
Here’s a code sample:
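A minimal sketch of that setup (the view controller name and the horizontal plane-detection option are our assumptions; the people-occlusion flag shown earlier could be added to the same configuration):

```swift
import ARKit
import RealityKit
import UIKit

final class MeasureViewController: UIViewController {
    // The view that renders the camera feed plus our virtual content.
    let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        // World tracking: ARKit tracks the device's position and orientation
        // and, here, looks for horizontal surfaces to anchor content to.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]

        // Hand the configuration to the session living inside the ARView.
        arView.session.run(configuration)
    }
}
```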
To start off, we created an instance of ARView to position on the screen. Next, we made an instance of ARWorldTrackingConfiguration — which is the important part, since it holds the parameters that tell ARKit how to track the world before the session runs.
With the AR experience initialized, we can add virtual objects into it and explore anchors and entities.
Every scene in an ARView relates to anchors and entities in a simple hierarchy: the scene holds anchors, and each anchor holds the entities attached to it.
RealityKit provides a protocol called HasAnchoring. It describes points in the real world that act as hooks, or anchoring positions, for attaching virtual objects to real-world surfaces. The most important class that adopts it is AnchorEntity.
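For instance, a single line gives you an anchor that waits for the first horizontal surface ARKit finds. This is a sketch, and the horizontal-plane target is just one of several anchoring options:

```swift
import RealityKit

let arView = ARView(frame: .zero)

// An anchor that attaches itself to the first horizontal plane ARKit detects.
let anchor = AnchorEntity(plane: .horizontal)

// Anything parented to this anchor will appear on that surface.
arView.scene.addAnchor(anchor)
```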
Think of an Entity as the atom of an augmented space. It lets you add characteristics like dimensions, surfaces, and colors that can interact in an AR scene. An Entity is rarely used by itself, so for all practical purposes developers go with its subclasses, AnchorEntity or ModelEntity.
Model Entities are virtual objects (with simulated physics) placed in the AR space. Just like their real-world counterparts, they come with attributes like a mesh that defines their shape, materials that define their appearance, and collision and physics properties.
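A minimal sketch of one, assuming a 10 cm box with a plain colored material (the size and color are illustrative):

```swift
import RealityKit
import UIKit

// A 10 cm cube. Dimensions in RealityKit are in meters.
let mesh = MeshResource.generateBox(size: 0.1)
let material = SimpleMaterial(color: .systemBlue, isMetallic: false)
let box = ModelEntity(mesh: mesh, materials: [material])

// Collision shapes let the entity take part in physics and respond to gestures.
box.generateCollisionShapes(recursive: true)
```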
We can also add interactivity to our model entities by adding gestures on ARView. Each gesture gets linked to the model entity it refers to — something that RealityKit calculates in real time. To make all this easy, ARView makes three gestures available out of the box: translation, rotation, and scale.
To enable them, we just have to call a function named “installGestures” on the ARView.
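A sketch of wiring that up, reusing a box like the one above (the gesture set here is an assumption; .all would enable everything at once):

```swift
import RealityKit
import UIKit

let arView = ARView(frame: .zero)

// The entity must have collision shapes for the gestures to hit-test against.
let box = ModelEntity(mesh: .generateBox(size: 0.1),
                      materials: [SimpleMaterial(color: .systemBlue, isMetallic: false)])
box.generateCollisionShapes(recursive: true)

// Let the user drag, rotate, and pinch-to-scale the entity directly in the AR view.
arView.installGestures([.translation, .rotation, .scale], for: box)
```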
Now that we’ve explored Entities, let’s jump into some code:
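A sketch of what those functions might look like. The helper names, the intermediate radii, the colors, and the tiny vertical offsets are our own assumptions; only the roughly 6-foot (1.83 m) outer radius follows from the goal of the app:

```swift
import RealityKit
import UIKit

// Creates a simple colored material.
func makeMaterial(color: UIColor) -> SimpleMaterial {
    SimpleMaterial(color: color, isMetallic: false)
}

// Builds one flat circular disc. A plane whose corner radius equals half its
// width renders as a circle; all dimensions are in meters.
func makeDisc(radius: Float, color: UIColor) -> ModelEntity {
    let mesh = MeshResource.generatePlane(width: radius * 2,
                                          depth: radius * 2,
                                          cornerRadius: radius)
    return ModelEntity(mesh: mesh, materials: [makeMaterial(color: color)])
}

// Four concentric discs, the outermost at roughly 6 feet (1.83 m).
// The intermediate radii and the colors are illustrative.
let rings: [(radius: Float, color: UIColor)] = [
    (1.83, .systemRed),
    (1.40, .systemOrange),
    (0.95, .systemYellow),
    (0.50, .systemGreen)
]

// A parent model entity that groups the discs and their simulated physics.
let entity = ModelEntity()
for (index, ring) in rings.enumerated() {
    let disc = makeDisc(radius: ring.radius, color: ring.color)
    // Raise each inner disc by a fraction of a millimeter so it draws on top
    // of the larger disc below it instead of z-fighting with it.
    disc.position.y = Float(index) * 0.0005
    entity.addChild(disc)
}
entity.generateCollisionShapes(recursive: true)

// Anchor the group to the first horizontal surface ARKit finds and show it.
let arView = ARView(frame: .zero)
let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(entity)
arView.scene.addAnchor(anchor)
```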
So these functions generate Model Entities, which here are concentric circular discs with real-world dimensions in meters.
These get overlaid on top of each other. A helper function creates a simple material with a color, which gets applied to a Model Entity constructed as a circular plane. Four of these circular planar model entities then get added to a single parent model entity — called the entity.
The simulated physics for each disc gets grouped in the parent, which we then use to construct an Anchor Entity. This anchor hosts the parent Model Entity, and with it our circular discs, in the augmented reality scene.
And the final result is this:
Voila! You now have your own 6 feet visualizer at your beck and call!
(You’re welcome)