To scale or not to scale? When it comes to augmented reality, the right camera alignment and scale are absolutely essential to bridging the gap between real and virtual worlds. And as developers are already experimenting with image passthrough and hybrid reality resources like Image Hands, this is more important than ever.
Based on our ongoing research and testing, we’ve updated our VR Best Practices to recommend one-to-one alignment between the real-world and virtual cameras, and created a demo to let you experience the difference. Here’s why.
The Challenge: IPD vs. ICD
When developing for AR, there are three cameras that you have to take into account: physical, virtual, and biological.
- The physical cameras on the controller have a fixed separation of 40 mm. This is known as the inter-camera distance (ICD).
- The virtual cameras exist in VR, and are looking at virtual objects as well as the images coming in from the controller. Virtual camera separation (VCS) can be changed at will.
- The biological cameras are your eyes, and their separation varies from person to person. Human interpupillary distance (IPD) typically ranges from 54 to 68 mm, with an average of about 64 mm. Your brain builds its sense of depth and scale from your IPD.
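To make the mismatch concrete, here is a minimal sketch using the numbers above (40 mm ICD, 64 mm average IPD). The variable names are illustrative, not from any actual SDK:

```python
# Separations from the post: the passthrough cameras are fixed,
# while human IPD varies per person.
ICD_MM = 40.0          # inter-camera distance of the physical cameras
AVERAGE_IPD_MM = 64.0  # roughly the middle of the 54-68 mm human range

# Ratio between the viewer's eye separation and the camera separation.
# When this ratio is not 1.0, the passthrough world is viewed with a
# different baseline than it was captured with, so its perceived
# depth and scale no longer match reality.
mismatch_ratio = AVERAGE_IPD_MM / ICD_MM
print(mismatch_ratio)  # 1.6
```

That 1.6 ratio is exactly the scale factor the rescaling approaches below try to compensate for.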
As a result, the only way to provide images of the physical world that correspond perfectly with perceived scale is to capture those images with an ICD that matches your own IPD. While modules like Dragonfly will have a 64 mm ICD to match an average human IPD, what’s the best approach for today’s hardware?
To let you see how this works, we’ve created a simple demo that compares camera alignment vs. player/world rescaling. Launch the demo and hit “Enter” to move through the scenes and see how it affects the behavior of virtual objects and the image passthrough. (The first scene you’ll see is the correct one, but it’s more fun to skip it for now!)
**Can we reduce the 3D scale of the physical world by changing the 2D images?** No. The perceived scale of the external world is determined by the distance between the viewpoints, not by the image content. Worse, rescaling the 2D images actually has the effect of zooming in, which creates a mismatch between the viewing angles of the real and virtual scenes.
**How about reducing the scale of the player relative to the virtual world?** With this approach, we'd increase the scale of the virtual world by a factor of 1.6 (IPD/ICD) while keeping the VCS and the IPD the same. This can work, but only if the user's point of view in the virtual world never moves, or if all virtual objects are expected to move relative to the user. This is what we did in VR Intro to align the hand skeleton with the hand images. The approach fails, however, as soon as objects do not move with the user's head.
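A sketch of this world-rescaling approach, using the 1.6 (IPD/ICD) factor from the text. The function and coordinate convention are illustrative, not the actual Unity asset code:

```python
IPD_MM = 64.0
ICD_MM = 40.0
WORLD_SCALE = IPD_MM / ICD_MM  # 1.6

def rescale_world(positions, scale=WORLD_SCALE):
    """Scale virtual object positions (mm, relative to the head)
    so they line up with the passthrough images."""
    return [(x * scale, y * scale, z * scale) for (x, y, z) in positions]

# A virtual cube 250 mm in front of the head gets pushed out to ~400 mm.
print(rescale_world([(0.0, 0.0, 250.0)]))
```

Because every position is scaled *relative to the head*, the trick only holds while objects are head-anchored; a world-anchored object drifts out of alignment the moment the head translates.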
**What if we aligned the cameras?** So far, we've seen that scaling doesn't work. The final and simplest approach is to match the VCS to the ICD, which has the effect of changing the perceived scale of the virtual world. With this change, virtual objects remain aligned with physical ones as your head moves. In effect, the user experiences a subtle shift in their perceived IPD and adapts to the change.
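The recommended setup can be sketched in a few lines: place the two virtual cameras exactly as far apart as the physical ones. The function name is hypothetical:

```python
ICD_MM = 40.0  # fixed separation of the physical cameras

def aligned_eye_offsets(vcs_mm=ICD_MM):
    """Left/right virtual-camera x-offsets (mm) from the head center,
    with the virtual camera separation (VCS) matched to the ICD."""
    half = vcs_mm / 2.0
    return (-half, +half)

print(aligned_eye_offsets())  # (-20.0, 20.0)
```

Because the virtual and physical baselines now coincide, head motion moves both sets of cameras identically, and no per-object compensation is needed; the only cost is the subtle perceived-IPD shift described above.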
3D perspective is a tricky concept to communicate in 2D form, so be sure to try the demo and see for yourself.
Based on our user testing, we've found that aligning the cameras is the most seamless solution. Developers can build without worrying about relative scaling, and users' brains adjust more readily to 1:1 alignment than to scaling changes.
With this insight, which is also reflected in our demo scenes in the Unity Core Assets, you can now build hybrid reality experiences that bring virtual objects and the real world in sync. We’ve seen some really incredible momentum from the community in this space over the last year, and as our platform continues to evolve, we’ll keep you updated about the latest resources and best practices.
Photo credit: jepoycamboy, Flickr