Mathematics lies beneath the surface of just about everything – and VR is definitely no exception. If you’re not using our Unity VR assets, we know you’d much rather get started quickly than spend hours tweaking variables and teasing out rotation matrices. That’s why I put together this quick guide to VR essentials. In this post, we’ll cover correcting for distortion, orienting objects and cameras within your scene, and using the Image API for raw image passthrough.

Oculus distortion

Here are three approaches to compensating for the lens distortion on the Oculus Rift:

C++. Since v0.3 of the Oculus SDK, there have been two ways to correct the distortion in the Oculus Rift lens – SDK rendering and client rendering. Our preferred method is SDK rendering, as it allows you to easily account for time warp and chromatic aberration effects (see the sketch after this list).

Unity. Our Unity VR assets handle all the heavy mathematical lifting for just about everything you’ll find in this post. But if you want to take the DIY approach, read our getting started guide for Unity, which includes all the key figures you’ll need.

JavaScript. OculusRiftEffect is an excellent rendering effect that you can use with any Three.js app.
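If you're going the C++ route with SDK rendering, the setup looks roughly like the sketch below. This targets the 0.4-era C API – the exact names (ovrHmd_ConfigureRendering, the ovrDistortionCap_* flags) and the render-texture plumbing around it vary between SDK versions, so treat it as a starting point rather than a drop-in recipe.

```cpp
// Sketch: enabling SDK-rendered distortion with time warp and chromatic
// aberration correction (Oculus SDK 0.4-era C API; verify the names against
// the SDK version you are actually building with).
#include <OVR_CAPI.h>

bool configureSdkDistortion(ovrHmd hmd, const ovrRenderAPIConfig* apiConfig,
                            ovrEyeRenderDesc eyeRenderDesc[2])
{
    unsigned int distortionCaps = ovrDistortionCap_TimeWarp   // re-project to the latest head pose
                                | ovrDistortionCap_Chromatic  // per-channel distortion correction
                                | ovrDistortionCap_Vignette;

    // hmd->DefaultEyeFov holds the recommended field of view for each eye.
    return ovrHmd_ConfigureRendering(hmd, apiConfig, distortionCaps,
                                     hmd->DefaultEyeFov, eyeRenderDesc) != 0;
}
```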

Object placement

As you probably know, the best way to position objects is by their world-space location, because it exists independently of your vantage point and the direction you're looking. This allows an object's physical interactions, including its velocity and acceleration, to be computed much more easily.

However, hands tracked by a head-mounted Leap Motion Controller require an additional step – because the sensor moves with your head, the positions it reports depend on your vantage point. If you're using C++ or JavaScript, this means you must transform the hand and finger positions tracked by the controller into world space yourself. You can skip these steps if you're using Unity.

The solution is broken down step by step below. (Right now, the Leap Motion API doesn't do this automatically – it assumes the controller is resting face-up on a desk, so when it's mounted on your head, the direction you're facing gets treated as up.)

Query the Oculus SDK for the current HMD transform. We want the average of the left and right camera positions, which can be obtained from the pose returned by GetTrackingState().HeadPose.ThePose.

\[
t_{\mathrm{HMD}} = \tfrac{1}{2}\bigl(t_{\mathrm{left}} + t_{\mathrm{right}}\bigr)
\]

\[
T_{\mathrm{HMD}} = \begin{bmatrix} R_{\mathrm{HMD}} & t_{\mathrm{HMD}} \\ 0 & 1 \end{bmatrix}
\]

Encode the location (rotation and translation) on the Rift where the Leap Motion Controller is mounted.

\[
T_{\mathrm{mount}} = \begin{bmatrix} R_{\mathrm{mount}} & t_{\mathrm{mount}} \\ 0 & 1 \end{bmatrix},
\qquad t_{\mathrm{mount}} = (t_x,\; t_y,\; t_z)
\]

Assuming that the Leap Motion Controller is mounted on the front of the Oculus using our VR Mount, t_x and t_y are both roughly 0, while t_z is about -0.08 – meaning that the controller sits about 0.08 meters in front of where your real eyes are.

Find the final transformation matrix that brings positions from Leap space into world space, and apply it to the data returned by the Leap SDK frame. The factor of 0.001 converts from millimeters (the Leap SDK's default unit) to meters.

\[
T_{\mathrm{Leap \to world}} = T_{\mathrm{HMD}} \, T_{\mathrm{mount}}
\]

\[
\begin{bmatrix} p_{\mathrm{world}} \\ 1 \end{bmatrix}
= T_{\mathrm{Leap \to world}}
\begin{bmatrix} 0.001\, p_{\mathrm{Leap}} \\ 1 \end{bmatrix}
\]
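To make the steps above concrete, here's a minimal C++ sketch of the same composition using GLM. The head pose is assumed to have already been read from GetTrackingState().HeadPose.ThePose, and mountRotation is a placeholder for whatever axis remapping your mounting orientation requires – it marks where that rotation goes, not its exact value.

```cpp
// Sketch: composing the Leap-to-world transform described above (GLM for the math).
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 leapToWorld(const glm::vec3& leapPointMm,     // a point from the Leap frame, in millimeters
                      const glm::quat& hmdOrientation,  // from GetTrackingState().HeadPose.ThePose
                      const glm::vec3& hmdPosition,     // average of the left/right camera positions
                      const glm::mat3& mountRotation)   // axis remapping for your mounting orientation
{
    // T_HMD: head orientation and position as a single 4x4 transform.
    glm::mat4 hmdTransform = glm::translate(glm::mat4(1.0f), hmdPosition)
                           * glm::mat4_cast(hmdOrientation);

    // T_mount: the controller sits ~0.08 m in front of the eyes (t_x = t_y = 0, t_z = -0.08).
    glm::mat4 mountTransform = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -0.08f))
                             * glm::mat4(mountRotation);

    // Scale from millimeters to meters, then apply T_HMD * T_mount.
    glm::vec4 worldPoint = hmdTransform * mountTransform
                         * glm::vec4(0.001f * leapPointMm, 1.0f);
    return glm::vec3(worldPoint);
}
```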

Camera placement

The virtual cameras, like the Leap Motion Controller, also follow the orientation and positional tracking of the Oculus SDK. Unlike the controller, however, there is no single correct place to position the left and right view cameras – their placement depends on what depth and content you're optimizing for.

Rendering any object requires a projection matrix and a modelview matrix. For the projection matrix, always use the one provided by the Oculus SDK, as this will ensure the correct field of view. The modelview matrix, however, depends on the situation:

  • If you just want correct placement of 3D objects, use the one provided by the Oculus SDK. There will be separate views for the left and right eyes, and you may notice that their translations are about 0.064 meters apart (the average human inter-eye distance).
  • If you want the 3D objects (such as the virtual tracked hand) to line up with the raw images of the Image API (see section below), then you need to manually tune the locations of the cameras – because there is actually no video stream from your eyes!
    • In C++ and JavaScript, this involves adding a translation in x and z to the modelview (see the sketch after this list). When this translation equals the offset from your eye to the corresponding Leap Motion Controller camera, the video passthrough aligns with virtual objects – you are actually seeing from its viewpoint. Objects will also appear about 60% larger in this view, because the Leap Motion Controller's cameras are closer together than your eyes (40 mm vs. roughly 64 mm).
    • In Unity, you have to override the OVRCameraController’s position and add a shift to each of the left and right eyes. This is covered in more detail in our Unity + Oculus guide and updated AR alignment best practices.
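Here's a rough idea of that per-eye shift in C++, using GLM. The 12 mm inward figure is just half the difference between the two baselines quoted above (64 mm vs. 40 mm), and both the values and the signs depend on your coordinate conventions – treat them as assumptions to tune against your own rig.

```cpp
// Sketch: shifting each eye's modelview so the virtual camera coincides with the
// corresponding Leap Motion camera (values and signs are assumptions to tune).
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 alignEyeWithLeapCamera(const glm::mat4& eyeModelview, bool isLeftEye)
{
    // Each Leap camera sits ~12 mm closer to the center line than the matching eye
    // ((64 mm - 40 mm) / 2) and ~80 mm in front of it (-z is forward in eye space).
    const float inwardShift  = 0.012f;
    const float forwardShift = 0.08f;
    glm::vec3 cameraOffset(isLeftEye ? inwardShift : -inwardShift, 0.0f, -forwardShift);

    // Moving the camera by cameraOffset is equivalent to translating the scene by
    // -cameraOffset in eye space, hence the pre-multiplied translation.
    return glm::translate(glm::mat4(1.0f), -cameraOffset) * eyeModelview;
}
```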

The following diagram illustrates the difference between the above cases. As you can see, no single camera position will simultaneously satisfy the 3D constraints and align with the image. Previously, we recommended a compromise in which the camera was positioned somewhere along the line segment connecting the two extremes, so that both cases were somewhat satisfied. However, for augmented reality applications, we now recommend direct alignment between the Leap Motion and virtual cameras. Learn more in this post.

Optimal camera placement with Leap and Oculus

Using the Image API in VR

In this section, we’ll take a look at using the Image API to get the raw video passthrough.

It’s important to remember that the video data doesn’t “occupy” the 3D scene, but represents a stream from outside it. Since the images represent the entire view from a certain vantage point, rather than a particular object, they should not undergo the world transformations that other 3D objects do. Instead, the image must stay locked to your view no matter how your head is tilted, even as its contents change – following your head movements to mirror what you’d see in real life.

In C++ and JavaScript, this means skipping the modelview transform (by setting the modelview to the identity matrix) and using the projection transform only – in this case, the perspective projection returned by the Oculus SDK. Under this projection matrix, it’s sufficient to define a rectangle with the coordinates (-4, -4, -1), (4, -4, -1), (-4, 4, -1), and (4, 4, -1), which can be in any units. Then, texture it with the fragment shader provided in the example here.
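In fixed-function OpenGL terms, that setup amounts to something like the sketch below (a modern pipeline would push the same four vertices through a VBO instead; oculusProjection is assumed to be the column-major per-eye projection matrix you already obtained from the SDK):

```cpp
// Sketch: drawing the passthrough quad with an identity modelview and the
// Oculus projection (fixed-function GL for brevity; oculusProjection is the
// per-eye projection matrix from the SDK, already converted to column-major).
#include <GL/gl.h>

void drawPassthroughQuad(const float* oculusProjection /* 16 floats */)
{
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(oculusProjection);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();              // no world transform: the quad stays locked to the view

    glBegin(GL_TRIANGLE_STRIP);    // the rectangle from the text, at z = -1
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-4.0f, -4.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 4.0f, -4.0f, -1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-4.0f,  4.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 4.0f,  4.0f, -1.0f);
    glEnd();
}
```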

In Unity, this means making the image a child of the Controller object, so that it stays locked in relation to the virtual Leap Motion Controller within the scene. Our image passthrough example does this for you.

Looking for an experiment? Dig into VR Intro

If you’re looking for a project that combines Leap Motion and Oculus in some interesting ways, be sure to check out VR Intro on GitHub. The codebase includes a lot of the same API calls and functions that you might use in your own project – plus you can take what’s there and add another layer onto the experience.

Since you read this far, I’ll even toss in a couple of Easter eggs that many people miss with VR Intro: press Up/Down to make the digital effects more or less transparent, and Insert/Delete to change the passthrough brightness.

If you have any questions about this guide, please let me know in the comments!