As we saw last week, the V2 skeletal beta Bone API makes it possible to build physical interactions around a skeleton-rigged mesh – something that we couldn’t accomplish consistently with V1. When combined with auditory and other forms of sensory feedback, onscreen hands can create a real sense of physical space, as well as complete the illusion created by VR interfaces like the Oculus Rift.

But rigged hands also involve several intriguing challenges, which we encountered in developing our Unity example hand. In this post, we’ll take a look at some of the technical and design challenges that went into building our rigged hand, and how we’re still thinking about them.

Technical challenges

Positions vs. rotations

How should we drive the movement of the rig using the data we receive from the SDK? Your first instinct might be to set the position of each joint in the rig to the joint positions we get from the SDK. But we quickly discovered that this was unworkable, since the knuckles in your hands actually move by rotating, not by translating. Using positional data to drive joint movement makes the deformations look completely wrong.

This means that the right way to get the rig joints into the correct position is to use rotations. That’s why we created the basis() function in the Bone API:

  // Returns the rotation quaternion of the given bone in relation to the controller.
  public Quaternion GetBoneRotation(int bone_type) {
    Bone.BoneType type = (Bone.BoneType)bone_type;
    Quaternion local_rotation = finger_.Bone(type).Basis.Rotation();
    return controller_.transform.rotation * local_rotation;
  }

  // ...
  // Applies each bone's world-space rotation directly to the rig's joint transforms.
  public override void UpdateFinger() {
    for (int i = 0; i < bones.Length; ++i) {
      if (bones[i] != null)
        bones[i].rotation = GetBoneRotation(i);
    }
  }
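The key step is the quaternion multiplication, which composes the bone’s local rotation with the controller’s world rotation. Here’s a minimal, library-free sketch of that composition (in Python for brevity; the function names are illustrative, not part of the Leap API):

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z): rotation b followed by a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_about_y(degrees):
    """Quaternion for a rotation of `degrees` about the Y axis."""
    h = math.radians(degrees) / 2.0
    return (math.cos(h), 0.0, math.sin(h), 0.0)

# A 30-degree controller yaw composed with a 60-degree local bone yaw
# yields a 90-degree world yaw -- the same composition as
# controller_.transform.rotation * local_rotation in the Unity code above.
world = quat_mul(quat_about_y(30), quat_about_y(60))
```

Unity’s `*` operator on `Quaternion` performs exactly this composition, which is why a single multiplication is enough to take each bone from controller space into world space.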

Coordinate systems

Depending on how your rig is set up, the orientation and coordinate system of the rig export can also become an issue. Ideally, the rig should have the same coordinate system as the engine/framework you’re working in. In the case of Unity, positive-Z should be forward (where the fingers are pointing), positive-Y should be up, and positive-X should be to the right.

Basically, if your rig’s coordinate system doesn’t match the coordinate system you’re working in, be sure to apply a transformation to get the rig into the environment’s coordinate system before applying any other transformations.
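As a sketch of what such a transformation looks like, assume the rig was authored in a right-handed, Y-up tool with forward along −Z (a common OpenGL-style export). Getting points into Unity’s left-handed, Y-up, +Z-forward space then amounts to flipping one axis; the exact flip depends on your authoring tool and exporter, so treat this as illustrative only:

```python
def to_unity(p):
    """Convert a point from a right-handed, Y-up, -Z-forward rig
    into Unity's left-handed, Y-up, +Z-forward space."""
    x, y, z = p
    return (x, y, -z)  # flipping Z changes handedness and makes forward +Z

# A point one unit "forward" in the authoring tool (-Z there)
# should land one unit forward (+Z) in Unity:
forward = to_unity((0.0, 0.0, -1.0))
```

Note that flipping an axis on positions alone isn’t enough for a full rig – rotations need the matching adjustment as well – but the principle is the same: do the conversion once, up front, before any other transformations.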

Size of the hand

Since we won’t be doing any translation of the joints – the only translation should be of the hand as a whole – the scale of the user’s hands will not be automatically captured. Scale is a complex issue to get exactly right, since everyone’s hands have different dimensions, thicknesses, and finger lengths.

What’s the problem? Without proper scaling, you run the risk of making your rigged hands too big (so they overlap when the user’s real hands don’t) or too small (so the rigged hands can’t touch each other when the real ones do). Right now, the simplest approach is to approximate the scale using PalmWidth and apply a uniform scale to the mesh based on that scalar.
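The uniform-scale approximation boils down to one division. As a rough sketch (Python for brevity; `MODEL_PALM_WIDTH_MM` is an assumed constant for this example, not a value from the SDK):

```python
# Palm width the hand mesh was modeled with (assumed constant for this sketch).
MODEL_PALM_WIDTH_MM = 85.0

def uniform_hand_scale(tracked_palm_width_mm):
    """One scalar applied uniformly to the whole hand mesh:
    the tracked palm width over the mesh's authored palm width."""
    return tracked_palm_width_mm / MODEL_PALM_WIDTH_MM

# A user whose palm is 10% wider than the model's gets a 1.1x hand:
scale = uniform_hand_scale(93.5)
```

In Unity, that scalar would be applied to the hand’s root transform (e.g., via `localScale`) so every bone and the mesh scale together.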

Rig building, over-rotation, and fine tuning

Rig building for the hand has some very specific requirements, because it needs to reflect real-world structures and behaviors:

  • When joints have 0 rotation, they should be pointed forward along the z-axis.
  • Non-first knuckle joints should have no translation except in the z-direction.
  • To get the hand into a good bind pose, rotate the joints into place from the default pose (where all fingers point along the z-axis) to the bind pose. Be sure not to translate them into the bind pose!

If you have any translation in bones that aren’t the first joint, you’ll run into over-rotation issues and the hand will look wrong.
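To see why, here’s a minimal forward-kinematics sketch (Python, with yaw-only rotations about Y to keep the math readable). Each child joint’s position is its parent’s position plus the bone’s fixed length pushed through the accumulated rotation – so any extra translation baked into a child joint gets rotated too, which is what produces the over-rotation artifacts:

```python
import math

def chain_positions(bone_lengths, yaw_degrees):
    """Forward kinematics for a finger laid out along +Z at rest.
    Returns (x, z) positions of each joint, root first."""
    x, z = 0.0, 0.0
    heading = 0.0  # accumulated yaw, in radians
    positions = [(x, z)]
    for length, yaw in zip(bone_lengths, yaw_degrees):
        heading += math.radians(yaw)
        # The only offset per bone is its length along the rotated z-direction.
        x += length * math.sin(heading)
        z += length * math.cos(heading)
        positions.append((x, z))
    return positions

# A straight finger (zero rotation everywhere) keeps every joint on the Z axis,
# matching the rig requirement that child joints carry only a z-translation.
straight = chain_positions([1.0, 1.0], [0.0, 0.0])
```

Because the SDK drives the rig purely with rotations, the rig’s bind-time offsets are the only translations the hand ever gets – which is exactly why they must lie along each bone’s z-axis.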

Design challenges

Uncanny valley: “that’s not my hand”

How many times have you heard the expression “I know it like the back of my hand”? Unfortunately, we can’t realistically represent every detail of every user’s hand. This makes realistic-looking hands fall into the territory of the uncanny valley – the creepy-crawly space where things look almost but not quite human.

I’ve played with a variety of different hand styles, and the cartoonish, outline-shaded white hand is the style we like most. Below is an example of the current Unity rigged hand in action:


Naturally, your preferred design will depend on what sort of app you want to create. We encourage everyone to try out more styles and let us know what works for you!

Camera angles, perspective, and styles

What’s the most natural rendering of the hands on a flat screen? The question is harder than it first appears, because it spawns a series of further questions. With virtual reality head-mounted displays, it’s not an issue – head tracking maps your perception of the hands to how your head is moving. But in traditional setups with 2D screens, finding the ideal angle can be tricky. Here are some of the questions that pop up:

  • What’s the angle of the monitor on the user’s table?
  • What’s the angle of the user’s head as they look down (or up) at the monitor?
  • What should be the angle of the camera in the virtual world?
  • What offsets should be used to reflect the fact that your hands are rarely right in front of your face, but typically lower down?

Future improvements

Better scaling

Earlier, we talked about scaling the hand by getting the PalmWidth. While this is a decent quick fix, individual human hands are definitely more complex than a single uniform scale. Finger lengths, palm dimensions, finger thicknesses, etc. all vary from person to person. Reliably capturing these unique traits and representing them in a deformable mesh is a big challenge, but ultimately worth it.

Different visual styles

As we build on the rigged hand concept, a variety of realistic, cartoonish, and abstract hand representations will need to be explored. Since the Leap Motion Controller can be used for everything from arcade games to medical interfaces, the best choice will likely depend on the application and its context.

This is still a work in progress, and we’d love to hear about how you would use this feature in your own projects. In the coming weeks, we’ll be digging into other new features with skeletal beta.

Pohung lives for 3D interaction and game design, and currently builds augmented reality tabletop games at Tangible Play.