UPDATE: This version of the interaction engine has been deprecated. A new, more powerful engine is in the works for Orion. Stay tuned!

Interacting with your computer is like reaching into another universe – one with an entirely different set of physical laws. Interface design is the art of creating digital rules that sync with our physical intuitions to bring both worlds closer together. We realize that most developers don’t want to spend days fine-tuning hand interactions, so we decided to design a framework that will accelerate development time and ensure more consistent interactions across apps.

A universal 3D interaction framework

That’s why we built our Interaction Engine, which has just been released as an early developer beta. It’s currently available in the Unity Store as part of our Unity assets, but in the future we plan to release it as a C++ library, so developers can build on a common 3D interaction framework. The engine is built from scratch, with no external technology or libraries, and our goal is to make it accessible for developers to add interface layers for their favorite physics engines. With the current version, you can access a simplified DLL interface that forwards calls from Unity3D to our library.
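
To give a rough idea of what that boundary can look like, here’s a simplified C++ sketch of a C-style DLL surface. The names and structs are illustrative placeholders, not the engine’s actual exports; a Unity layer would bind to something like this via P/Invoke.

// Illustrative sketch of a simplified C-style DLL surface; the names and
// structs here are placeholders, not the engine's real exports. A Unity C#
// layer would bind to functions like these via P/Invoke and forward per-frame
// hand data to the native library.
#include <cstring>

#if defined(_WIN32)
#define IE_EXPORT extern "C" __declspec(dllexport)
#else
#define IE_EXPORT extern "C"
#endif

// Flat structs that can cross the managed/native boundary.
struct HandStateC  { float palmPosition[3]; float grabStrength; };
struct ObjectPoseC { float position[3];     float rotation[4]; };

struct InteractionSceneC { ObjectPoseC heldObject; };   // toy internal state

IE_EXPORT InteractionSceneC* IE_CreateScene()            { return new InteractionSceneC(); }
IE_EXPORT void IE_DestroyScene(InteractionSceneC* scene) { delete scene; }

// Unity pushes tracked hand data in, steps the simulation, then reads the
// resulting object poses back out to drive its rendered GameObjects.
IE_EXPORT void IE_UpdateHand(InteractionSceneC* scene, int handId,
                             const HandStateC* hand) {
    (void)scene; (void)handId; (void)hand;   // real logic lives in the library
}

IE_EXPORT void IE_Step(InteractionSceneC* scene, float deltaSeconds) {
    (void)scene; (void)deltaSeconds;
}

IE_EXPORT void IE_GetObjectPose(const InteractionSceneC* scene, int objectId,
                                ObjectPoseC* outPose) {
    (void)objectId;
    std::memcpy(outPose, &scene->heldObject, sizeof(ObjectPoseC));
}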


The beta engine is packaged with several functional examples, including interactions like passing objects from hand-to-hand, or grabbing and scaling objects. (My personal favorite is scaling – a great example of what you can do when real-world rules don’t apply.) These allow for more natural interactions that you can quickly integrate into your project.

Building the engine has been a learning process, and while it’s still rough around the edges, we’re excited to see how far we can take it. Since the Leap Motion Controller has no tactile feedback, we had to create rules for a digital medium that would make sense. Object intersections, engine latency, tracking stability – these are all challenges that we’re constantly iterating on. Here’s where we are right now.

Holding an object

Intuitively holding and moving a 3D object is the core idea behind the interaction engine. Over the course of development, we’ve moved from simple collision detection, to orienting the object with your hand, to creating multiple anchors around the surface of the object. At each stage, we’ve improved the user experience while taking the burden of designing intuitive, fluid interactions off developers.

With a basic physics simulation, you can insert your hands as objects into a scene. While you can push objects around, you can’t really grab and handle them. This is because there’s no tactile or force feedback: instead of stopping at the surface, your hand ends up going through the object. Since virtual objects don’t behave like real objects, this led to a lot of unwanted side effects. We had to take a step back and think about how objects should really be held.
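
For illustration only, here’s roughly what that one-way coupling looks like in code: tracked fingertips behave as kinematic colliders that can shove a ball around, but nothing ever holds it. The structs and math below are a toy sketch, not our engine.

// Toy illustration, not the shipped engine: tracked fingertips act as
// kinematic spheres. When one overlaps a dynamic ball it pushes the ball out
// along the contact normal, but nothing can ever hold the ball in place.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  Scale(const Vec3& a, float s)     { return { a.x * s, a.y * s, a.z * s }; }
static float Length(const Vec3& a)             { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

struct Ball { Vec3 position; Vec3 velocity; float radius; };

// Fingertip positions come straight from tracking every frame; they have no
// mass and ignore forces, so the coupling is strictly one-way.
void PushBall(Ball& ball, const std::vector<Vec3>& fingertips,
              float fingertipRadius, float dt) {
    for (const Vec3& tip : fingertips) {
        Vec3  offset  = Sub(ball.position, tip);
        float dist    = Length(offset);
        float minDist = ball.radius + fingertipRadius;
        if (dist > 0.0f && dist < minDist) {
            Vec3 normal = Scale(offset, 1.0f / dist);
            // Separate the ball and give it velocity along the contact normal.
            ball.position = Add(ball.position, Scale(normal, minDist - dist));
            ball.velocity = Add(ball.velocity, Scale(normal, (minDist - dist) / dt));
        }
    }
    ball.position = Add(ball.position, Scale(ball.velocity, dt));
}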

Early on we realized that this would be a nontrivial problem. Luckily, we love those sorts of problems around here!

Stage 1: Collision detection. Our first approach was to use collision detection: you would precisely grab the object between your hand shapes, and we made sure the fingers never intersected it. Instead of passing through, the fingers wrapped around the object, regardless of how far you closed your fist.
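
Here’s a deliberately simplified sketch of that idea, with the object approximated as a sphere (which the real engine doesn’t assume): if a tracked fingertip ends up inside the object, it gets projected back onto the surface.

// Simplified sketch of keeping fingers out of a held object: if a tracked
// fingertip falls inside the object, project it back onto the surface so the
// rendered finger wraps around the object instead of sinking into it. The
// sphere approximation is purely for illustration.
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 ResolveFingertip(const Vec3& trackedTip, const Vec3& objectCenter,
                      float objectRadius) {
    Vec3 d = { trackedTip.x - objectCenter.x,
               trackedTip.y - objectCenter.y,
               trackedTip.z - objectCenter.z };
    float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (dist >= objectRadius || dist == 0.0f)
        return trackedTip;                   // already outside the object
    float scale = objectRadius / dist;       // push out to the surface
    return { objectCenter.x + d.x * scale,
             objectCenter.y + d.y * scale,
             objectCenter.z + d.z * scale };
}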

While this allowed for a better grip, we quickly found that it was challenging and required a lot of attention. You have to place your hand very precisely within the scene and make sure that it syncs with the object properly. On to the next iteration.

Stage 2: Object orientation. The next step was to sync the object’s orientation to your hand (e.g. palm or fingers). When you’re holding the object, you can wiggle it around or move it within the scene, and it follows your hand precisely. Typically, physics engines don’t have this explicit logic of an object being held and “glued” to your hand.
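
A minimal sketch of the “glued” behavior looks something like this: capture the object’s pose relative to the palm at the moment of the grab, then reapply that offset every frame. The math helpers are illustrative, not the engine’s internals.

// Minimal sketch of the "glued to the hand" behavior: on grab, remember the
// object's pose relative to the palm; while held, reapply that offset every
// frame so the object follows the hand exactly. Types are illustrative.
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };   // unit quaternion

static Quat Mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}
static Quat Conj(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }
static Vec3 Rotate(const Quat& q, const Vec3& v) {
    Quat r = Mul(Mul(q, Quat{ 0, v.x, v.y, v.z }), Conj(q));
    return { r.x, r.y, r.z };
}
static Vec3 Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

struct Pose       { Vec3 position; Quat rotation; };
struct HeldOffset { Vec3 localPosition; Quat localRotation; };

// Called once, at the moment the grab is detected.
HeldOffset CaptureOffset(const Pose& palm, const Pose& object) {
    Quat invPalm = Conj(palm.rotation);
    return { Rotate(invPalm, Sub(object.position, palm.position)),
             Mul(invPalm, object.rotation) };
}

// Called every frame while the object is held: the object's pose is simply
// the palm's pose composed with the stored offset.
Pose FollowHand(const Pose& palm, const HeldOffset& offset) {
    return { Add(palm.position, Rotate(palm.rotation, offset.localPosition)),
             Mul(palm.rotation, offset.localRotation) };
}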

This is already an improvement, but it still feels limited. It takes a lot of focus to grab the object in just the right way and avoid unintentionally pushing it out of your hand. That’s when we created magnetic anchors.

Stage 3: Magnetic anchors. This is the part where you get Jedi Force powers. With magnetic anchors, when you close your hand above an object, it levitates into your palm. That’s because you can define custom anchoring points that tell the engine where and how any given body is held in your palm. You can see this principle at work in our Unity ragdoll demo.

Combined with pinch and grab strength values, magnetic anchors are very powerful. As long as your hand is close enough, the object gets pulled into your hand. When you start pinching an object, the engine looks for the anchor closest to the grab point. We started with a single anchor point for each object, but soon moved towards multiple points.
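
Roughly sketched, anchor selection and the magnetic pull might look like this; the thresholds, pull speed, and data layout are made-up values for illustration, not the engine’s.

// Illustrative sketch of magnetic anchors: when the hand's grab strength
// crosses a threshold, pick the anchor closest to the grab point and pull the
// object so that anchor glides into the palm. Thresholds and speeds are
// made-up values.
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  Scale(const Vec3& a, float s)     { return { a.x * s, a.y * s, a.z * s }; }
static float Length(const Vec3& a)             { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

struct InteractiveObject {
    Vec3 position;
    std::vector<Vec3> anchorOffsets;   // anchor points, relative to the object
};

std::size_t ClosestAnchor(const InteractiveObject& obj, const Vec3& grabPoint) {
    std::size_t best = 0;
    float bestDistance = 1e30f;
    for (std::size_t i = 0; i < obj.anchorOffsets.size(); ++i) {
        float d = Length(Sub(Add(obj.position, obj.anchorOffsets[i]), grabPoint));
        if (d < bestDistance) { bestDistance = d; best = i; }
    }
    return best;
}

// Called every frame; grabStrength in [0, 1] comes from hand tracking.
void MagneticPull(InteractiveObject& obj, const Vec3& palmPosition,
                  float grabStrength, float maxReach, float dt) {
    const float kGrabThreshold = 0.8f;    // illustrative threshold
    const float kPullSpeed     = 10.0f;   // illustrative, fraction of gap per second
    if (obj.anchorOffsets.empty() || grabStrength < kGrabThreshold) return;

    Vec3 anchorWorld = Add(obj.position, obj.anchorOffsets[ClosestAnchor(obj, palmPosition)]);
    Vec3 toPalm      = Sub(palmPosition, anchorWorld);
    if (Length(toPalm) > maxReach) return;   // hand is not close enough yet

    // Move the object so the chosen anchor closes most of the gap each frame.
    float step = std::fmin(1.0f, kPullSpeed * dt);
    obj.position = Add(obj.position, Scale(toPalm, step));
}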

Right now, when users grab an object and move their fingers around – specifically the thumb and index finger in relation to each other – the engine combines this motion with the relative position vector between the fingers to modify the object. The fingers actively grab based on their pinch strength, while the object’s behavior is defined by its anchor points. With just a few simple anchors, you can quickly and easily define how your users interact with an object.
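
To make the thumb-and-index behavior concrete, here’s an illustrative sketch: the pinch midpoint moves the object, and the change in the thumb-to-index vector since the grab began rotates it. None of the helper math below is taken from the engine itself.

// Sketch of driving a held object from the thumb/index pair: the pinch
// midpoint moves the object, and the change in the thumb-to-index vector
// rotates it. Helper math is minimal and purely illustrative.
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };   // unit quaternion

static Vec3  Sub(const Vec3& a, const Vec3& b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  Cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float Dot(const Vec3& a, const Vec3& b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float Length(const Vec3& a)               { return std::sqrt(Dot(a, a)); }
static Vec3  Normalized(const Vec3& a)           { float l = Length(a); return { a.x/l, a.y/l, a.z/l }; }

static Quat Mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Smallest rotation taking direction `from` to direction `to`.
static Quat FromTo(const Vec3& from, const Vec3& to) {
    Vec3  f = Normalized(from), t = Normalized(to);
    float c = std::fmax(-1.0f, std::fmin(1.0f, Dot(f, t)));
    Vec3  axis = Cross(f, t);
    float axisLen = Length(axis);
    if (axisLen < 1e-6f) return { 1, 0, 0, 0 };    // (anti)parallel: skip
    float angle = std::acos(c);
    float s = std::sin(angle * 0.5f) / axisLen;
    return { std::cos(angle * 0.5f), axis.x * s, axis.y * s, axis.z * s };
}

struct PinchGrab {
    Vec3 grabOffset;            // object position minus pinch midpoint, captured at grab time
    Vec3 initialThumbToIndex;   // thumb-to-index vector at grab time
    Quat initialRotation;       // object rotation at grab time
};

// Each frame while pinching: position follows the pinch midpoint, rotation
// follows the change in the thumb-to-index vector since the grab started.
void UpdatePinchedObject(Vec3& objPosition, Quat& objRotation,
                         const PinchGrab& grab,
                         const Vec3& thumbTip, const Vec3& indexTip) {
    Vec3 midpoint = { (thumbTip.x + indexTip.x) * 0.5f,
                      (thumbTip.y + indexTip.y) * 0.5f,
                      (thumbTip.z + indexTip.z) * 0.5f };
    objPosition = { midpoint.x + grab.grabOffset.x,
                    midpoint.y + grab.grabOffset.y,
                    midpoint.z + grab.grabOffset.z };
    Quat delta = FromTo(grab.initialThumbToIndex, Sub(indexTip, thumbTip));
    objRotation = Mul(delta, grab.initialRotation);
}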


The anchor-based approach can be applied to a variety of potential uses. For instance, if your app had a volume knob, you could make a quick motion to grab it, and then use either your whole hand or just your thumb and finger to twist it. With our scaling demo, you can take a tiny object and stretch it to become huge. (I love this interaction as a developer because it’s one way that we can actually enhance reality and make it better.)
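
The scaling interaction can be sketched in just a few lines; the rule below (scale by the ratio of the current grab-point separation to the separation when the stretch began) illustrates the idea rather than the demo’s exact behavior.

// Sketch of the scaling idea: while two grab points (for example, one per
// hand) hold an object, its scale is the ratio of the current distance
// between them to the distance when the stretch began, times the original
// scale. Purely illustrative.
#include <cmath>

struct Vec3 { float x, y, z; };

static float Distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct StretchGrab { float initialSeparation; float initialScale; };

float UpdateScale(const StretchGrab& grab, const Vec3& grabPointA,
                  const Vec3& grabPointB) {
    float separation = Distance(grabPointA, grabPointB);
    if (grab.initialSeparation <= 0.0f) return grab.initialScale;
    // Pull the points apart to grow the object, bring them together to shrink it.
    return grab.initialScale * (separation / grab.initialSeparation);
}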

Of course, while the core interaction model is absolutely essential, it’s just the start of a compelling 3D experience framework. Here are some of the other technical and UX challenges that we’re currently working on – and where we see room for improvement.

Disappearing hands and sensor range

In a normal physics engine, when your hand disappears at the edge of the frame, the object being held will simply continue moving. That’s because objects in engines like Unity3D have inertia. While hands disappearing outside the Leap Motion Controller’s sensor range is normal, objects inadvertently flying away is a problem!

With our approach, when the hand disappears, the held object simply stays in place. In the future, we might add a second level that uses tracking confidence values to stabilize the hand: a proxy hand would stay in place and wait until better tracking is restored.
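
In code, the “stays in place” rule can be as simple as the sketch below; the field names and confidence threshold are placeholders for illustration.

// Sketch of the "stays in place" rule: if the tracked hand disappears while
// an object is held, freeze the object instead of letting inertia fling it
// away. Field names and the confidence threshold are illustrative.
struct Vec3 { float x, y, z; };

struct HeldObject {
    Vec3 velocity;
    Vec3 angularVelocity;
    bool frozen;
};

void OnHandUpdate(HeldObject& object, bool handVisible, float trackingConfidence) {
    const float kMinConfidence = 0.2f;        // illustrative threshold
    if (!handVisible || trackingConfidence < kMinConfidence) {
        // Hand left the sensor's range: park the object where it is and wait
        // for tracking to come back, rather than handing it to the physics
        // engine with its last-known velocity.
        object.velocity        = { 0, 0, 0 };
        object.angularVelocity = { 0, 0, 0 };
        object.frozen = true;
    } else {
        object.frozen = false;                // normal interaction resumes
    }
}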

Object stability and magnified wobbles

Unlike physical objects, which have mass and inertia, the movements of virtual objects in the interaction engine are defined by how you hold them. Even when you’re holding an object at a standstill, your fingers jitter a little bit. These tiny changes in the vector between your fingers can be amplified into a distracting wobble at the far end of the object.

We’re still working on figuring out how to stabilize this wobble without detracting from the speed and precision of the experience. One potential solution is to use the palm orientation, which is much more stable than the fingers. Of course, since future versions of the engine will evolve alongside our core tracking, the problem might be resolved at a higher level.
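
As an example of the kind of smoothing we’re exploring (not a final solution), you could low-pass filter the finger-derived grip direction and lean on the steadier palm direction when the fingers are nearly still. The blend weights below are illustrative.

// One possible smoothing pass, not necessarily what will ship: low-pass
// filter the noisy finger-derived grip vector, and lean on the steadier palm
// direction when the fingers are nearly still. Blend weights are illustrative.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}
static Vec3 Normalized(const Vec3& v) {
    float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / l, v.y / l, v.z / l };
}

// `smoothed` persists between frames; `rawGripDir` comes from the thumb/index
// vector, `palmDir` from the palm orientation, which jitters far less.
Vec3 StabilizeGripDirection(Vec3& smoothed, const Vec3& rawGripDir,
                            const Vec3& palmDir, float fingerSpeed) {
    const float kSmoothing  = 0.15f;   // illustrative low-pass factor
    const float kStillSpeed = 0.05f;   // below this, trust the palm more
    Vec3 target = rawGripDir;
    if (fingerSpeed < kStillSpeed)
        target = Lerp(rawGripDir, palmDir, 0.7f);   // illustrative blend
    smoothed = Normalized(Lerp(smoothed, target, kSmoothing));
    return smoothed;
}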

Engine latency and the human perception threshold

Much like how the pixel density on Retina Display devices makes it impossible to discern individual pixels, there’s some amount of latency that’s undetectable by the human nervous system. While this “human perception threshold” varies from person to person, we’ve found that 30 milliseconds is a good target threshold.

Our latency target is around 30 milliseconds, which lies below the human perception threshold.

Laggy controls have the power to make or break a platform, which is why we designed the Leap Motion Controller to have near-zero latency. However, the computation required for a complex experience with many interactive parts can add unacceptable levels of latency. As far as users are concerned, slow is slow.

For now, we’ve addressed this problem by assigning priority to the object that your hand is currently interacting with. Grabbing an object serves as a switch, so that the object starts being governed by the rules set by the interaction engine. All other objects within the system are processed afterwards. Of course, the illusion breaks down when your object tries to interact with other objects. Smack a paddle into a ball, and the paddle might momentarily phase through the ball before the ball reacts. This is our next challenge.
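
Conceptually, the update loop looks like the sketch below: the held object is stepped first, everything else afterwards. The structure is illustrative rather than a copy of our scheduler.

// Sketch of the prioritization described above: the object currently governed
// by the interaction engine is stepped first, and everything else is
// processed afterwards. Structure is illustrative only.
#include <vector>

struct Body { bool heldByHand; /* ...pose, velocity, etc. */ };

void StepBody(Body& body, float dt) { (void)body; (void)dt; /* integrate the body */ }

void StepScene(std::vector<Body>& bodies, float dt) {
    // Pass 1: the held object gets updated immediately, so the hand-to-object
    // loop stays under the roughly 30 ms perception threshold.
    for (Body& body : bodies)
        if (body.heldByHand) StepBody(body, dt);
    // Pass 2: every other body is processed afterwards; contacts between the
    // held object and these bodies may lag by a frame, which is the
    // paddle-and-ball artifact described above.
    for (Body& body : bodies)
        if (!body.heldByHand) StepBody(body, dt);
}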

Two-handed interactions

For me, this is the most exciting area of development. With the current interaction engine, you can pass an object back and forth, but there’s so much more we could do. Imagine adjusting the orientation of an object by holding it with one hand and pushing it with the other. Or holding it by one anchor and grabbing a second anchor, so that the object follows your second hand. The possibilities are endless.
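
Purely as a thought experiment, here’s one way the two-anchor idea could be sketched: one hand pins the object by its first anchor, and when a second hand grabs the other anchor, the object aims itself toward that hand. This isn’t current engine behavior.

// Speculative sketch of the two-anchor idea from this paragraph: one hand
// pins the object by its first anchor, and when a second hand grabs the
// second anchor, the object is aimed from the first anchor toward that hand.
// None of this is current engine behavior.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Normalized(const Vec3& v) {
    float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / l, v.y / l, v.z / l };
}

struct TwoHandedHold {
    Vec3 anchorA;              // offset of the first anchor from the object's origin
    bool secondHandGrabbing;   // true once the second hand grabs the other anchor
};

// Returns the object's new position and, via `outAimDirection`, the direction
// its anchor axis should face; turning that direction into a full rotation is
// left to whatever math library the host engine provides.
Vec3 UpdateTwoHandedHold(const TwoHandedHold& hold, const Vec3& handA,
                         const Vec3& handB, Vec3* outAimDirection) {
    // Keep anchor A in hand A (ignoring the object's rotation, for simplicity).
    Vec3 position = Sub(handA, hold.anchorA);
    if (hold.secondHandGrabbing)
        *outAimDirection = Normalized(Sub(handB, handA));   // follow hand B
    return position;
}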

What’s next?

These are still open questions, and I’d love to hear from you. What’s the best way to interact with an object in 3D space? What physical laws need to be bent – or even broken – to create a smooth, intuitive experience? We want the Leap Motion interaction engine to be a great resource for you, so please let us know what you think!

Matt Tytel / Everything Matt does is waves. He has a background in music software, and in his free time develops interactive art using physics simulations and audio. If he’s not writing software, Matt is biking around San Francisco while eating a slice of pizza. Check out more of his work at tytel.org.


Adrian Gasinski / Adrian Gasinski is a software engineer at Leap Motion.
