How can we develop applications for the Leap Motion Controller that don’t require constant visual feedback?
Imagine using your mouse with your eyes shut. How well would it work? Now imagine typing with your eyes shut. How well does that work?
As a musician and musical interaction designer, I have to ask this question for every project I make: can I use this interface without having to look at it? To read music, follow a conductor, or take visual cues from my audience or bandmates, I must be able to operate my instrument without constant visual feedback. I can’t read the notes of a new composition very well while staring at the fretboard of my guitar or the keyboard of my piano.
Of course, these instruments provide tactile feedback to a greater or lesser extent. The guitar offers much more than the piano (string gauge, neck thickness, and the spacing between frets), but with both instruments you can also rely on the proprioceptive feedback of joint position.
At the other extreme, the Leap Motion Controller provides no tactile feedback at all. As a result, when I first started developing projects for Leap Motion, I depended almost entirely on visual feedback. I had to put my hands into just the right position in 3D space to engage the device and control musical parameters.
My inner musician found this incredibly frustrating. Take a moment to imagine an invisible mixing board in the air that you cannot touch or feel. Now, try turning up the volume of one sound channel by moving your finger vertically, perpendicular to the ground.
As you can imagine, if you accidentally shifted slightly to the left or right while moving vertically, you might go from mixing the drums to re-mixing the guitars or vocals. This would completely destroy the balance built with previous gestures. When this happens, you might quickly become disappointed with the technology.
Over time, I realized that this issue was not one of technology, but rather how I was thinking about using it. Below is a breakdown of the ways I was creating problems, as well as some potential solutions that I developed.
Problematic Design
- The quantization of space (parallel channels in the air) was relative to the position of the device only (see the sketch after this list).
- Channels were too close together (so it was easy to control the wrong channel) or too far apart (which led to me waving my hand in the dead space between two channels).
- There were more channels to control than I had fingers to control them with.
- Mixing with “sample and hold” finger positions required me to remember where I had left off in space, so I needed visual feedback to avoid big jumps or heavy smoothing/interpolation.
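To make the problem concrete, here is a minimal sketch of that first design, assuming fingertip positions arrive as (x, y, z) coordinates in millimeters relative to the device (which is how the Leap SDK reports them). The channel count, width, and origin below are illustrative values, not the ones I actually used:

```python
NUM_CHANNELS = 8          # more channels than the fingers of one hand
CHANNEL_WIDTH_MM = 30.0   # narrow enough that neighboring channels are easy to hit
X_ORIGIN_MM = -120.0      # left edge of the "mixer", fixed relative to the device

def channel_for(tip_x_mm):
    """Quantize an absolute x position into a channel index. The boundaries
    live in device space, so any lateral drift of the hand changes channels."""
    idx = int((tip_x_mm - X_ORIGIN_MM) // CHANNEL_WIDTH_MM)
    return max(0, min(NUM_CHANNELS - 1, idx))

def volume_for(tip_y_mm, y_min_mm=100.0, y_max_mm=400.0):
    """Map absolute height above the device to a 0..1 volume."""
    t = (tip_y_mm - y_min_mm) / (y_max_mm - y_min_mm)
    return max(0.0, min(1.0, t))

# Raising a finger while accidentally drifting 25 mm to the right:
print(channel_for(-5.0), volume_for(180.0))   # starts on channel 3...
print(channel_for(20.0), volume_for(320.0))   # ...ends up re-mixing channel 4
```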
Solutions in Progress
- Use the relative positions of fingers to control things – relative to each other, or to the palms/hands – rather than their absolute position in space (sketched in code after this list).
- Limit the number of controls to the number of fingers or hands available to the user(s).
- Rather than using sample and hold, allow parameters to return to a preset value when the user’s hands leave the active control area of the Leap Motion Controller.
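As a rough illustration of how the first and third ideas fit together, here is a sketch using plain (x, y, z) tuples in millimeters rather than the SDK’s vector type; the pinch-spread range, the preset, and the glide rate are all assumed values for demonstration:

```python
import math

PRESET = 0.5              # where the parameter rests when no hand is tracked
SPREAD_MIN_MM = 20.0      # roughly a closed pinch
SPREAD_MAX_MM = 120.0     # roughly a fully open pinch

def spread_control(thumb_tip, index_tip):
    """Control value from the distance between two fingertips: the fingers are
    measured relative to each other, so the whole hand can drift left or right
    without changing the parameter."""
    d = math.dist(thumb_tip, index_tip)
    t = (d - SPREAD_MIN_MM) / (SPREAD_MAX_MM - SPREAD_MIN_MM)
    return max(0.0, min(1.0, t))

def update(param, hand_present, thumb_tip=None, index_tip=None, glide=0.1):
    """Per-frame update: follow the pinch while a hand is tracked; otherwise
    glide back to PRESET instead of holding the last sampled value."""
    target = spread_control(thumb_tip, index_tip) if hand_present else PRESET
    return param + glide * (target - param)

vol = PRESET
vol = update(vol, True, (0, 200, 0), (60, 210, 10))   # hand tracked: follow the pinch
vol = update(vol, False)                              # hand gone: glide home, no jump
```

Because the control is a distance between fingers, there is no dead space between channels to wave through, and because the parameter glides home on its own, there is nothing for the user to remember.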
Moving forward, I’m trying to develop new ways of providing auditory feedback – not only for the direct control of musical parameters (as with the polyphonic theremin), but also as feedback for users while they control other media (including musical sounds).
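For example (and this is only one hypothetical approach, not a finished design), each control could be given a distinct, discrete pitch, so the ear can catch a boundary crossing that the eye would otherwise have to. The pentatonic mapping and the send_to_synth hook below are placeholders for illustration:

```python
PENTATONIC_SEMITONES = [0, 2, 4, 7, 9]   # major pentatonic: distinct, consonant steps

def feedback_pitch_hz(channel_index, base_hz=220.0):
    """A discrete pitch per channel, so moving between channels is audible."""
    octave, degree = divmod(channel_index, len(PENTATONIC_SEMITONES))
    semitones = 12 * octave + PENTATONIC_SEMITONES[degree]
    return base_hz * 2.0 ** (semitones / 12.0)

def on_channel_change(old_channel, new_channel, send_to_synth=print):
    """Cue the user with a short tone whenever the active channel changes."""
    if new_channel != old_channel:
        send_to_synth(feedback_pitch_hz(new_channel))
```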
Just as with a musical instrument, you can achieve much more if you can look away from your interface while playing it. My hope is that, by freeing users from dependency on visual feedback, we can greatly expand the possible applications of the Leap Motion Controller.