How do I control multiple parameters and know which one I’m controlling?
After watching a few video reviews of the Leap Motion Controller, it seems that some newcomers to the Leap Motion user community are looking for more clarity when controlling multiple parameters. Thoughtful gesture mapping and well-designed feedback can solve this problem, or at least make interactions more intuitive.
For the sake of discussion, let’s define mapping as the connection of a control gesture to a variable parameter. For example, moving your hands further apart or closer together (control gesture) to control zoom (variable parameter). We’ll define feedback as any information received by the user that lets them understand the state of the system.
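To make the zoom example concrete, here’s a minimal sketch of that mapping in plain Python. The palm coordinates, distance range, and zoom range are all hypothetical stand-ins; in a real app they would come from your tracking API (in the Leap SDK, for instance, each hand exposes a palm position in millimeters).

```python
# A minimal sketch of a mapping: hand separation (control gesture) -> zoom
# (variable parameter). Palm positions are plain tuples here; a real app
# would read them from its tracking API.
import math

def zoom_from_hand_distance(left_palm, right_palm,
                            min_dist=50.0, max_dist=400.0,
                            min_zoom=1.0, max_zoom=4.0):
    """Linearly map the distance between two palms (mm) to a zoom factor."""
    dist = math.dist(left_palm, right_palm)
    # Clamp to the usable range so the zoom stays within sensible bounds.
    dist = max(min_dist, min(max_dist, dist))
    t = (dist - min_dist) / (max_dist - min_dist)
    return min_zoom + t * (max_zoom - min_zoom)

# Hands 120 mm apart -> a zoom factor near the low end of the range (1.6).
print(zoom_from_hand_distance((-60, 200, 0), (60, 200, 0)))
```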
With those terms defined, I have some questions for you to consider.
Is your mapping arbitrary?
Will users have an intuitive relationship with the mapping, based on previous experience? For example, in a driving app you wouldn’t use a forward/backward hand movement for steering. Instead, you’d use a left/right movement for one hand, or a height differential between two hands (e.g. holding the left hand high and the right hand low turns right, like turning a car’s steering wheel clockwise).
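Here’s a hedged sketch of that two-handed steering mapping. The sensitivity and angle limit are made-up numbers that would be tuned for a real app, and the palm heights would come from your tracking API (e.g. the y component of a palm position in the Leap SDK).

```python
# Sketch: the height difference between the palms plays the role of a
# steering wheel. Constants are illustrative, not tuned values.
def steering_angle(left_palm_y, right_palm_y, sensitivity=0.5, max_angle=45.0):
    """Map the palm height difference (mm) to a steering angle in degrees.
    Left hand higher than right -> positive angle -> turn right, mirroring
    how a steering wheel is turned clockwise."""
    angle = (left_palm_y - right_palm_y) * sensitivity
    return max(-max_angle, min(max_angle, angle))

print(steering_angle(250.0, 150.0))  # left hand 100 mm higher -> 45.0 (hard right)
```

Clamping the output matters here: without a limit, raising one hand far above the other would produce wildly oversteered values.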
Is the feedback for specific controls clear and easy to interpret?
Do you know what you’re about to control – before you start controlling it? In the past, I’ve tried controlling multiple parameters discretely by creating an array of vertical sliders. To control them, I needed to see the sliders move, but by then I had already started changing that parameter – too late if I didn’t want to change that specific value!
I later found it was much easier to control multiple parameters by holding out my fingers (1 finger for one parameter, 2 fingers for another), then moving my hand up and down to control the parameter. With this bodily feedback, I would know what I was about to control before the system could produce any auditory or visual feedback.
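Here’s a rough sketch of that scheme, assuming you can read an extended-finger count and a palm height from your tracking data (the parameter names, finger counts, and height range below are purely illustrative).

```python
# Sketch: the number of extended fingers selects which parameter is live,
# and palm height sets its value. Names and ranges are hypothetical.
PARAMETERS = {1: "volume", 2: "filter_cutoff"}

def update_parameters(extended_fingers, palm_y, state,
                      min_y=100.0, max_y=400.0):
    """Select a parameter by finger count, then map palm height to 0..1."""
    name = PARAMETERS.get(extended_fingers)
    if name is None:
        return state  # unmapped finger count: control nothing
    t = (palm_y - min_y) / (max_y - min_y)
    state[name] = max(0.0, min(1.0, t))
    return state

state = {"volume": 0.5, "filter_cutoff": 0.5}
print(update_parameters(2, 250.0, state))  # two fingers: filter_cutoff -> 0.5
```

Because the selector is your own hand posture, you know which parameter is live before any value changes – the bodily feedback arrives first.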
Geco MIDI is one example of an already available app that takes advantage of body feedback, brilliantly using hand orientation and finger position (open hand vs. closed hand) to differentiate between control streams.
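I can’t speak to Geco MIDI’s internals, but the general pattern is easy to sketch: route a single continuous input to different control streams based on hand posture. The posture tests and stream names below are hypothetical simplifications; the Leap SDK exposes comparable signals, such as the palm normal and, in later versions, a grab strength.

```python
# Sketch: pick a control stream from hand orientation and openness.
# Inputs and stream names are illustrative stand-ins for real tracking data.
def control_stream(palm_normal_y, extended_fingers):
    """Route input to one of four streams based on hand posture."""
    facing_down = palm_normal_y < 0  # palm normal pointing downward
    is_open = extended_fingers >= 4
    if facing_down and is_open:
        return "stream_a"
    if facing_down and not is_open:
        return "stream_b"
    if not facing_down and is_open:
        return "stream_c"
    return "stream_d"

print(control_stream(-1.0, 5))  # open hand, palm down -> "stream_a"
```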
However, Geco MIDI’s mapping is also open-ended – the user ultimately defines the relationship between a gesture and a specific parameter. To me, it seems likely that more developers are going to map gesture controls to known parameters. By doing so, they can take further advantage of existing relationships – the ever-evolving language of gestures – to create more intuitive and approachable experiences.
(For further reading on mapping and feedback, I’d highly recommend the discussion of affordances in Donald Norman’s The Design of Everyday Things.)
In short, developers should take advantage of body feedback, which reduces the need for audio/visual feedback in apps where those channels are impractical or unwanted. Whenever possible, they should also build on intuitively understood mappings.
How do you approach the issue of body feedback when developing apps? What are some intuitive gestures that you’ve incorporated into your work, or would like to experiment with? I’d love to hear your thoughts in the comments below.