Scrolling is awesome. It might seem like a trivial task to us because we do it so often, but when you stop to think about it, the ability to scroll absolutely changed the game of interfaces. It allows the user to read more without having a jarring page change, and lets you hold near-infinite material in a finite amount of space. Kinda like the TARDIS. The only problem with scrolling is that you need a way to navigate it.

Because of this, I set out to make an interface that allowed me to try out some new methods of scrolling. Some methods are good, others are so-so, and a few are downright vulgar. You can see the project live in action (you’ll need WebGL, for reasons I will discuss momentarily) on my project page, and I’ve put all the files up on GitHub.

Typically, when designing an interface, the only task is to make something that makes sense visually. Buttons of like types should be grouped accordingly, you need to account for Fitts’s Law (check out a cool article about that on Six Revisions), and you should make sure everything is aesthetically pleasing.

As Leap Motion developers, however, we don’t have it quite as easy. Rather than just having to deal with the layout of the site, we also need to create the most intuitive mode of interaction possible. In this experiment, I chose to focus on the interaction rather than the actual design of the site – I can already feel Pohung cringing at the heinous use of normal materials – but I believe that a more user-friendly design would dramatically improve how the interface feels. That being said, let’s talk a bit about interaction models.

All Interaction Models

All of the interaction models have a few things in common. First off, they use speed in order to move along the x axis. Additionally, they all have a ‘dampening factor,’ which is part of the GUI (made using dat.gui).

By giving them speed, they will always slow to a stop rather than stopping instantly. This is helpful mostly because it creates a more ‘physical’ interface. It keeps users from experiencing a jarring stop as soon as fingers are lost or the state of interaction changes.

At the end of js/scroller.js, you can see that all the tiles’ positions are updated using this speed. If there was no dampening, all the tiles would quickly start and stop, preventing the interface from being as ‘smooth’ as it could be. Most of the different interaction models are actually just changing this single variable, and the scroller object takes care of the updating for all of them.
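As a rough sketch (the variable and function names here are assumed, not copied from the repo), that shared update amounts to nudging every tile by the current speed and then decaying the speed each frame:

```javascript
// Sketch of the shared per-frame update (names assumed, not from the repo).
// Every tile moves by the current speed; the speed then decays toward zero.
function updateTiles(tiles, state) {
  for (var i = 0; i < tiles.length; i++) {
    tiles[i].x += state.speed;
  }
  state.speed *= state.dampeningFactor; // closer to 1 = a longer glide
}

var state = { speed: 10, dampeningFactor: 0.9 };
var tiles = [{ x: 0 }, { x: 100 }];
updateTiles(tiles, state);
// tiles[0].x is now 10, and state.speed has decayed to 9
```

Each interaction model then only has to write to the shared speed in its own way; this loop does the rest.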

Velocity-Based Interaction

This interaction model uses the velocity of the hand to scroll through pictures. The code for this interaction model can be found in interactions/velocityBased.js of the GitHub repo. This mode of interaction is probably the simplest, and the majority of it is done in only two lines of code:


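The snippet itself isn’t shown above, but a sketch of that assignment might look like this (divisionFactor is an assumed tuning name; hand.palmVelocity is the Leap Motion JS frame’s per-hand [x, y, z] velocity in mm/s):

```javascript
// Sketch of the velocity-based model (divisionFactor name assumed).
var divisionFactor = 20; // tuned until the movement feels natural

function onFrame(frame, state) {
  if (frame.hands.length > 0) {
    // Map the palm's x velocity straight onto the scroller's speed.
    state.speed = frame.hands[0].palmVelocity[0] / divisionFactor;
  }
}

// Simulated frame data, so the sketch runs without a controller:
var state = { speed: 0 };
onFrame({ hands: [{ palmVelocity: [200, 0, 0] }] }, state);
// state.speed is now 200 / 20 = 10
```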
Here we can see that all we need to do is take the palmVelocity (in the x direction) and assign it as the speed, after dividing it by a number that makes the movement feel natural. The rest of the code is basically just telling us when we should be assigning the speed.

I really like this interaction model for one main reason – it is extraordinarily responsive. When you move your hand, the tiles move with articulate precision. It feels like you are even more powerful than usual on a computer because the objects seem to have no mass as you move them.

This is one of the beautiful parts of a digital interface: you can be physical when you want to, but can give the user superpowers just as easily. (In fact, many times it’s easier to give users superpowers than to make things totally adhere to reality.) If anybody made, for example, a Superman game where you actually had to be able to fly in order to play it, the game would end in disappointment and, of course, a splat. Because of this luxury we have in the digital realm, many interfaces that were never possible before are possible now.

The problem is that with great power comes great responsibility, and in this case, that great responsibility means never screwing up. Because the interface is so responsive, it becomes more difficult to put your hand into the field without negatively affecting the movement of the scroller. If I’m trying to loop through as many objects as possible, I might be moving my hand to the left, and as I loop back around I will accidentally catch the field going in the opposite direction, sending the tiles careening the wrong way. This tends to be the problem with all interaction models that have articulate responsiveness: they remain just as responsive even when the user is doing the wrong thing.

Motion-Based Interaction

This interface is almost exactly the same as the velocity-based interaction, except for one difference: it uses the motions API rather than the palm velocity. Again, the speed is assigned with ease:


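Again, the snippet isn’t shown, but using the motions API it might look roughly like this (frame.translation(sinceFrame) returns the [x, y, z] displacement in mm in the Leap JS API; divisionFactor is an assumed name):

```javascript
// Sketch of the motions-API version (divisionFactor name assumed).
var divisionFactor = 2;

function onFrame(frame, previousFrame, state) {
  if (frame.hands.length > 0) {
    // translation() gives the displacement since the earlier frame.
    state.speed = frame.translation(previousFrame)[0] / divisionFactor;
  }
}

// A stand-in frame, so the sketch runs without a controller:
var state = { speed: 0 };
var fakeFrame = {
  hands: [{}],
  translation: function (sinceFrame) { return [6, 0, 0]; }
};
onFrame(fakeFrame, null, state);
// state.speed is now 6 / 2 = 3
```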
The same benefits and problems of the velocity-based interaction model apply to this model as well. It might be very responsive, but it doesn’t stop users from being very bad at using it.

Grab-Based Interaction

This interaction model is actually no different speed-wise from the motion-based model. It uses translation to add to the speed, and is very responsive. The difference between this model and the two above is the state in which you scroll. Whereas the two previous models scroll when there are multiple fingers in the field, this model only scrolls when there is a hand but no fingers. This means that the user has to ‘grab and throw’ the scroller.
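That ‘grab’ state boils down to a simple guard on the frame (a paraphrase, not the repo’s exact code; frame.hands and frame.fingers are the Leap JS frame’s tracked lists):

```javascript
// Scroll only when a hand is present but no fingers are tracked,
// i.e. a closed fist (a paraphrase of the grab-based state check).
function isGrabbing(frame) {
  return frame.hands.length > 0 && frame.fingers.length === 0;
}

// Simulated frames:
var fist = { hands: [{}], fingers: [] };
var openHand = { hands: [{}], fingers: [{}, {}, {}, {}, {}] };
// isGrabbing(fist) → true; isGrabbing(openHand) → false
```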

‘Throwing’ an interface sounds pretty cool. It is a super ‘physical’ way of interacting with the computer, and makes it feel like the singularity is just around the corner. The problem with this method is mostly its uncertainty. It might just be the tattoo on my wrist, but sometimes the Leap Motion Controller will pick up an extra finger, even when my fist is clenched. This means I don’t move when I think I will, or move when I don’t think I will.

We’re working on this exact problem (not the tattoo, but the tracking), and I’m certain that it will be solved, mostly because everybody working on the problem is straight-up BRILLIANT. That being said, even when this specific issue gets fixed, it’s hard to tell exactly when multiple fingers get picked up, even for the most veteran Leap users. As I try to fling the scroller by letting go at exactly the right moment, many times I will let go too soon or too late, causing unneeded stress and disappointment on the scale of a Superman who can’t fly. For that reason, I feel like this interaction model is currently the weakest, and even as the tracking improves, I can’t guarantee this method will.

Momentum-Based Interaction

This interaction model is probably my favorite. Rather than mapping the velocity of the palm directly to the velocity of the tiles, it adds the velocity of the palm to the velocity of the tiles.


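In place of the missing snippet, the addition might be sketched like this (divisionFactor is an assumed name, and is much larger than in the velocity-based model):

```javascript
// Sketch of the momentum-based model: the palm velocity is ADDED to the
// speed instead of replacing it (divisionFactor name assumed).
var divisionFactor = 500;

function onFrame(frame, state) {
  if (frame.hands.length > 0) {
    state.speed += frame.hands[0].palmVelocity[0] / divisionFactor;
  }
}

// Two simulated pushes in the same direction keep building up speed:
var state = { speed: 0 };
onFrame({ hands: [{ palmVelocity: [500, 0, 0] }] }, state);
onFrame({ hands: [{ palmVelocity: [500, 0, 0] }] }, state);
// state.speed is now 2 (two pushes of 500 / 500 each)
```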
Because you’re adding to the speed rather than setting it directly, the divisionFactor will be much greater. This interaction method accomplishes a few tasks that make it the most feasible to me.

  • It makes it so that sticking your hand into the field does nothing negative. In the case of the other interaction models, this will instantaneously stop the movement of the scroller, which tends to be jarring, especially when you don’t mean to put your hand in the field.
  • It makes the interaction feel slightly more physical. Although we discussed earlier how this can be more of a disappointment than an enhancement, in this case it feels pretty cool to be ‘pushing’ a digital object.
  • It allows you to continue pushing the scroller in a single direction, so you can give it as much speed as you want (although you also need to push just as much in the opposite direction to slow it back down).

There is one more important difference between this interaction model and other interaction models: It only allows scrolling when your hand is perpendicular to the Leap Motion Controller, rather than parallel. This slight change actually makes a HUGE difference in the ‘physicality’ of the interaction model. If you imagine pushing physical objects, your hand is always perpendicular to the direction you are pushing. Because of this, a vertical hand swiping from side to side feels much more like what you would do with a physical object than allowing for any orientation.
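One way to test for a vertical hand (the threshold value here is assumed) is to look at the palm normal, which the Leap JS API exposes as a unit [x, y, z] vector; when the hand is vertical, its x component is near ±1:

```javascript
// Sketch of a vertical-hand test (threshold value assumed).
// hand.palmNormal is a unit [x, y, z] vector in the Leap JS API.
function isHandVertical(hand, threshold) {
  threshold = threshold || 0.7;
  return Math.abs(hand.palmNormal[0]) > threshold;
}

var verticalHand = { palmNormal: [0.95, 0.2, 0.1] };  // palm facing sideways
var flatHand = { palmNormal: [0.1, -0.98, 0.1] };     // palm facing down
// isHandVertical(verticalHand) → true; isHandVertical(flatHand) → false
```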

Another benefit of using a vertical hand is that it allows the user to use their hand like a sort of ‘paddle’ – when you stroke in the direction you want the tiles to move, you keep your hand vertical, pushing the tiles like you would water. However, after you’ve pushed all the way through the field, if you want to keep scrolling, you can then make your hand horizontal to move back through the field without affecting the flow of water. This means that you can endlessly push in one direction without having to worry about ‘catching’ the tiles and sending them in the wrong direction.

One more cool part about using the vertical hand is that it lets you continue scrolling at whatever steady pace you choose. To do this, swipe your vertical hand in the direction you want the tiles to move, and then continue to hold your hand vertically in the field. This prevents the dampening constant from being applied, and keeps the tiles smoothly moving in the direction you moved, at the speed you set.
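Mechanically, that steady pace can be achieved by skipping the dampening step while a vertical hand is in the field (a paraphrase of the behavior described above, with assumed names, not the repo’s exact code):

```javascript
// Skip dampening while a vertical hand is present, so the tiles coast
// at a steady pace (names assumed; a paraphrase, not the repo's code).
function applyDampening(state, verticalHandPresent) {
  if (!verticalHandPresent) {
    state.speed *= state.dampeningFactor;
  }
}

var state = { speed: 8, dampeningFactor: 0.9 };
applyDampening(state, true);  // hand held vertical: speed stays at 8
applyDampening(state, false); // hand removed: speed decays to 7.2
```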

Interface Design

Although I claimed not to concentrate on the interface design, there is one thing I tried out. This is where the WebGL comes into play. The giant bar in the background loops around as the images are looped through, and when it comes back to the front, it tells the user that they have scrolled through all the images. The code is pretty simple, and is in js/scroller.js.


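The snippet isn’t shown above, but the looping behavior reduces to wrapping the total scroll offset with a modulo (names and widths here are assumed; the repo’s version lives in js/scroller.js):

```javascript
// Sketch of the looping background bar: wrap the total scroll offset
// into [0, totalWidth) so the bar comes back around to the front
// (names assumed; handles negative offsets too).
function barPosition(scrollOffset, totalWidth) {
  return ((scrollOffset % totalWidth) + totalWidth) % totalWidth;
}

// With 10 tiles of width 100, scrolling 1250 units leaves the bar
// 250 units past the front:
var pos = barPosition(1250, 1000);  // 250
var neg = barPosition(-50, 1000);   // 950
```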
I doubt that this small piece of information is worth rendering an entire extra scene for, but you could imagine the background being a beautiful space scene, or maybe a mountain, or clouds, or a particle system pulsing to the sound of galactic beats. As it stands, though, what it adds to the interface is not really worth the extra rendering power required.


Post-Mortem

Holding a post-mortem might not make sense for something that was never alive. That being said, I can say a ton of things that I know are wrong with this interface. The greatest change I would like to see is additional interaction modes that use the touch zone. When I created this interface, the touchZone didn’t exist, but I feel like adding it into some of these methods might make them much more fluid.

I’m sure there are a ton of other things that could be changed as well, but I was happy with this project as an experiment. This is probably the most important lesson I’ve come away with from making this interface – experimentation is vital.

As Leap Motion developers, we have a difficult task in front of us. We need to explore the unknown, we need to define the unknown, and we need to make the unknown usable. This is ridiculously difficult, but it is also unfathomably rewarding. When you make an interaction model for the Leap, you are helping to define the future. You’re making something that nobody in the history of the known universe has ever made before. THAT is a truly magical feeling.

This unknown comes with so much frustration as well. I can’t count the number of times I’ve worked all night on a project, only to destroy it all because it’s worthless. Yet, when I wake up in the morning after a night of too much caffeine and techno, there sometimes is an experiment that’s unlike anything the world has ever seen.

This is why I ask fellow developers not just to create, but to experiment, to try something that makes no sense. In the end, it might just be a throwaway project. But even those projects can help us define the future of human-computer interaction. Post them on the forums and discuss the problems you have, the solutions you found, and any frustrations you have with anything we do at Leap Motion. It’s only by doing all this that we can begin to learn how exactly this technology can be used.

Let me know what you think in the comments, and if you have any questions, comments, or suggestions about the code, Leap Motion, or the future in general, send me an email. I would love to hear from you!

– Isaac

A musician and creative researcher, Isaac explores the boundaries of space and sound with the Leap Motion JavaScript API. He has designed a variety of experimental interfaces that let you reach into music, big data, and even the fourth dimension.