Menu interfaces are a vital aspect of most software applications. For well-established input methods – mouse, keyboard, game controller, touch – there are a variety of options and accepted standards for menu systems. For the array of new 3D input devices, especially in virtual reality, the lack of options and standards can create significant development challenges.

Yesterday, I introduced you to Hovercast – a hand-controlled menu interface for virtual reality environments. In this post, we’ll take a closer look at the development process behind Hovercast, including some insights on usability and design for virtual reality.

Concept

I occasionally scale back from my work on client projects to make time for my own. My recent interest in virtual reality and Leap Motion input led to several interesting project ideas. However, I started to see a disappointing pattern: beyond each project’s primary features and interactions, the complexity increased dramatically.

Where there are problems, there is opportunity – so I began thinking about solutions. I realized that this “complexity wall” is an obstacle that almost any virtual reality project will face. I wanted to create a versatile user interface that could manage this complexity, without adding to the project’s learning curve. The Hovercast project was born.

First Iteration

From the start, I wanted to attach the menu to a hand, so that it would feel like a natural, powerful extension of your virtual self. It also needed to be available on demand. Controlling the menu’s visibility via hand orientation was a natural and unobtrusive solution.
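To make that idea concrete, here is a minimal Unity-style sketch of orientation-based visibility. This is an illustration, not Hovercast’s actual code: the palm transform, the use of `palm.up` as the palm normal, and the 0.7 threshold are all assumptions for the example.

```csharp
using UnityEngine;

// Illustrative sketch (not Hovercast's actual code): show the menu only
// while the palm faces the viewer. The 0.7 threshold and the use of
// palm.up as the palm normal are assumptions for this example.
public class MenuVisibility : MonoBehaviour {
    public Transform palm;        // palm transform supplied by hand tracking
    public Transform headCamera;  // the user's viewpoint
    public GameObject menu;

    void Update() {
        // Direction from the palm to the viewer's eyes
        Vector3 toEyes = (headCamera.position - palm.position).normalized;

        // How directly the palm faces the eyes (1 = directly, -1 = away)
        float facing = Vector3.Dot(palm.up, toEyes);

        // Show the menu only while the palm points mostly toward the eyes
        menu.SetActive(facing > 0.7f);
    }
}
```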

The first Hovercast prototype attached menu items to individual fingertips and the spaces between them. While it was fun to watch the labels react to your fingers, this approach was slightly unreliable, and often made selections difficult.

Virtual reality experiences, with new types of 3D input, require new thinking about usability and design. Here are some informal guidelines of my own, which shaped the Hovercast project from the very beginning:

  • Simple Inputs. Where possible, use simple hand motions and reliable gestures. These are easier for the user to learn and perform, and less likely to fail. With some creativity, you can find ways to achieve complex results with very little learning curve.
  • Visual Indicators. Provide visual indicators to communicate information about the user’s input. This might be their hand’s proximity to a target, their grip strength, gesture progress, etc. These can be effective in a variety of visual formats, even subtle ones. They work best for actions with a linear progression (from 0% to 100%). Indicators provide vital feedback about which actions are important, how much action is required, and the user’s location in 3D space.
  • Hover Actions. Physics-based interactions (like a button push) are useful in many scenarios, but they have drawbacks: your hand may obscure the target, and the lack of haptic feedback can make it difficult to gauge your progress. Hover actions simply require the user to hold their cursor near an item for a short time. This encourages the use of Simple Inputs, and usually requires Visual Indicators to communicate hover progress.

Hovercast has three Simple Inputs: hand orientation, the “grab” gesture, and the position of the cursor finger. Users highlight and select menu items using Hover Actions. Each menu item contains Visual Indicators that communicate the cursor’s proximity and the selection timer.
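To make the Hover Action and Visual Indicator concepts concrete, here is a hypothetical Unity sketch of a dwell-to-select menu item. The radius, timing, and method names are illustrative assumptions, not Hovercast’s actual API.

```csharp
using UnityEngine;

// Hypothetical hover-action sketch: select an item after the cursor stays
// near it for a set time. Distances, times, and names are illustrative.
public class HoverItem : MonoBehaviour {
    public Transform cursor;          // fingertip cursor position
    public float hoverRadius = 0.04f; // meters; how close counts as "hovering"
    public float selectTime = 0.6f;   // seconds of hovering before selection

    float hoverTimer;

    void Update() {
        float distance = Vector3.Distance(cursor.position, transform.position);

        // Visual Indicator #1: proximity, 1 when touching, 0 when far away
        float proximity = Mathf.Clamp01(1f - distance / (hoverRadius * 4f));

        if (distance <= hoverRadius) {
            hoverTimer += Time.deltaTime;
        } else {
            hoverTimer = 0f; // leaving the item cancels the selection
        }

        // Visual Indicator #2: selection progress from 0% to 100%
        float progress = Mathf.Clamp01(hoverTimer / selectTime);
        UpdateIndicators(proximity, progress);

        if (hoverTimer >= selectTime) {
            hoverTimer = 0f;
            OnSelected();
        }
    }

    void UpdateIndicators(float proximity, float progress) { /* drive visuals */ }
    void OnSelected() { /* perform the menu item's action */ }
}
```

Because both proximity and progress are simple 0-to-1 values, they map naturally onto the kind of linear Visual Indicators described above.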

Second Iteration

I saw a variety of issues with the first prototype. The selection targets were too small, the menu failed if a single finger’s tracking was off, and it just didn’t look or feel the way I had hoped.

The arc-shaped menu bar was my solution to these issues. It gave the menu much-needed structure, improved its appearance, and helped it feel more like an extension of your virtual self. It made selections easier, and allowed the menu to be less affected by minor tracking glitches. It also provided a way to introduce “slider” inputs.

This iteration also included several other improvements: title text placed over the palm, variable-sized menu segments, a ring around the cursor finger, and many options for customization.
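For illustration, here is a hypothetical sketch of how segments might be laid out along an arc, including variable-sized segments. The class name, angles, and radius are assumptions for the example, not Hovercast’s implementation.

```csharp
using UnityEngine;

// Hypothetical arc-layout sketch: place menu segments along an arc around
// the hand, with per-item relative sizes. Angles and radius are illustrative.
public static class ArcLayout {
    // relativeSizes: e.g. {1, 1, 2} gives the third segment twice the arc span
    public static void Layout(Transform[] segments, float[] relativeSizes,
            float arcDegrees, float radius, Transform handCenter) {
        float total = 0f;
        foreach (float size in relativeSizes) total += size;

        float startAngle = -arcDegrees / 2f;

        for (int i = 0; i < segments.Length; i++) {
            float span = arcDegrees * (relativeSizes[i] / total);
            float midAngle = startAngle + span / 2f;
            startAngle += span;

            // Position each segment on the arc, in the hand's local plane
            float rad = midAngle * Mathf.Deg2Rad;
            Vector3 localPos = new Vector3(Mathf.Sin(rad), 0f, Mathf.Cos(rad)) * radius;
            segments[i].position = handCenter.TransformPoint(localPos);
            segments[i].rotation = handCenter.rotation;
        }
    }
}
```

A “slider” segment could then map the cursor’s angle within its own arc span to a value between 0 and 1.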

My decisions during this iteration led to some additional thoughts on usability:

  • Big Targets. Even with depth perception and good input precision, virtual reality interactions are not as accurate as their real-world equivalents. It’s easy to miss small targets, and missing anything at all is a frustrating experience. If any accuracy is required, use Visual Indicators to communicate proximity to the target.
  • Aligned Targets. Place interaction targets near each other, keeping them aligned on a single plane. There could be variations on this, but the intent is to limit the amount of searching for targets in 3D space. When targets are aligned, users can navigate between them more quickly and confidently.
  • Relaxed Poses. Avoid scenarios where the user has to maintain the same hand pose for too long. This is especially true if the pose requires some straining. When possible, allow for some hand and finger movement without disrupting the current task.

With its arc-shaped menu, Hovercast introduced Aligned Targets. Its menu segments could now become Big Targets, and the new menu positioning allowed for Relaxed Poses.

Third Iteration

With the menu’s core functionality and design in place, this iteration focused on improving Hovercast’s internal structure and customization features. I made it possible for developers to change customizations on the fly, and to provide optional per-menu-item customizations. Hovercast has two types of customization:

  • Settings allow the developer to change several aspects of the default menu appearance – including colors, sizes, fonts, and more.
  • Renderers allow the developer to replace the default appearance of the menu segments. For example, the developer could create renderers that display the menu segments as circles, or even spheres – all with different visual indicators, icons, etc.
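As a rough sketch of these two customization styles (the type and member names below are illustrative assumptions, not Hovercast’s actual API):

```csharp
using UnityEngine;

// Hypothetical sketch of the two customization styles described above;
// these type and member names are illustrative, not Hovercast's actual API.
public class MenuSettings {
    public Color BackgroundColor = Color.black;
    public Color HighlightColor = Color.white;
    public float TextSize = 0.02f;
    public string FontName = "Arial";
}

// A renderer swaps out how a segment is drawn, while the menu logic
// (hover timing, selection state) stays the same.
public interface ISegmentRenderer {
    void Build(Transform segmentRoot, MenuSettings settings);
    void UpdateIndicators(float proximity, float selectionProgress);
}

// Example: a sphere-shaped segment renderer
public class SphereSegmentRenderer : ISegmentRenderer {
    GameObject sphere;

    public void Build(Transform segmentRoot, MenuSettings settings) {
        sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        sphere.transform.SetParent(segmentRoot, false);
    }

    public void UpdateIndicators(float proximity, float selectionProgress) {
        // Scale up slightly as the cursor approaches
        sphere.transform.localScale = Vector3.one * (0.03f + 0.01f * proximity);
    }
}
```

Separating the two keeps simple tweaks simple, while developers who need a completely different look can replace the rendering wholesale.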

I also added a subtle highlight along the inside edge of the menu segments. This highlight appears on the segment nearest to the cursor, but only when that segment is in a “selectable” state. This tells the user which menu item (if any) they are about to select.

Tips on Developing VR Tools

Working with virtual reality has been exciting for me, with so many new concepts to explore and challenges to address. Building tools is particularly interesting at this early stage, as they can help developers (myself included) work around some difficult challenges. Here are some thoughts that might be helpful when building tools of your own:

  • Build a demo from the start. This helps you understand the ways that developers will actually use the tool. You can keep the demo up-to-date by adding features in parallel with the tool. Try to create a variety of scenarios in the demo, to ensure the tool can handle them well.
  • Invest extra time in your demo. A nice-looking demo with interesting features gives you an easy way to create sharable videos of your work. When people can visualize your project, they will be more likely to understand it, share it, and actually use it.
  • Explain your project, and its purpose, in plain language. In my first attempts to share Hovercast, I didn’t explain many things that seemed intuitive to me, so people didn’t understand the project. Try to describe the problem you’re solving, the benefits your project provides, and the scenarios where it’s useful.
  • Explain everything else, too. Many concepts related to virtual reality are not yet well-known. Be ready to describe how virtual reality works, how your hands are appearing on the screen, why there are side-by-side videos, etc.

Developing with Leap Motion Tracking

While motion tracking is continually improving, it’s essential to take advantage of the technology’s current strengths. With Hovercast, I have focused on motions and gestures that the Leap Motion Controller tracks smoothly and reliably.

The menu’s “palm toward eyes” orientation seems to be the most accurate hand pose for the Leap Motion software. I expect this is because the camera angle provides a full, unobstructed view of the hand and fingers.

Initially, I thought Hovercast would use a “one-finger point” pose for the cursor. The motion tracking didn’t always perform well for this pose, so I made the cursor simply watch your fingertip position. This approach also improves the experience, as the user can move their cursor hand more freely.
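A fingertip-following cursor like this takes only a few lines; the smoothing approach and rate below are my own illustrative assumptions, not Hovercast’s code:

```csharp
using UnityEngine;

// Hypothetical sketch: the cursor simply follows the fingertip, with light
// smoothing to hide small tracking jitter. The smoothing rate is illustrative.
public class FingertipCursor : MonoBehaviour {
    public Transform fingertip;    // tip position from hand tracking
    public float smoothing = 20f;  // higher = snappier, lower = smoother

    void Update() {
        transform.position = Vector3.Lerp(
            transform.position, fingertip.position, smoothing * Time.deltaTime);
    }
}
```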

Don’t have an Oculus Rift headset? Along with the VR demo, I included two alternate versions that require only a Leap Motion Controller.

  • The “table mount” version works, but not very well. Since the device is directly below the hands, the “palm toward eyes” hand orientation only allows it to see the edge of the hand – all the other fingers are obscured. To give the device a better view of the hand, I shifted the ideal hand orientation to something like “palm toward chest”.
  • The “head mount” version works much like the main demo. To try this version, users will need to attach the Leap Motion Controller to a hat or headband. (I’ve also heard that you can hold it with your teeth!)

From a user perspective, I find that using the Leap Motion Controller in medium-to-low lighting conditions works best for VR tracking. Using a computer that is well above the minimum requirements greatly improves motion tracking. Limited processing power makes it more difficult for the Leap Motion tracking software to keep both a high level of accuracy and a smooth framerate.

What’s Next for Hovercast?

To encourage adoption of the project, I’ve developed Hovercast as an open source project, with a GNU General Public License. This means that anyone can use it for free in open-source projects, commercial or not. Anyone interested in using Hovercast for closed-source projects can contact me about licensing options. My development and consulting services are also available.

Beyond that, of course, I plan to continue development! New Hovercast features might include a one-handed usage mode, additional types of 3D input, and support for other game engines. I also have several concepts in mind for new virtual reality tools and interfaces. I’d love to expand my work on Hovercast into an entire suite of tools that can help developers create stunning virtual reality experiences.

Ready to get started? Learn how to add Hovercast to your own projects with this quick getting started guide.

Epilogue: Hovercast on Twitch!

Recently, Zach joined us on our Twitch channel to talk about the development of Hovercast, along with an upcoming demo.

For cutting-edge projects and demos, tune in every Tuesday at 5pm PT to twitch.tv/leapmotiondeveloper.

Zach Kinstner has been a software contractor and consultant for nearly a decade, specializing in projects that require a high degree of creativity and design. He runs Aesthetic Interactive, a one-man development company based in Grand Rapids, Michigan. You can see his portfolio at AestheticInteractive.com and follow @zachkinstner on Twitter.
