Itadakimasu (Japanese for ‘Bon Appetit’) is a therapeutic VR experience that allows users to interact with animals through different hand gestures. The focus of this piece stems from research findings that animal-assisted therapy can help decrease anxiety and reduce blood pressure in patients.
Although the experience is simple in content, my intent is for it to serve as a short-term alternative for people in places where owning a pet is logistically difficult.
Ideation and Planning
The goal is to create an emotional response through interactions between the user and animals.
With only one month to work on this, I planned a timetable and prioritized the following:
- Code and perfect the interaction between the user’s hand gestures and the animals with the help of Leap Motion’s detection modules.
- Model, rig, and animate the animals. It is important that every motion and animation elicits an emotional response from users.
- Create an environment that uses motion to guide the users.
- Develop music and voice assets that help bring life to the environment and characters.
To me, it was most important to get the interactions right. Once that was achieved, I could then start playing around with the different types of animals and animations.
Interacting with Leap Motion
I chose to work with optical hand tracking using the Leap Motion Controller so that users could perform gestures that feel natural and familiar. Although an analog controller would have provided haptic feedback, the Leap Motion Controller made for a more personal and intimate experience.
Using Leap Motion’s detector scripts, I can easily detect what the user’s hands are doing. This can be anything from figuring out the palm direction to seeing if the fingers are curled or extended.
Combined with a Logic Gate, I can use a specific hand gesture to trigger a specific animation in the animals. This has been extremely useful in my process, as it made it easier to debug and test what was and wasn’t working.
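The detector-plus-logic-gate setup described above boils down to a simple pattern: each detector is a boolean predicate over the tracked hand, and a gate fires an animation only when all of its detectors are active at once. Here is a minimal Python sketch of that idea; the class names, the hand representation, and the “feed” animation are my own simplifications for illustration, not the Leap Motion SDK’s actual API.

```python
class Detector:
    """A predicate over the tracked hand, evaluated each frame."""
    def __init__(self, predicate):
        self.predicate = predicate

    def is_active(self, hand):
        return self.predicate(hand)


class AndGate:
    """Fires its callback once when every child detector becomes active."""
    def __init__(self, detectors, on_activate):
        self.detectors = detectors
        self.on_activate = on_activate
        self.was_active = False

    def update(self, hand):
        active = all(d.is_active(hand) for d in self.detectors)
        if active and not self.was_active:  # trigger on the rising edge only
            self.on_activate()
        self.was_active = active


# Example: an open palm facing up triggers a (hypothetical) feed animation.
palm_up = Detector(lambda h: h["palm_normal"][1] > 0.8)
fingers_extended = Detector(lambda h: all(h["fingers_extended"]))

triggered = []
gate = AndGate([palm_up, fingers_extended],
               on_activate=lambda: triggered.append("feed_animation"))

gate.update({"palm_normal": (0.0, 1.0, 0.0), "fingers_extended": [True] * 5})
```

Triggering on the rising edge is what makes this easy to debug: each gesture maps to exactly one animation event, so you can test detectors in isolation and gates in combination.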
Once this was achieved, I could begin to shift my focus to the animals themselves.
One of my biggest inspirations is taken from PARO, the therapeutic seal robot. PARO is an advanced interactive robot used in hospitals and extended care facilities in Japan and Europe. You can read more about it here.
I wanted to emulate that same sensation of joy through animation.
In addition to reusing the sloth from my previous work, I added a red panda, an otter, and a hedgehog. Rather than going for realistic animal behaviors, I wanted to place these characters in funny or unusual situations. For example, the red panda spends his time eating ramen, while the otter gets ready to jam on his clam guitar.
To provide clear feedback, the animals are highlighted whenever the user gazes at them. This lets the user know that they can start to perform an action.
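A gaze check like this can be approximated with a simple angle test: an animal counts as “gazed at” when the angle between the camera’s forward vector and the direction to the animal falls below a threshold. This is a minimal sketch of that test; the coordinates and the 15-degree threshold are illustrative values I chose for the example, not numbers from the project.

```python
import math

def normalize(v):
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def is_gazed_at(cam_pos, cam_forward, target_pos, max_angle_deg=15.0):
    """True when the target lies within max_angle_deg of the view direction."""
    to_target = normalize(tuple(t - c for t, c in zip(target_pos, cam_pos)))
    fwd = normalize(cam_forward)
    cos_angle = sum(a * b for a, b in zip(fwd, to_target))
    return cos_angle >= math.cos(math.radians(max_angle_deg))

# An animal dead ahead gets highlighted; one off to the side does not.
highlight_panda = is_gazed_at((0, 0, 0), (0, 0, 1), (0, 0, 5))
highlight_otter = is_gazed_at((0, 0, 0), (0, 0, 1), (5, 0, 0))
```

In Unity the same comparison is typically done with the camera transform’s forward vector; the dot-product form here is the underlying math.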
Environment as Onboarding
Rather than have the environment act as simple wallpaper, I wanted it to be central in guiding the user, so that a first-time user could see hints of what to do embedded in the background.
In order to do that, I made sure that instructions would ‘frame’ certain animals. For example, the three hand gestures are integrated as flyers that rest above the red panda and otter. This ensured that the discoverability of these actions was high.
Taking cues from the interior design concept of ‘vignettes,’ I grouped my environmental objects around each animal like a picture frame. Not only was each grouping pleasing to look at, it also conveyed the necessary information.
For the sloth, I chose a slightly different approach: a sushi conveyor belt sits in front of the sloth. Every so often, signs showing the same three hand gestures pass by along with the sushi.
The conveyor belt also guides the user’s eyes, so they can follow the sushi and take in the rest of the scene.
Overall, there were strong positive reactions to my piece (see featured video above). The Leap Motion Controller worked well, and everyone performed the gestures naturally. I feel this would have been a different experience if I had gone with a clunky controller.
I did notice that some of the participants assumed that gestures beyond the three would also get a reaction. Some of them waved, and a few even tried voice commands, saying “hiiiiiii.”
Another observation is that several people were not immediately aware they could turn around to see more animals to interact with. This has to do with the user only being able to see one animal at a time, with the others out of their peripheral vision. I feel that if the four animals were evenly spaced around the user, people would be more likely to notice the others and want to turn around. This also presents an opportunity next time to experiment with light, shadows, and sound to give users cues to turn around.
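Even spacing is easy to compute: place the animals at equal angular intervals on a ring around the user. A small sketch, using an illustrative radius rather than measurements from the actual scene:

```python
import math

def ring_positions(count, radius):
    """Positions on a circle around the user, at equal angular spacing."""
    positions = []
    for i in range(count):
        angle = 2 * math.pi * i / count  # e.g. 90-degree steps for 4 animals
        positions.append((radius * math.cos(angle), radius * math.sin(angle)))
    return positions

# Four animals, two meters out, at 0, 90, 180, and 270 degrees.
spots = ring_positions(4, 2.0)
```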
In the end, it was truly heartwarming to see most of the participants leave this experience with a laugh or smile on their face.
I would like to develop my skills in sound design. Right now, sound is more of an afterthought than a fully integrated part of the design. I would like to explore how sound can be used as a cue to direct the user’s attention.
Another area of improvement is making the environment more responsive. Several participants wanted to pick up the sushi, and I feel adding that interactivity would have made the experience more immersive.
This would not have been possible without Sergio Trevino, who graciously donated his time to help me code and understand the detector scripts.
Thank you to the wonderful Robert Ramirez for providing the music, Patty Metoki and Emily Okada for lending their incredible voice talents, Jerry Villagracia for audio support, James Chen for Unity support, Keiko Komada and Mai-Chi Vu for design support, Kerin Higa and Nikki Chan for editing, and Chris Iseri for coming up with the wonderful title.