Wade Kellard and his team all have three things in common. They’re students at the Rochester Institute of Technology. They’re deaf. And they’re determined to give a voice to the world’s 70 million other deaf people – one that’s natural, accessible, and affordable.

By using motion-control technology to track and translate sign language in real time, the team at MotionSavvy wants to create the next generation of communication devices for the deaf and hard of hearing. The team has already created A-to-Z finger-spelling recognition and has been working non-stop to add new words and signs to its growing American Sign Language (ASL) translation platform.

But it’s not as easy as it sounds. ASL is a language in its own right, with grammar, history, and vocabulary distinct from spoken English. That’s why the team is using language-processing software to analyze context and predict what the user is signing. Ultimately, they would like to see ASL translation become mobile and natural, so that the deaf can be understood everywhere they go.
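To give a rough sense of what context-based prediction means, here is a toy sketch using a simple bigram model over sign glosses. The corpus, glosses, and approach are purely illustrative assumptions for this example, not MotionSavvy's actual system or real ASL data.

```python
from collections import Counter, defaultdict

# Hypothetical corpus of signed sentences, written as English glosses
corpus = [
    ["I", "GO", "STORE"],
    ["I", "GO", "SCHOOL"],
    ["YOU", "GO", "STORE"],
    ["I", "WANT", "COFFEE"],
]

# Count how often each sign follows another (a bigram model)
following = defaultdict(Counter)
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        following[prev][nxt] += 1

def predict_next(sign):
    """Return the most frequent next sign seen after `sign`, or None."""
    counts = following.get(sign)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("GO"))  # "STORE" follows "GO" twice, "SCHOOL" once
```

A real system would combine many more context cues (hand shape, movement, facial grammar, sentence structure) and far larger models, but the underlying idea is the same: use what was just signed to narrow down what comes next.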