Signposts That Digitally Aid the Deaf

April 26, 2012 | Salute to Scholars, The University

By Cathy Rainone

Matt Huenerfauth’s Linguistic and Assistive Technologies Laboratory at Queens College is outfitted with spandex bodysuits with Wii-like sensors, spandex gloves lined with thin strips that register precise joint movements, and helmets containing eye trackers, the kind of motion-capture equipment you’d find in a Hollywood animation studio.

But Huenerfauth, associate professor of computer science and linguistics, is not making animations for video games or 3D movies. He uses the equipment to digitize the movements of people performing American Sign Language, a language distinct from English with different word order, grammar and vocabulary.
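To make the raw material concrete: each instant of a recording combines readings from all three devices at once. Below is a minimal sketch, in Python, of how one such frame might be represented; the field names and sensor channels are hypothetical illustrations, not the lab’s actual data format.

```python
# A hypothetical sketch of one frame of multi-sensor motion-capture data,
# mirroring the equipment described above: bodysuit joint rotations,
# glove flexion strips, and a head-mounted eye tracker.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class CaptureFrame:
    timestamp_ms: int                                   # when the frame was sampled
    body_joints: Dict[str, Tuple[float, float, float]]  # joint name -> (x, y, z) rotation from the bodysuit
    finger_flexion: Dict[str, float]                    # glove sensor strip -> bend angle in degrees
    eye_gaze: Tuple[float, float]                       # (horizontal, vertical) gaze angle from the eye tracker

frame = CaptureFrame(
    timestamp_ms=0,
    body_joints={"right_wrist": (12.0, -3.5, 80.1)},
    finger_flexion={"right_index_pip": 45.0},
    eye_gaze=(-2.0, 5.5),
)
```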

Video: http://www.youtube.com/watch?v=OWnPztWMpQc

There are more than half a million deaf people in the United States who are fluent in ASL, but half of deaf high school graduates read English at only a fourth-grade level. Much of the English text available online is too difficult for them to understand. Huenerfauth is hoping to improve access to websites for deaf people by creating software that can present information in the form of ASL computer animations.

“The state of the art in producing sign language animation is sort of where we were 30 to 40 years ago with computer-synthesized speech, to the point where it’s not really understandable at times for ASL users,” says Huenerfauth, who’s proficient in ASL. “They may not even be able to follow or understand what the message is, because problems with the speed at which the character moves, subtle ways in which the hand is turned, or the timing of something just make these messages not understandable.”

Creating more accurate ASL computer animations is an elaborate process. It’s not enough to simply record individual signs and string them together into a sentence, says Huenerfauth. ASL users convey important grammatical information with facial expressions, conjugate verbs with subtle hand movements and place the things they discuss at locations in the space around them. Huenerfauth analyzes the recorded movements of ASL users and looks for these patterns so that virtual human characters can perform more natural, realistic ASL.
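One way to picture why simple concatenation fails is to think of each sign as a token that carries more than a hand shape. The sketch below is a hypothetical illustration (the glosses, fields and location names are invented, not the lab’s actual representation) of a directional verb like GIVE being conjugated by the path it travels between two points in the signing space.

```python
# A toy sign token with manual and non-manual channels. In ASL, a verb
# like GIVE is inflected by the path its movement traces, not by affixes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignToken:
    gloss: str                               # e.g., "GIVE"
    start_location: Optional[str] = None     # point in space the movement begins at
    end_location: Optional[str] = None       # point in space the movement ends at
    facial_expression: Optional[str] = None  # e.g., "raised_brows" marking a question

# "John gives to Mary": the verb's path runs from John's point to Mary's.
sentence = [
    SignToken("JOHN", end_location="loc_A"),   # set John up at point A
    SignToken("MARY", end_location="loc_B"),   # set Mary up at point B
    SignToken("GIVE", start_location="loc_A", end_location="loc_B"),  # path A -> B
]
```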

“There are things that you can do in sign language that you can’t even do in written or spoken languages,” says Huenerfauth. “When you’re signing, you actually point to places around you in space and you set up people, places or things that you’re talking about at those locations, and the people you’re conversing with will have to remember where you’ve pointed to set up everything, and then they point to the same locations when they want to talk about those items later. For someone who’s used to communicating in that way, the differences between that and written languages can be really profound.”
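In software terms, that spatial bookkeeping resembles a small symbol table: establish a referent at a point in the signing space, then resolve later pointing signs against it. The sketch below is a toy illustration under that assumption; all names and coordinates are hypothetical.

```python
# A toy model of spatial reference in ASL: referents are assigned points
# in the signing space, and later pointing signs look those points up.
from typing import Dict, Tuple

Point = Tuple[float, float, float]  # (x, y, z) in the signing space

class SigningSpace:
    def __init__(self) -> None:
        self._referents: Dict[str, Point] = {}

    def establish(self, entity: str, location: Point) -> None:
        """Set up a person, place, or thing at a location in space."""
        self._referents[entity] = location

    def point_to(self, entity: str) -> Point:
        """Resolve a later pointing sign back to the stored location."""
        return self._referents[entity]

space = SigningSpace()
space.establish("MY-BROTHER", (0.4, 1.2, 0.5))  # sign the noun, then point right
space.establish("SCHOOL", (-0.4, 1.2, 0.5))     # second referent, on the left
target = space.point_to("MY-BROTHER")           # a pronoun is a point back to (0.4, 1.2, 0.5)
```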


A leading researcher in the field of computer accessibility and assistive technology for people with disabilities, Huenerfauth received a five-year Faculty Early Career Development Award from the National Science Foundation in 2008 to support his research on ASL. He recruits deaf people to record their ASL movements, focusing on one linguistic issue at a time, and deaf members of his research team facilitate these recording sessions. In the summer, students from deaf high schools in the area come to his lab as research interns to analyze the recordings and look for patterns.

“We rely on them because they have been signing their whole life,” says Huenerfauth. “They’ll catch little bits of slang or things that would go by too fast for someone else to catch. There’s slang and regional differences in ASL, there’s dialect and accents. All these variations that are out there for all these spoken languages are there in sign languages, too.”

Once the data is analyzed, Huenerfauth uses software to build a mathematical model that explains how ASL users handle a particular linguistic issue, such as where they pause while signing.
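For a pause model of that kind, one plausible shape (a simplified illustration, not the lab’s published model) is a logistic function whose inputs are the depth of the syntactic boundary between two signs and how long the signer has gone without pausing, with weights fit to the recorded human data.

```python
# A minimal sketch of a pause-insertion model. The features, weights and
# function form are assumptions for illustration; in practice the weights
# would be estimated from the lab's recordings of human signers.
import math

def pause_probability(boundary_depth: int, ms_since_last_pause: float,
                      w_depth: float = 0.9, w_time: float = 0.0004,
                      bias: float = -3.0) -> float:
    """Logistic model: deeper phrase boundaries and longer runs of
    uninterrupted signing both make an inserted pause more likely."""
    score = bias + w_depth * boundary_depth + w_time * ms_since_last_pause
    return 1.0 / (1.0 + math.exp(-score))

# Between two clauses (a deep boundary) after 4 seconds of signing:
print(round(pause_probability(boundary_depth=3, ms_since_last_pause=4000), 2))
```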

“It’s all about animations that match human patterns and coming up with a mathematical model that captures that pattern,” says Huenerfauth. “The output of our lab is publishing papers that describe these mathematical patterns so that anyone who wants to make animations of ASL can incorporate any of these patterns into their own software and thereby make their animations more natural and more understandable.”