Professor develops TalkMotion app for those unable to speak

While working as an assistant professor in the computer science department at Loyola University Chicago, Mark Albert was approached by a student who is unable to speak, inspiring him to develop technology that could help her communicate more easily.
When Hannah Thompson was born, she experienced brain damage that led to cerebral palsy and the onset of dystonia, a movement disorder that causes her limbs to move involuntarily. Thompson said a tablet used as a communication device can often be difficult for her to navigate because of her motor disabilities.
“The time it takes to speak is frustrating,” Thompson stated. “The physical exertion can be straining on my body. I can accidentally hit the wrong quick response and be embarrassed because it’s an inappropriate response. You can imagine the countless other ways it can be frustrating.”
Albert, now an assistant computer science and engineering professor at the University of North Texas, worked alongside a team of researchers on campus led by Ph.D. student Riyad Bin Rafiq to develop the TalkMotion app. TalkMotion can recognize gestures performed by the user to communicate words or phrases. A prototype of the app is available in the Google Play store, but it is still in development.
“When [Thompson] came to me, she wanted help, she wanted a computer scientist [who] could make her tablet easier to use,” Albert said. “You could go that route, but I was working in wearable devices and inertial motion sensors, and these technologies meant we could set something up with enough time and effort to respond to her movement more naturally.”
Currently, the app has 11 preset gestures. Albert said the goal is to create 50-100 gestures, and ultimately program the app so the user can create their own gestures that respond to their movements. Several sign language translation apps exist, but Albert said this is not an option for Thompson.
“[Thompson] is cognitively 110 percent there, like really funny, really sweet, really smart, but she can’t move her fingers to sign, that’s just not in the cards,” Albert said. “She has a different style of movement, a very spastic movement, so we would want [the app] to trigger based on her style of movement.”
In the future, the team hopes to integrate the app with a smartwatch. Rafiq said he hopes this can be finished in a year to a year and a half.
“Right now we have the first prototype deployed, and currently we are focusing on making it more personalized,” Rafiq said. “We are trying to make it to where users can make their own custom gestures to communicate. The system will learn their gestures and they can use it later to communicate.”
Albert said the team is currently applying for grants from the National Science Foundation, which would be the first step in developing the science needed to make the app work. Once they are confident the technology works and addresses a human need, it would become a National Institutes of Health-funded effort, and the researchers hope to eventually partner with a company to bring the product to market, Albert said.
“What’s unique about this problem is it’s a narrow group,” Albert said. “There’s a limited group of people who are unable to speak and have limited mobility in their ability to sign. It’s not purely just engineering, there’s some fundamental science that needs to be addressed in how these learning systems perform.”
Thompson said she is hopeful about the future of the project and cannot describe how much easier her life would be with quicker communication.
“When Dr. Albert said he would help, I was elated but well aware it would take time,” Thompson stated. “I felt seen and heard. We are on the right path — computer scientists have a moral obligation to keep people who have disabilities in mind so this population can reach their highest potential.”