Google wants people with Down syndrome to record themselves speaking to help train its AI to recognize unique speech patterns
- Google is asking people with Down syndrome to "donate" recordings of their voice to help train its voice-recognition software.
- Voice technology has historically struggled to understand people with unique speech patterns, like those with Down syndrome.
- Google is seeking 500 unique speech samples, and is already more than halfway toward reaching its goal.
Voice computing is the future of tech - devices like smart-home systems and internet-enabled speakers are leading a shift away from screens and towards speech.
But for people with unique speech patterns, these devices can be inaccessible when speech-recognition technology fails to understand what users are saying.
Google is aiming to change that with a new initiative dubbed "Project Understood." The company is partnering with the Canadian Down Syndrome Society to solicit hundreds of voice recordings from people with Down syndrome in order to train its voice-recognition AI to better understand them.
"Out of the box, Google's speech recognizer would not recognize every third word for a person with Down syndrome, and that makes the technology not very usable," Google engineer Jimmy Tobin said in a video introducing the project.
Voice assistants - which offer AI-driven scheduling, reminders, and lifestyle tools - have the potential to let people with Down syndrome live more independently, according to Matt MacNeil, who has Down syndrome and is working with Google on the project.
"When I started doing the project, the first thing that came to my mind is really helping more people be independent," MacNeil said in the announcement video.
Google is aiming to collect 500 "donations" of voice recordings from people with Down syndrome, and is already more than halfway toward its goal.