Google, Apple, Meta, Amazon & Microsoft Join To Improve Voice Recognition


Google announced that it is joining the Speech Accessibility Project in order to help develop advanced speech recognition systems that can serve the needs of people with impaired speech.

Speech recognition is used to access websites, translate speech, interact with voice assistants, and operate devices.

But voice-activated devices and services can struggle to work when a user’s speech pattern is affected by Lou Gehrig’s disease, Parkinson’s disease, or Down syndrome, among other conditions.

The project aims to change that by bringing together five technology companies to work on the shared challenge of making speech recognition work for people with non-standard speech patterns.

The project will first work with English and then expand to other languages.

The Speech Accessibility Project website explained:

“…without diverse, representative data, ML models cannot learn how to understand a diversity of speech. This project aims to change that by creating the dataset needed to more effectively train these machine learning models.”

New Project to Advance Accessibility

The Speech Accessibility Project is a new program from the University of Illinois and five technology companies working together to make voice-activated technology accessible to a wider group of people.

The following companies are members of the new initiative:

  • Amazon
  • Apple
  • Google
  • Meta
  • Microsoft

The project website stated the problem they will solve:

“Today’s speech recognition systems, such as voice assistants and translation tools, don’t always recognize people with a diversity of speech patterns often associated with disabilities.

This includes speech affected by Lou Gehrig’s disease or Amyotrophic Lateral Sclerosis, Parkinson’s disease, cerebral palsy, and Down syndrome.

In effect, many individuals in these and other communities may be unable to benefit from the latest speech recognition tools.”

Solution to Speech Recognition Accessibility

The Speech Accessibility Project will collect samples of different voice patterns and create an anonymized dataset.

This dataset will then be used to create machine learning models that can better understand the variety of voice patterns that are currently underserved.

Project Euphonia

Google launched its own AI-based accessibility initiative, Project Euphonia, in 2019. That project helped Google adapt speech recognition to understand non-standard spoken English.

Project Euphonia collected speech pattern recordings from over 2,000 participants.

One of Google’s contributions to the Speech Accessibility Project is making it easy for Project Euphonia participants to anonymously contribute their speech samples to the new initiative.

Google’s announcement stated:

“Our hope is that by making these datasets available to research and development teams, we can help improve communication systems for everyone, including people with disabilities.”

Advanced Speech Recognition

This new project is a milestone in the creation of technology that can serve those with non-standard speech patterns.

What makes this new project exciting is that all five technology companies will work together to solve speech recognition problems instead of working in separate silos.

Improving access to devices and the Internet for underserved communities benefits everyone.


Citations

Google’s Announcement

New ways we’re making speech recognition work for everyone

Project Website

Official Website of the Speech Accessibility Project

Featured image by Shutterstock/Krakenimages.com

