Photo: in front of an Apple store. Norikazu Tateishi / AP / SIPA
Apple is looking to improve Siri, its voice assistant.
The American giant's teams want the voice-controlled application to understand as many people as possible, in particular those with speech impairments such as stuttering, according to information relayed from the Wall Street Journal, which reports that Apple will shortly release a document on the subject.
To improve how its assistant interacts with people with speech impairments, Apple built a database of 28,000 audio clips taken from podcasts. The clips contain speech impairments, which are analyzed to train Siri. When the voice assistant encounters a person who stutters, it will thus be able to interact with them.
Five speech problems studied
Five types of speech problems are taken into account: blocks, sound repetitions, word repetitions, interjections and prolongations.
Currently, only one feature accommodates people with atypical speech rates.
Called Hold to Talk, it extends Siri's listening time, so that a pause in speaking is not interpreted as the end of a voice command.
In this way, the voice assistant adapts.
Voice assistants from the other tech giants are also being trained to adapt to atypical speech.
On Amazon's side, Alexa users can turn to the Voiceitt application, developed by an Israeli start-up.
Like Apple, Google is testing a dedicated application to better understand atypical speech, so that users can communicate with Google Assistant and Google Home smart products.