Halo, what? Amazon made a remarkable and already controversial entry into the connected health market on Thursday, August 27, with a new bracelet. Called Halo, it resembles the Fitbit and other devices that monitor sleep and measure heart rate. But it introduces two features that have drawn strong reactions.

One of Amazon's main promises is that its bracelet will manage, thanks to artificial intelligence, to “read” its user's emotions. Boredom, elation, hesitation, joy and even confusion are among the feelings Halo's AI is supposed to recognize from the tone of voice.

Emotions, "complex materials to handle" for AI

What is the benefit for health? In theory, algorithmic recognition and analysis of emotions can “help people who are depressed or who have, for example, anger management problems”, notes Laurence Devillers, professor of artificial intelligence at Sorbonne University and author of “Emotional Robots: Health, Surveillance, Sexuality... and Ethics in All This”, contacted by France 24. A voice recording can serve as working material for the doctor following a patient, or even alert a person to their own emotional state.

But this technology is not yet 100% reliable. “Emotions are very complex material to handle [for an AI] because they depend on the context, the person, their culture and their social environment”, Laurence Devillers points out. To be as effective as possible, algorithms must therefore be finely tuned to the person whose emotions they are supposed to recognize. Generic solutions like the Halo bracelet can hardly claim to perceive vocal nuances that vary from one individual to another.

Amazon is aware of this and is careful not to present its bracelet as a medical device. Nor has the internet giant sought approval from the FDA (the US Food and Drug Administration), as Apple did for its connected watch. Halo merely aims to improve the “emotional well-being” of its owner.

Marketing tool?

Amazon's AI analyzes the “volume of speech, its intensity, its tempo and the rhythm of the voice to determine what emotions others can detect in the tone”, according to the press release presenting the bracelet. The idea is to educate users on how an audience perceives their speech, or on how sleep quality can affect their emotional state, the group told the Washington Post.
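Amazon has not published how Halo's model actually works, but the kind of low-level voice features its press release names (volume, intensity, tempo and rhythm) can be sketched in a few lines. The function and feature names below are hypothetical illustrations, not Amazon's pipeline:

```python
import numpy as np

def acoustic_features(signal, sample_rate):
    """Toy sketch of low-level voice features of the sort Amazon describes
    (volume, intensity dynamics, rhythm). Hypothetical feature set --
    not Amazon's actual algorithm."""
    frame_len = int(0.025 * sample_rate)          # 25 ms analysis frames
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))     # per-frame energy ("volume")
    return {
        "mean_energy": float(rms.mean()),         # overall loudness
        "energy_variation": float(rms.std()),     # intensity dynamics
        # share of frames with notable energy: a crude speech-rhythm proxy
        "voiced_ratio": float((rms > 0.5 * rms.mean()).mean()),
    }

# Synthetic 1-second "voice": a 200 Hz tone switched on and off like speech
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
voice = np.sin(2 * np.pi * 200 * t) * (np.sin(2 * np.pi * 3 * t) > 0)
print(acoustic_features(voice, sr))
```

A real system would feed features like these into a classifier trained on labeled speech, and, as Devillers notes above, its accuracy would depend heavily on the speaker and context.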

The premise of an AI on your wrist that judges whether you seem to be in a bad mood or not “enthusiastic” enough seems “straight out of a dystopia invented for an episode of Black Mirror [a British series on the dangers of technology, Editor's note]”, notes CNN.

Laurence Devillers also finds the usefulness of such a gadget very questionable for consumers... but much more obvious for Amazon. “It's an ideal marketing tool”, says the artificial intelligence specialist. With this type of data, the king of e-commerce could easily tailor its offers to the emotional state of its customers. Stephen Hall, a reporter for the technology news site 9to5Google, already imagines on Twitter receiving the following notification: “You seem tense today, which of these calming teas can we sell you?”

Not so, Amazon replies. Voice data is processed locally by the Halo smartphone app and will never be sent to a server. “No one will ever hear these recordings and you can erase them from the phone”, the statement says. But “Amazon's track record on privacy does not inspire much confidence”, notes the site Input Mag.

An AI that tracks body fat

Halo doesn't just guess at emotions. The app that accompanies the bracelet also invites users to take photos of themselves - lightly dressed, from the front, back and side - in order to estimate their body fat percentage (the share of total body weight made up of fat mass). Amazon claims this is a more relevant health indicator than weight or body mass index.
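To see why body fat percentage and body mass index can tell different stories, compare two hypothetical people of the same height and weight. The numbers below are invented for illustration; Amazon has not published its estimation method:

```python
def bmi(weight_kg, height_m):
    # Body mass index: weight divided by height squared
    return weight_kg / height_m ** 2

def body_fat_pct(fat_mass_kg, weight_kg):
    # Body fat percentage: the share of total weight that is fat mass
    return 100 * fat_mass_kg / weight_kg

# Two people of identical height and weight, hence identical BMI,
# but with very different body composition (illustrative figures)
print(bmi(80, 1.80))         # ≈ 24.7 for both
print(body_fat_pct(12, 80))  # muscular person: 15.0 %
print(body_fat_pct(28, 80))  # sedentary person: 35.0 %
```

BMI alone cannot distinguish the two, which is the argument Amazon makes for reporting body fat percentage instead.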

In addition to this indicator, the AI uses the photos to create a 3D rendering of the person's body and lets them see what they would look like with more or less fat. “It is a dangerous tool that risks perpetuating the culture of extreme dieting and the fear of gaining weight in the name of ‘health’”, several users protested on Twitter.

Please someone write about how the Amazon Halo is a dangerous tool that perpetuates diet culture and fatphobia in the name of "health." I just can't believe you can take pictures of yourself in yr underwear and MAKE YOURSELF LOOK THINNER WITH A SCROLL BAR https://t.co/2274hiZmtA

- Kate McKean (@kate_mckean) August 27, 2020

This combination of AI uses is “extremely intrusive and undesirable”, Laurence Devillers says bluntly. The approach is also particularly pernicious, since the various services are presented as fun features. “This can make people dependent on these technological tools to monitor their well-being and prepare them to accept improved versions that will collect more accurate data”, the scientist fears.

For her, Halo is a fresh illustration of the need to establish international standards for the ethical development of artificial intelligence. France, for its part, launched the Global Partnership on Artificial Intelligence with Canada last June. This international organization, which brings together around fifteen countries, is looking in particular at how to assess innovations like Halo. But it will have to act quickly, because Amazon's bracelet shows that these multinationals are not afraid to push consumers to put their “well-being” in the hands of AI.
