Analyzing the emotions of its counterpart: the Pepper robot "Luna"
Pepper speaks, sings and dances, and its audience on the couch – three elderly women plus nursing staff – bobs along enthusiastically. The scene comes from a recording by the German broadcaster NDR. Pepper, however, is not a human but a robot that probes the emotions of its counterpart. Heartbeat, blood pressure, facial expression, speech patterns, eye movements: these are just a selection of the biometric data that a so-called emotional AI like Pepper can analyze to draw real-time conclusions about the emotional state of the person in front of it.
Experts like Kenza Ait Si Abbou (42) warn against dismissing the cute-looking robots as a gimmick. After all, the aim is not only to analyze people's emotions and behavior but also to influence them. The IBM manager writes books, gives lectures and discusses the topic on various talk shows. Before her role as a senior manager in the field of artificial intelligence (AI), she was responsible for the topic at Deutsche Telekom. In view of the developments in emotional AI, she speaks of an epochal shift in the relationship between humans and machines: "Now we have an object that behaves like a subject."
Emotional AI is already being applied in sectors as diverse as medicine, marketing and the automotive industry. It is used, for example, in autism therapy. In cars, AI is meant to make autonomous driving safer in the future by also evaluating the driver's emotions. Ait Si Abbou sees two clear directions ahead: "One is in the customer care sector, the other in the medical sector." According to the market research agency Markets and Markets, the market is expected to grow to around 42.9 billion US dollars by 2027.
Big tech companies are getting involved in the market
The leading providers of emotional AI include large tech companies such as Google, Microsoft, NEC and IBM. Microsoft, for example, completed its acquisition of Nuance in 2022. The AI company is active in the health sector, among other areas, and has also developed an interactive driving assistant that analyzes drivers' emotions and is intended to respond to them. In 2016, Apple acquired the Californian company Emotient, which specializes in recognizing emotions in facial expressions.
However, the market also includes start-ups such as the American recruiting platform HireVue, the Indian market research platform Entropik Technology, the British company Realeyes, which measures customer reactions for companies, and Affectiva. The American start-up has been part of the Swedish tech company Smart Eye since 2021; its main customers are car companies and advertising agencies.
In Germany, the start-up Retorio, among others, caused a stir when it received a seven-figure capital injection in February 2021. The company has developed an AI for human resources that analyzes and scores videos of applicants based on their facial expressions, body language and speaking behavior. According to the company, its investors include venture capital funds such as Conviction VC, Basinghall Partners and Sofia Angels Ventures.
The risks of emotional AI
But the risks associated with emotional AI are considerable. The technology leaves room for abuse and manipulation – and it can discriminate. In an article for the scientific journal "AI & Society", the researchers Peter Mantello and Ho Manh-Tung describe how the algorithms rarely take cultural or gender-specific factors into account, which can lead to bias problems. The AI thereby distorts reality and disadvantages certain people or groups.
For example, a study at the University of Maryland's business school in the United States examined how AI evaluated the facial expressions of black and white football players. One piece of software consistently rated black players as angrier than white players; another rated them as showing more contempt. Depending on where the AI is applied in practice, such assessments can have serious consequences – from a job rejection to imprisonment.
Critical areas of application can be found, for example, in the field of security policy. According to the BBC, the Chinese government tested its use on the Uyghur minority, which is brutally oppressed in China: Emotional AI is said to have been used in lie detectors at police stations. With the "iBorderCtrl" project, the EU tested lie detectors at four border crossings in Greece, Latvia and Hungary to check those entering the country.
In addition, the researchers see a data protection problem: emotional AI processes sensitive biometric data, which in turn could serve as a basis for inferring characteristics such as gender and age.
Controversial scientific basis
Moreover, emotional AI is anything but the exact, objective, incorruptible science it is usually portrayed as. "This is pseudoscience," says Sandra Wachter, professor of technology and regulation at Oxford University and a doctor of law. "It is not scientifically possible to say what is going on emotionally in another person. You can measure certain parameters, but in reality that doesn't really say anything."
The two researchers Mantello and Manh-Tung also point out that the leading companies in the field rely on a theory by the psychologist Paul Ekman that assumes emotions are universal. Put simply, this approach assumes that an emotion such as "surprise" evokes similar facial expressions in all people – and is therefore measurable. According to the two researchers, however, this theory has long been refuted.
From a neuroscientific perspective, however, it is quite conceivable that machines could recognize emotions from movements, behavior and language, says neuroscientist and author Henning Beck: "This can certainly be more accurate than an assessment by humans."
Given the speed at which some AI products are coming onto the market, AI expert Ait Si Abbou favors regulating products with emotional AI. "In the race for the best innovations, some companies develop AI products and solutions too quickly. In some cases, products have been launched that have not been properly thought through." In saying this, she is also alluding to the discriminatory aspects of AI. "In Germany, we are less agile. But as far as this topic is concerned, that's an advantage, because so much thought goes into it and the quality standards are so high."