Decision maker instead of Terminator: Why we often have a false picture of artificial intelligence
A too-clever computer strives for world domination with killer robots: many Hollywood plots still run on that premise. Experts warn that this distorted image of artificial intelligence can have serious consequences.
In the cinema or on the couch, artificial intelligence often comes across as a malevolent computer program that becomes a deadly threat. In the "Terminator" movies, Skynet sends killer machines; in "2001: A Space Odyssey" the on-board computer tries to kill the astronauts; in "Ex Machina" a mistreated sex robot takes revenge.
There are also nice robots: Data on the "Enterprise" or the holographic Doctor on "Voyager". Unfortunately, such human copies are equally unsuited to giving viewers a reasonably realistic picture of artificial intelligence. Anyone who sees only clever humanoids or nasty killer programs ends up with a distorted picture.
Researchers and science fiction writers explained why this is a problem at the "South by Southwest" technology festival in Austin. Artificial intelligence is everywhere there, whether in Mars missions, cancer research, or online shopping: computers learn and make decisions that a human would not have arrived at.
Who gets insurance, who does not get credit?
This is already happening, for example with YouTube's or Netflix's automated recommendations, machine translation, or support chatbots: machines instead of people make the decisions. That has little to do with the Terminator. Still, the technology is new, and there are unresolved questions and pitfalls.
"Only half of what researchers think about shows up in movies and TV shows," says Christopher Noessel. He works at IBM on Watson, an artificial intelligence that plays "Jeopardy" or supports doctors in making diagnoses. Noessel has analyzed 147 films and series as well as 68 manifestos from researchers and companies. He found parallels, and gaps.
Important questions are virtually absent from the fictional material: for example, how decisions made by artificial intelligence can be made comprehensible, so that decisions by software (who gets insurance, who is denied credit, who receives a new heart) can be reviewed and corrected. Also on researchers' agendas, but missing from fiction: laws and standards to regulate the new technology.
Rehearsals for the Hollywood emergency
The science fiction writer Cory Doctorow sees filmmakers and storytellers as bearing responsibility: "The future depends on what we do." Science fiction should not be confused with fortune-telling. "How we tell stories has a big impact." Science fiction does not show what exactly will happen, but what could happen.
He illustrates the influence of Hollywood stories on politics using encryption: when encryption is cracked in seconds on screen, it whets politicians' appetites and leads to bad ideas such as demands for backdoors. How films and series portray artificial intelligence directly shapes how society perceives the new technology.
Disaster films are even worse: as soon as something happens, the people on screen turn on one another and order collapses. In reality, most people behave in exactly the opposite way; yet politicians increasingly prepare for the Hollywood emergency. Soon with artificial intelligence, too?
Washing machines instead of robots
Malka Older, author of the acclaimed cyberpunk novel "Infomocracy", therefore calls for a variety of stories and a closer look. Science fiction has the power to change social ideas lastingly. Her example: the kiss between Uhura and Kirk, a Black woman and a white man, in the television series "Star Trek".
Older worries that even the news is increasingly told as stories modeled on Hollywood. Using images from "Terminator" to illustrate articles (as done here) reinforces the notion of an artificial intelligence that feels like a human and has its own consciousness. Her hope: over time, general knowledge about artificial intelligence will grow. For this reason, Older does not write about "artificial intelligence" in her books; the clever functions are, after all, already built in everywhere. "We don't call washing machines robots either," she says.
Artificial intelligence is often full of prejudice, reports civil rights and AI expert Rashida Richardson: certain groups of people rarely appear in the data with which the programs are trained, and the programmers themselves are a homogeneous group of white men. Among other things, this has led to facial recognition working poorly for Black people.
A term like a stun grenade
This is also evident in Hollywood: on movie posters, artificial intelligences are often humanoid robots with white skin, male killing machines, or female sex slaves.
The distorted image of AI also leads to excesses like this: experts took a closer look at 2830 European start-ups that supposedly use artificial intelligence. For 40 percent of them, the claim was either untrue or grossly exaggerated. The bluff works especially well when general understanding is low.
But Hollywood is not the only one with catching up to do. A possibly disturbing observation by Noessel: in the manifestos of researchers and companies he examined, there was hardly a word about autonomous weapon systems. Which brings us back to the Terminator.