[Image: ChatGPT logo — millions use the service, but shouldn't trust it blindly. Photo: DADO RUVIC / REUTERS]
The U.S. Federal Trade Commission (FTC) has opened an investigation into OpenAI over a possible violation of consumer protection laws, according to a newspaper report. The agency believes the company's popular chatbot ChatGPT endangers the protection of personal data and people's reputations, the Washington Post reported. In a 20-page letter, the FTC has therefore requested information on how OpenAI handles the risks associated with artificial intelligence (AI). Neither the FTC nor OpenAI could initially be reached for comment.
So-called generative AI is trained on vast amounts of data, often collected from the Internet. This includes personal posts on platforms such as Facebook, Instagram, and Twitter. In addition, user inputs, known as prompts, feed into the further training of the AI models. For the same reason, Google, whose ChatGPT rival "Bard" is now also available in Germany, is facing a billion-dollar lawsuit in the USA. The plaintiffs accuse the Alphabet subsidiary of using personal and copyrighted information without permission to train its AI and are seeking at least five billion dollars in damages.
Reputational damage, machine-made
European regulators already raised concerns in the spring about the use of personal data in chatbots and other AI-based services. After a temporary ban in Italy, ChatGPT went back online there, but the European data protection authorities have yet to decide on the general permissibility of the service. A key point of the complaints: services such as ChatGPT spread incorrect factual claims, and cases of defamatory statements keep coming to light. The FTC is now demanding detailed information about how ChatGPT was trained and what safeguards OpenAI has put in place to stop potentially harmful false claims.
Although providers insist that their services do not necessarily reflect the truth, AI models are simultaneously gaining in importance, for example through integration into Microsoft's search engine Bing. The apparent eloquence of chatbots often misleads human users; even professionals cannot always distinguish fact from fiction. In June, for example, two lawyers in New York were fined for inserting unverified claims generated by ChatGPT into court filings.
The agency's head, Lina Khan, appointed by Joe Biden, takes a hard line against tech companies. Just in June, the FTC sued the data broker Kochava, which collects location data from smartphones in particular and makes it available for a variety of purposes. In its fight against Microsoft's takeover of game maker Activision Blizzard, however, Khan suffered a court defeat.
tmk/Reuters