OpenAI CEO Sam Altman: The role of his company's tools in elections is likely to be closely watched

Photo: PATRICK T. FALLON / AFP

"We want to make sure that our technology is not used in a way that could undermine the democratic process": with statements like this, OpenAI on Monday presented its plans for adapting its AI offerings ahead of the 2024 election year. Among other things, the company promises a stronger focus on accurate election information and more transparency in general.

In the course of this year, elections of global political importance will take place in several countries, including the USA, India and Great Britain. There are many predictions about how large and direct an influence AI tools such as ChatGPT and image generators such as Dall-E 3 or Midjourney will have on such votes. Some experts worry about a flood of fake content being generated online during election campaigns, while others fear that chatbots could spit out false information about elections that users then take to be true. Still others consider the influence of the new AI tools on people's voting behavior to be overestimated, at least for the time being.

OpenAI itself emphasizes that it is still in the process of finding out "how effective our tools can be for persuading people." Until the company knows more, it does not want to allow anyone to use its technology to build applications for political campaigning and lobbying. Applications that deter people from participating in democratic processes are also banned, according to OpenAI.

Different departments work together

The company, led by Sam Altman, writes that in view of the upcoming elections there is a cross-departmental effort under way that brings together the expertise of different teams. The blog post leaves open, however, how large this effort is and where it sits within the company.

Specifically, OpenAI wants to prevent, for example, "scaled influence operations" and chatbots that falsely give the impression that users are talking to a candidate or an institution such as a local authority. Tools for improving the factual accuracy of AI systems, which OpenAI has been working on for years, are presented as a good foundation for election integrity. The in-house image generator Dall-E 3, for example, has "guardrails" that make the tool refuse to create images of real people such as political candidates.

OpenAI has further plans for Dall-E 3. According to the blog post, a technical approach from the so-called Coalition for Content Provenance and Authenticity (C2PA) is to be implemented there early this year. It is intended to make it easier for internet users to find out where a piece of content comes from. In addition, OpenAI says it is working on its own tool for detecting content generated with Dall-E models. In the past, however, tools such as OpenAI's AI Text Classifier, which was supposed to help identify AI-written text, did not deliver particularly convincing results.
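To illustrate the idea behind C2PA provenance: for JPEG images, the C2PA specification embeds a signed "manifest store" inside APP11 marker segments as JUMBF boxes labeled "c2pa". The sketch below, a simplified illustration and not OpenAI's implementation, merely checks whether such a segment is present; real verification additionally requires validating the cryptographic signatures with a full C2PA library.

```python
def has_c2pa_manifest(data: bytes) -> bool:
    """Roughly detect an embedded C2PA manifest in a JPEG byte stream.

    Walks the JPEG marker segments; C2PA manifests are carried in
    APP11 (0xFFEB) segments as JUMBF boxes whose label contains "c2pa".
    This checks for presence only; it does not verify signatures.
    """
    if not data.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost sync with the marker structure
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI / start of scan: no more metadata
            break
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with C2PA label
            return True
        i += 2 + length
    return False
```

A practical detector would hand the file to a proper C2PA verifier instead of pattern-matching, since provenance metadata can be stripped or forged; the point of the standard is the signature chain, not the marker itself.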

For its popular chatbot ChatGPT, OpenAI announces that users with questions about certain US voting procedures will be directed to the official website CanIVote.org. OpenAI intends to apply the lessons from its work in the USA to other countries and regions as well.

mbö