"It can function as everything from writing support to classifying texts and helping with programming," says Ariel Ekberg, researcher at AI Sweden, a national centre for applied AI that is both privately and publicly funded.

The model was developed in a research project and trained on the Swedish-language internet, drawing on sources such as the digital scientific archive DiVA, the healthcare site 1177, the pharmaceutical database FASS and the Swedish Literature Bank, but also on open forums such as Flashback.

"Our ambition has been to include as much as possible, and that may mean some controversial material. But we don't think it's our place to make censorship decisions. We believe those are best made once you know what the models will be used for," says Ariel Ekberg.

No need to train away prejudices yet

Chat SW3 is a base model that could be used both in the public sector and as a foundation for commercial products.

Commercial chatbots such as ChatGPT and the search engine Bing have previously been criticized for generating racist and otherwise inappropriate text. But training prejudices out of the AI at this foundational stage is not relevant, according to Ekberg.

"Both Bing and ChatGPT are further developments and products built on top of language models. When developing such products, it is very important to check what they say and to understand whether they have any bias. But we're one step before that, so it's not something we need to work on actively."