A senior official of the EU (European Union), who will attend the G7 (Group of Seven) Digital and Technology Ministers' Meeting opening on the 29th, told NHK that "it is very effective to think about how to use" ChatGPT and other generative AI based on risk, and expressed hope that countries will find common ground for appropriate use.

Speaking ahead of the meeting was Margrethe Vestager, Executive Vice-President of the European Commission, the EU's executive body, who is in charge of AI and digital policy.

Regarding the rapid spread of generative AI such as ChatGPT, Ms. Vestager said, "If we are careful about the ways it can be misused, it will be a great opportunity. The importance of safeguards is growing so that it can be used correctly, in line with ethics and values."

She also noted that a bill to regulate the use of AI is under discussion in the EU, saying, "Rather than regulating the technology itself, we are trying to establish rules on how AI is used so that people are not discriminated against," and said she hopes to see the bill enacted as soon as possible.

Regarding the G7 meeting, she said, "What matters is that we respond to AI even if our approaches differ, and I think it is very effective to think about how to use AI based on risk. Otherwise, people may come to see AI as too dangerous to use," expressing hope that countries will find common ground for appropriate use.

EU regulatory bill classifies AI risks into four groups

The European Commission, the executive body of the EU (European Union), submitted a bill to regulate AI to the European Parliament and member states in April, and discussions toward its passage are still ongoing. The bill classifies AI into four groups according to risk.

The riskiest AI: Trustworthiness scoring by public authorities, etc.

AI in the riskiest category, "unacceptable risk," will be banned outright because it infringes fundamental human rights.

Specifically, this covers AI used by public authorities to assess and classify people's trustworthiness, and AI used in facial recognition technology to monitor people in public spaces.

High-risk AI: Evaluating people in entrance exams, recruitment, etc.

The second-riskiest category, "high-risk" AI, includes AI used by schools and companies to evaluate people in entrance exams and hiring, and AI used by banks to decide whether to grant loans.

AI in this category is subject to a range of conditions: it must be trained on appropriate data and checked for accuracy so that it does not produce biased decisions, records must be kept, and human oversight is required.

AI with limited risk: Conversational AI and more

Third, "limited risk" AI is subject to transparency obligations: conversational AI must disclose to users that they are interacting with an AI, and when AI is used to create images that closely resemble real people, that fact must be clearly stated.

All other AI is classified as "minimal risk" and is not subject to new regulation.

If AI services are provided within the EU, they will be subject to regulation

If they provide services using AI within the EU, businesses outside the EU, including in Japan, will also be subject to the regulation. Violators face fines of up to 30 million euros (just over 4.46 billion yen) or 6 percent of worldwide annual turnover, whichever is higher.