<Anchor> As self-learning AI technology develops rapidly, new kinds of problems that did not exist before, like the one you just saw, are emerging.

This is not the fault of the AI itself; the responsibility lies with the people who build it, manage it, and steer it in the wrong direction.



Reporter Kim Ki-tae reports on how this problem can be solved.



<Reporter> In 2016, Microsoft launched the artificial intelligence chatbot 'Tay', only to shut it down after 16 hours.



Some far-right users repeatedly 'trained' Tay, which had a learning function, by feeding it questions designed to induce racist and sexist remarks as well as abusive language. As a result, Tay poured out inappropriate remarks.



'Iruda' is likewise designed around a 'deep learning' approach: it was trained on data from roughly 10 billion KakaoTalk conversations, and it continues to develop itself by taking in further conversations with users.



Algorithms are not neutral: which people an AI talks to, and what those conversations contain, decisively shape how it responds.



That is why Lee Jae-woong, former CEO of Socar, pointed out that "the problem is not the users who abuse AI chatbots, but the companies that release services falling short of social consensus," adding, "We need to be able to monitor whether AI services comply with minimum social norms."



[Attorney Koo Tae-eon/Lin Law Firm: If (such a result) was not intended, it is not a legal problem but one that needs to be corrected.

But if you keep neglecting it, that is a different matter.

Then responsibility follows.]



Last month, the Ministry of Science and ICT announced the country's first ethical standards for artificial intelligence, which include guaranteeing human rights and respecting diversity.



[Go Hak-su/President of the Korean Artificial Intelligence Law Association: This problem will not go away just because we add a few more taboos.

Deeper consideration of how (AI) technology will be used in society (has become necessary.)]

International AI ethics standards likewise emphasize 'preventing prejudice and discrimination', 'using representative, high-quality data', and developing intelligence that is beneficial to humans rather than undirected intelligence.



(Video coverage: Gong Jin-gu, video editing: Kim Joon-hee)    



▶ 'Iruda' pours out hate speech when asked about disabled and LGBT people