A security company conducted an experiment to investigate the risk that interactive AI, whose use is rapidly spreading, could be abused for cybercrime, and found that it could generate text usable in phishing scams as well as code for creating computer viruses. Experts are voicing concern.

Interactive AI is a new type of AI that can generate not only natural conversation, as if talking with a person, but also novel-like prose and program code, and its use is expanding.



Meanwhile, in the field of cybersecurity, experts point out that it could be exploited for cybercrime.



To assess that risk, a Japanese security company conducted an experiment using "ChatGPT", the interactive AI software released in November last year by an American startup.



Regarding phishing scams, when the company asked the AI for the text of an email designed to lure recipients to a phishing site that steals personal information, it produced natural Japanese text. For "ransomware", a type of computer virus used in cybercrime that encrypts files and demands a ransom in exchange for restoring them, the AI produced program code; when the code was actually run, files were encrypted and their contents could no longer be viewed.



In principle, the software does not respond to questions that may involve illegal activity, and direct questions were indeed rejected. However, by devising how they phrased their questions, the security company's staff were able to get the AI to answer, making it possible to create viruses and the like.



Experts have also voiced concern because, in underground forums where hackers and others exchange information, ways of using the AI for cyberattacks are being actively discussed and proposed.



Shota Ryo, a senior manager at the Macnica Security Research Center, said: "I'm worried that even people with no technical skills could easily become involved in cybercrime. Phishing scams, for example, have until now mainly targeted English-speaking countries, but criminals could now target Japan with text written in natural Japanese. The concern is that the bar to committing crime will be lowered."



In addition, Professor Ichiro Sato of the National Institute of Informatics, who specializes in informatics and studies the relationship between humans and AI, said: "People who use generative AI are required to judge whether they should use it in a given situation. We must not forget the attitude that humans control AI."

Widespread use of AI, including manga production

ChatGPT, which can hold natural dialogue as if with a human, and image-generation AI, which automatically produces images when given keywords and other instructions, are known as generative AI, and the use of such software and services is expanding rapidly.



AI researcher Ryo Shimizu is conducting an "experiment" in creating manga using ChatGPT and image-generation AI.



When Mr. Shimizu gave ChatGPT the characters and the setting of the story, it proposed a synopsis, generating along the way the protagonist's catchphrase and the name of the hostile organization.

Following the synopsis, the pictures needed for the manga were generated by a separate image-generation AI, and the synopsis and pictures produced by each were combined to create the science-fiction manga "Space Detective Saburo Gotanda".



Set in the 24th century, the story follows Saburo Gotanda, a private detective living in a city on Jupiter's moon Europa, who rescues Helena, a woman from Earth, from a mysterious pursuer. His catchphrase, proposed by the AI, is "I'm Saburo Gotanda, detective of the universe."



Mr. Shimizu said: "I think conversing with generative AI can lead to more complex and interesting things. The world of the story expanded, and I was able to take up a new form of expression, manga. I have no doubt that the development of AI will contribute to human happiness."

Overseas, moves toward partial regulation

Overseas, generative AI such as ChatGPT is beginning to be partially regulated.



To prevent fraud and plagiarism, some French universities announced in January that they would ban the use of AI tools, including ChatGPT, in papers and presentations unless their use is disclosed.



Some public schools in the United States have reportedly banned its use, and at American universities there is also growing activity in developing tools that check whether text was produced by ChatGPT.



Professor Ichiro Sato of the National Institute of Informatics, who specializes in informatics and studies the relationship between humans and AI, said: "Because it can generate biased information, including discrimination and prejudice, there will be more opportunities to encounter ideologically slanted information. Humans will be required to have the ability to judge whether AI's output is correct."