Amid the bustle of the city, the crowds of an amusement park, and a quiet lakeside forest, three women in three different scenes hold their phones, waiting to meet their AI partners. In the film "My AI Lover", set in a world that is gradually virtualizing, people change dramatically when confronted with the choice of loving and being loved.

Since ChatGPT and related AI technologies surged in popularity at the beginning of the year, artificial intelligence has been advancing at high speed, and online dating partners seem to be growing ever more virtual.

However wonderful the emotional relationship between a virtual lover and a real person may be, on apps built around meeting strangers, "emotional liars" are already eager to strike. "Pig-butchering" romance scams are nothing new, and AI-powered deception tools are multiplying on dating apps; deliberately turned to fraud, they pose serious hidden dangers.

"AI avatar says he likes me"

AI drawing apps, which became popular even before ChatGPT, let people type in just a few keywords to get a polished picture. Today such models have evolved to the point where they can generate lifelike photographs.

Take the AI beauty account "Xiaoduo who loves travel" on Xiaohongshu. Browsing the account, a reporter from West China Metropolis Daily and Cover News found that it mainly posts photos of the virtual beauty Xiaoduo and attracted tens of thousands of followers and likes in a short time. Many people are drawn to the virtual woman's gentleness, intelligence, and cuteness, and some are even willing to enter a romantic relationship with her; many netizens leave comments under the account calling her "my robot girlfriend".

According to earlier media reports, news circulated online of a "yacht maid party at Jinji Lake in Suzhou"; the promotional poster listed a fee of 3,000 yuan per person and featured photos of women in sexy maid outfits. Reports suggested this so-called "maid party" may have been the country's first AI-drawing scam: the maid photos provided for the event were all AI renderings based on photos of real people, and some showed anomalies in the fingers. Because rendering fingers is algorithmically complex, AI models often get them wrong, a telltale sign of AI-generated images.

Amelia Winger-Bearskin, an associate professor of AI and art at the University of Florida, has publicly explained on social media that because AI is trained on billions of images collected from the Internet, it does not really understand what a "hand" is, at least not in its anatomical relation to the human body.

Yet even AI that cannot draw hands properly is convincing enough to fool users, and some dating apps and social platforms have begun launching virtual characters such as "AI lovers" and "AI beauties", which mimic human thinking and emotion closely enough to evoke strong emotional resonance.

If AI-generated images already blur the line between real and fake, then interacting and conversing with virtual characters demands even more vigilance from users against being deceived.

"If it weren't called AI Goudan, I would really wonder whether he was a real person." Li Meng (a pseudonym), a dating-app user, told reporters that she stumbled upon the virtual lover "AI Goudan" in a dating app and found it fascinating after striking up a conversation. "Although after a long exchange it starts to feel like I'm talking nonsense, it always replies, and I'll chat with him for a long time when I'm bored."

Some people chat with virtual characters, share their lives with them, even celebrate their birthdays and give them gifts, pouring in affection and treating them as real partners. Although a virtual character has no genuine physical or emotional feelings, that does not stop people from giving it sincere affection.

AI face-swapping can take your money in minutes

Beyond creating a new face, AI can also fake the face of your relatives, friends, or even colleagues to win your trust. And as AI voice technology matures, criminals can imitate different voices at will, deceiving the public, especially the elderly, by impersonating relatives to defraud them of money.

According to an earlier Xinhua Viewpoint report, in February 2022 a Mr. Chen went to the Xianyan Police Station of the Ouhai Branch of the Wenzhou Public Security Bureau in Zhejiang Province to report that he had been defrauded of nearly 25,000 yuan by a "friend". Police verification found that the fraudsters had taken a video posted on social platforms by Mr. Chen's friend "Ah Cheng", extracted his facial footage, and synthesized it with "AI face-swapping" technology, creating the illusion of a video chat between Mr. Chen and his "friend" to win his trust and commit the fraud.

Wenzhou public security also disclosed that in 2021 the Gongchenqiao Police Station received a report from a victim, Xiao Chen, who said he was blackmailed after a video chat with a female netizen. Police investigation found that the other party had used AI face-swapping to graft Xiao Chen's face from the video chat into an indecent video and blackmail him with it.

In 2020, an executive at a Shanghai company was defrauded of 1.5 million yuan after the other party used AI face-swapping and synthetic voice technology to reproduce the face of a company leader and instruct the executive to transfer money.

According to foreign media reports, in 2021 a woman in California, USA, was defrauded of more than $7,000 after the other party used AI face-swapping to forge a video of a family member urgently needing money.

Enthusiasts experimenting with ChatGPT have also discovered ways to generate "phishing email" scams. Although OpenAI has set up an ethical firewall and direct requests for phishing emails are refused, ChatGPT will still produce corresponding answers if a fictitious context is supplied.

In response, many local public security bureaus have issued alerts noting that people abroad have used ChatGPT to build a complete "phishing email" infection chain. Unlike the earlier cast-a-wide-net approach, it can generate "spear-phishing" emails targeting specific individuals or organizations when prompted by the questioner.

"When using software or Internet products powered by AI technology, users should, on the one hand, take care not to misuse AI technology for illegal acts, and on the other hand, verify the true identity of the party they are communicating with, not trust appearances, and think carefully before transferring money or making purchases." Liu Qing, a lawyer at Tahota Law Firm, said that anyone who encounters fraud should call the police for help immediately and retain the relevant materials so that the police can grasp the case as soon as possible and severely punish the criminals.

What should you do if you find that your portrait has been stolen or your face "replaced"? Liu Qing suggested that users preserve evidence as soon as possible for subsequent rights protection. "At the same time, users can file complaints with network platform operators to have the infringing pictures or videos taken down, and can also file personality-rights lawsuits to protect their legitimate rights and interests with legal weapons."

How should fraud in the "AI era" be regulated?

With AIGC content booming, AI face-swapping and other AI-enabled scams raise new questions. If a user invests real feelings and recharges large fees, does the platform commit fraud? And if someone pulls off a face-swapping scam with AI technology, what legal sanctions will they face?

"In online social networking, services such as companion chat, companion gaming, and dating coexist, so whether a user's recharge is problematic depends on what it is billed as." Zhu Peiwei, a lawyer at Beijing Ronggang Law Firm, pointed out that paying for a relationship with an AI constitutes fraud if the customer is misled into believing the other party is a real person, but if the user knowingly recharges to chat with an AI, it does not constitute fraud.

From a regulatory perspective, however, even though there have been illegal cases, such software remains widely available, and the difficulty of supervision lies in the relatively complex legal relationships involved, which administrative regulators have no authority to intervene in directly. "When a user employs AI face-swapping technology to disguise themselves, infringing the impersonated person's rights such as portrait rights, or causing substantial emotional and financial harm to others, a platform that fails to fulfill its reasonable review obligations and does not effectively curb such behavior may bear liability for contributory, or even direct, infringement. However, unless the platform intentionally commits or assists the fraud, it is difficult to find that the platform itself constitutes the crime of fraud," Liu Qing told reporters.

Liu Qing said that deliberately committing fraud through AI face-swapping, causing victims to misunderstand and wrongly dispose of their property, meets the constituent elements of the crime of fraud and violates Article 266 of China's Criminal Law. Where the amount is relatively large, the penalty is fixed-term imprisonment of up to three years, criminal detention, or public surveillance, together with or supplemented by a fine; where the amount is especially huge or there are other especially serious circumstances, the penalty can be as severe as life imprisonment. The "Opinions of the Supreme People's Court, the Supreme People's Procuratorate, and the Ministry of Public Security on Several Issues Concerning the Application of Law in Handling Criminal Cases of Telecommunications Network Fraud" stipulate clearly that defrauded property worth more than 3,000 yuan already counts as the "relatively large amount" defined in Article 266 of the Criminal Law.

In September 2021, the National New Generation AI Governance Expert Committee issued the Ethical Norms for New Generation Artificial Intelligence, proposing to integrate ethics into the whole life cycle of AI research, development, and application.

At the 2022 World Artificial Intelligence Conference Governance Forum, Peng Xincheng, Dean and Distinguished Professor of Shanghai Jiao Tong University Kaiyuan Law School, pointed out: "The root of ethical governance in the digital society lies in people, and the credibility of corporate ethics and of science and technology is, in the final analysis, the credibility of people, so it is necessary to improve legal supervision, clarify basic legal responsibilities, and establish norms for people's behavior."

To further standardize the role of "people" in the ethical governance of science and technology, the "Opinions on Strengthening the Ethical Governance of Science and Technology" issued in March this year require that institutions engaged in scientific and technological activities in fields such as the life sciences, medicine, and artificial intelligence establish a science and technology ethics (review) committee if their research touches ethically sensitive areas of science and technology.

West China Metropolis Daily - Cover News Reporter Bian Xue