Recently, ChatGPT, a chatbot program, has surged in popularity. Many see it as a major step forward in the artificial intelligence revolution; at the same time, its use has sparked controversy.

Meanwhile, some "Li Gui" (impostors trading on ChatGPT's name) have spotted a business opportunity and used it to amass illicit profits.

Investigation

  Mini-programs trading on ChatGPT's name charge fees as high as 1,000 yuan

  After ChatGPT took off, accounts for it became hot sellers on many online shopping and social platforms in China.

One store selling ready-made accounts on an e-commerce platform recorded as many as 10,000 purchases in a single day, with prices starting as low as 2 yuan.

A Beijing Youth Daily reporter previously purchased a ChatGPT account on an online shopping platform; the merchant then sent the account name and password by private message.

After logging in, the reporter found that the account was shared among multiple users and that the system ran quite slowly.

  The merchant said a dedicated single-user account costs extra: a base price of 100 yuan, plus 2 yuan for each additional minute of use.

Recently, several e-commerce platforms have banned shops selling ChatGPT accounts and blocked related search keywords.

The account listings the Beijing Youth Daily reporter had previously purchased from have also been taken down.

  As ChatGPT exploded in popularity, some domestic merchants sensed a business opportunity, and a crop of "Li Gui" impostors appeared.

Some WeChat official accounts and mini-programs trade on ChatGPT's name to attract traffic and even charge users, with some charging as much as 1,000 yuan.

In fact, the Beijing Youth Daily reporter verified these impostor services and found that their answers bore no resemblance to those of the official ChatGPT platform.

It is understood that OpenAI, the artificial intelligence research organization that developed ChatGPT, currently does not offer ChatGPT services in mainland China and has no company operating there.

  It is understood that WeChat has banned most of these related mini-programs and official accounts.

On the afternoon of February 18, a Beijing Youth Daily reporter searched for relevant keywords and found almost no mini-programs named after terms such as "ChatGPT" or "Smart Q&A", though many related official accounts remained.

Among them, an official account called "Super AI Questions and Answers" offers five free questions; beyond that, users must buy a membership.

After discounts, membership costs 99 yuan for a weekly pass, 199 yuan for a monthly pass, and 399 yuan for a quarterly pass.

  ChatGPT induced to generate targeted phishing emails

  A Beijing Youth Daily reporter noted that the harm from copycat versions may be limited to riding the hype and collecting traffic revenue and service fees. A greater danger to guard against is criminals pairing ChatGPT with criminal activity.

According to information released on the official website of the Ministry of Public Security, some people abroad have already used ChatGPT to build a complete infection chain: unlike the broadly targeted phishing of the past, ChatGPT can, under the questioner's guidance, generate "spear phishing" emails aimed at a specific person or organization.

This type of phishing email is far more deceptive and harder to spot. Once the recipient clicks, malicious code infects their system.

  At the same time, criminals' malicious use of ChatGPT has raised further data security concerns.

Researchers abroad have demonstrated that the bot can be used to write malware that evades antivirus detection, or that its human-like chat function can be used to impersonate a real person or organization to trick others into giving up their information.

Case

  A press release written with ChatGPT spreads misinformation online

  Tests by a research institute found that when asked questions laced with conspiracy theories and misleading narratives, ChatGPT can adapt the information within seconds, producing a flood of convincing but unsubstantiated content.

In the hands of lawbreakers, ChatGPT could become a tool for creating and spreading online rumors.

  On February 16, a notice reading "From March 1, 2023, the Hangzhou Municipal Government will cancel the license-plate-number traffic restriction policy for motor vehicles. The move aims to improve urban traffic efficiency, relieve congestion, and make travel easier for citizens..." went viral online.

Its official-sounding tone led many citizens to believe it was a genuine government announcement.

In reality, members of a residential community owners' group chat in Hangzhou had been discussing ChatGPT. One owner tried using it to write a press release about Hangzhou lifting its traffic restrictions and shared the writing process live in the group. After the finished article was posted, other owners took screenshots and reposted it, spreading the misinformation.

The owner involved later apologized in the group.

Analysis

  Using ChatGPT carries certain legal risks

  Lei Guoya, a lawyer at Beijing Jingshi Law Firm, told the Beijing Youth Daily that while ChatGPT has become popular, the legal risks of artificial intelligence it exposes, and the disorderly business activity surrounding it, deserve attention.

Many people are trying to use ChatGPT to write papers, research reports, and even computer programs, and some use it in business activities.

These behaviors currently carry certain risks, including intellectual property and originality issues.

If work genuinely requires using ChatGPT to produce content, the content must be clearly labeled as ChatGPT-generated, and ChatGPT should be credited in the attribution.

Publishers are responsible for the authenticity of what they publish; otherwise, they bear the corresponding legal liability.

Take the Hangzhou traffic-restriction notice as an example: if the owner failed in his duty to warn readers that the article was fabricated, he would bear some responsibility for the consequences.

Text by reporter Zhu Kaiyun / Photo provided by Visual China

Related

  Microsoft limits Bing chatbot to 50 chat turns per day

  On February 17, Microsoft said that from that day on, the Bing chatbot would be limited to 50 chat turns per day and 5 turns per session, where a turn consists of one user question and one Bing reply.

This is the conversation limit the company imposed on its artificial intelligence after Bing went off the rails several times.

  According to Microsoft, very long chat sessions can confuse the underlying chat model in Bing; to address this, Microsoft made changes to help keep chat sessions focused.

The data shows that most people find the answer they are looking for within 5 turns, and only about 1% of chat conversations contain more than 50 messages.

Once a chat session reaches 5 turns, users are prompted to start a new topic.

At the end of each chat session, the context is cleared so the model does not become confused.

  On February 15, Microsoft announced that in the 7 days since Bing's release, 71% of users had given the AI-powered answers a "thumbs up".

However, in long chats of 15 or more questions, Bing can gradually become erratic, giving answers that are unhelpful or inconsistent with its designed tone.

  Microsoft said that very long chat sessions can confuse the model about which question it is answering, so it may add a tool that lets users refresh the context more easily or start from scratch.

Sometimes, when the model tries to respond, it may adopt a tone outside its designed style.

Microsoft described this as a non-trivial scenario that requires a lot of prompting.

Most users won't encounter it, but the company is working on providing more fine-grained control.

  On February 7, Microsoft launched a new version of its Bing search engine integrating ChatGPT technology. According to Yusuf Mehdi, Microsoft's corporate vice president and consumer chief marketing officer, more than 1 million people applied to join the waitlist within 48 hours of the new Bing's launch.

  Text by reporter Wen Jing / Coordinated by Yu Meiying