Author: Fan Xuehan

  At the opening ceremony of the 2023 World Artificial Intelligence Developers Pioneer Conference, Royal O'Brien, director of the Open Metaverse Foundation, singled out security when talking about ChatGPT: "We must have rules and guardrails in place to ensure that people who do bad things have limited impact."

  The word "safety" was mentioned repeatedly during the conference.

  Although ChatGPT has been in the public eye for only a short time, the threat it poses to network security and data security has already become a focus for the industry.

For a ChatGPT that, so far, still "talks nonsense with a straight face," is such worry unfounded?

  Security threats have occurred or are occurring

  Tan Jie, Chief Technical Advisor of Fortinet North Asia, told Yicai Global that the threat AI technologies such as ChatGPT pose to network and data security is already materializing.

  Tan Jie said that although ChatGPT itself cannot directly attack networks or data, its ability to generate and understand natural language means it can be used to forge disinformation and mount social-engineering attacks.

In addition, attackers can use natural-language prompts to get ChatGPT to generate attack code, malware, spam, and the like.

AI therefore lets people who would otherwise be unable to launch an attack generate one, and it greatly increases the success rate of attacks.

  Tan Jie told reporters that, powered by automation, AI, "attack-as-a-service," and similar technologies and business models, cyberattacks have been skyrocketing.

Even before ChatGPT became popular, there were already many cyberattacks in which hackers used AI technology.

In fact, it is not uncommon for users to steer an artificial intelligence "off the rails." Six years ago, Microsoft launched the chatbot Tay. At launch Tay was polite, but within 24 hours "she" had been led astray by malicious users and swore constantly; her remarks veered into racism, pornography, and Nazism, and were full of discrimination, hatred, and prejudice.

The "little girl" had to be taken offline, ending her short life.

  A risk that sits closer to ordinary users is that, when using AI tools such as ChatGPT, they may inadvertently feed private data into cloud-hosted models, where it can become training data or surface in answers given to others, creating data breaches and compliance risks.

  Silicon Valley media reported that Amazon's corporate lawyers said they had found text "very similar" to company secrets in content generated by ChatGPT, possibly because some Amazon employees had entered internal company data when using ChatGPT to generate code and text. The lawyers worried that the information entered could end up as training data for future iterations of ChatGPT.
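
  To make the leak path concrete: the only dependable control is to strip sensitive material before a prompt ever leaves the company. Below is a minimal Python sketch of that kind of client-side redaction; the patterns and the redact function are illustrative assumptions, not Amazon's or any vendor's actual tooling.

```python
import re

# Illustrative patterns only; real deployments layer on entity
# recognition, secret scanners, and policy allow-lists.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with placeholders
    before the prompt is sent to an external model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Server 10.0.3.7 rejects key sk-abcdef1234567890abcd; contact ops@example.com"
print(redact(prompt))
# -> Server [IP] rejects key [API_KEY]; contact [EMAIL]
```

Pattern matching alone misses most proprietary text, which is why the compliance risk described above is hard to engineer away entirely.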

  Enterprises move to build security barriers

  Because artificial intelligence iterates at an astonishing speed, it may keep outrunning people's understanding of it. During the World Artificial Intelligence Developers Pioneer Conference, many industry professionals told reporters that the industry should take this issue seriously and start building the corresponding security technologies and systems immediately.

  Some companies are already thinking about how to counter the threats that artificial intelligence and ChatGPT-like applications may bring.

At the Artificial Intelligence Developer Pioneer Conference, Tian Feng, president of the SenseTime Intelligent Industry Research Institute, revealed that the company will launch an open AI security platform.

  The platform aims to deliver a trusted, full-stack AI security service that is widely verified and widely used. For deep-model security, it provides model "health checks" (covering adversarial security, robustness, and backdoor assessment) as well as open-source defense solutions.

For content-creation protection, the platform provides digital watermarking and uses data-poisoning techniques to keep image data from being exploited by deep generative models.
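
  SenseTime has not detailed the platform's internals here, but the watermarking idea itself is easy to sketch. The Python example below hides a short owner identifier in the least-significant bits of an image's pixels; the LSB scheme and function names are illustrative assumptions, and production invisible watermarks are engineered to survive cropping and re-encoding, which this toy version would not.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, message: str) -> np.ndarray:
    """Hide `message` in the least-significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = pixels.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small for this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bytes: int) -> str:
    """Read back `n_bytes` of hidden message from the LSBs."""
    bits = pixels.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes().decode()

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed_watermark(img, "owner:alice")
print(extract_watermark(marked, len("owner:alice")))  # -> owner:alice
```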

  As early as April 2020, RealAI, a company incubated by the Institute for Artificial Intelligence at Tsinghua University, launched the first tool platform for detecting and hardening the security of AI algorithms in extreme and adversarial environments: the RealSafe artificial intelligence security platform.

  Subsequently, in June 2021, RealAI, together with Tsinghua University and Alibaba Security, launched a benchmark platform for adversarial AI attack and defense.

The platform is mainly used for automated and scientific evaluation of AI defense and attack algorithms.
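
  To give a flavor of what such a benchmark exercises, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the standard attacks any adversarial benchmark would include; the PyTorch model and data below are toy placeholders, not the platform's actual code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()  # nudge pixels to raise the loss
    return x_adv.clamp(0, 1).detach()    # keep pixels in the valid range

# Toy evaluation: how far does accuracy fall under attack?
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(16, 1, 28, 28)            # placeholder "images"
y = torch.randint(0, 10, (16,))          # placeholder labels
clean = (model(x).argmax(1) == y).float().mean()
adv = (model(fgsm_attack(model, x, y)).argmax(1) == y).float().mean()
print(f"accuracy: clean {clean:.2f} -> adversarial {adv:.2f}")
```

A benchmark platform runs batteries of such attacks against candidate defenses and scores both sides on the resulting accuracy gap.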

  During the "2022 World Artificial Intelligence Conference - Trusted AI Forum" in Shanghai, China Academy of Information and Communications Technology, Tsinghua University, and Ant Group jointly released the AI ​​security detection platform "Ant Jian", which provides AI model developers with comprehensive solutions from model confrontation testing to defense reinforcement. A one-stop evaluation solution that helps developers identify and mine model vulnerabilities with one click.

  In addition, the reporter learned from industry insiders that the strategy of using AI to fight AI is also popular: many companies are embedding AI engines in firewalls, WAFs, EDR, NDR, sandboxes, SIEM, and other products to strengthen their threat detection and response capabilities.
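
  "Embedding an AI engine" in a firewall or NDR product often comes down to unsupervised anomaly detection over traffic features. The sketch below illustrates the idea with scikit-learn's IsolationForest; the three flow features and all the numbers are invented for illustration and do not reflect any vendor's design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy flow features: [bytes sent, duration in seconds, distinct ports touched]
rng = np.random.default_rng(0)
normal = rng.normal([5_000, 2.0, 3], [1_000, 0.5, 1], size=(500, 3))
scan = np.array([[200, 0.1, 900]])  # a port-scan-like outlier

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(scan))        # [-1] -> flagged as anomalous
print(detector.predict(normal[:3]))  # mostly [1] -> treated as normal
```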

  Regarding these forward-looking moves by universities and enterprises, Jiang Ying, chairwoman of Deloitte China, told reporters that in technological innovation generally, technology runs ahead while governance and regulation tend to lag behind; for now, however, concerns about the safety and ethics of artificial intelligence appear to have run ahead of the technology itself.

  Speaking about the prospects for products like the open AI security platform, Jiang Ying said the market related to AI security will be very broad, but what needs to be faced more squarely at present is the shortage of development talent.