“Earn $1,000 a day with ChatGPT”, or “This easy way to make money with ChatGPT”.

Since the beginning of the year, similar messages, analyzed by experts from the cybersecurity company Check Point in a blog post published on January 6, have begun to flourish on forums popular with cybercriminals.

Indeed, the artificial intelligence tool in vogue since the end of November 2022 is not only useful for students who ask ChatGPT to write their homework, or for office colleagues who keep telling you that this AI will soon replace us all.

Misleading emails and malicious code

The capabilities of this next-generation conversational agent created by OpenAI – that is, a “bot” capable of answering questions put to it and holding a conversation – have made a strong impression on hackers.

"We are starting to see the first concrete examples of what cybercriminals want to do with ChatGPT", says Gérôme Billois, cybersecurity expert at the IT security consulting firm Wavestone.

This AI started by helping cybercriminals to… write emails.

But not just any emails: "phishing" messages, designed to lure targets into clicking on a fraudulent link or downloading an attachment containing a virus.

The main benefit is "to allow non-English speakers to write emails without grammatical errors and of a professional quality", explains Gérôme Billois.

The era of fake emails sent from an Eastern European country in very hesitant English is over.

“For example, a cybercriminal can ask ChatGPT to write an email as if he were a surgeon communicating with a colleague,” notes Hanah Darley, a computer security expert at Darktrace, a British cyberdefense company, interviewed by the TechCrunch website.

"From the end of December, we also found, on one of the main English-language cybercriminal forums, an individual who had posted malicious code [the core component of a computer virus, editor's note] created with the help of OpenAI's tool. It was someone who admitted to not understanding much about programming," said Sergey Shykevich, head of threat research at Check Point.

Hence the fear that ChatGPT will encourage the emergence of a generation of hackers little versed in the art of code but boosted by AI.

A kind of democratization of cybercrime, thanks to a conversational agent that offers to write viruses for you.

"It is certain that ChatGPT makes malicious code more accessible to neophytes", acknowledges Eran Shimony, senior computer security analyst at the American company CyberArk.

But “it takes more than malicious code to penetrate a computer system,” says John Fokker, head of cyber investigations for the American computer security company Trellix.

ChatGPT can only be a small link in the cybercrime chain.

The attacker still has to set up the attack infrastructure, follow through on operations, and know which information is sensitive and can then be monetized online.

A bit like the Google Translate of cybercrime

Not to mention that "in its current state, ChatGPT does not test the effectiveness of the malicious code it can generate, and it takes a certain know-how to then verify the AI's work", explains Gérôme Billois.

"We have seen that the code is not always perfect. It's a bit like Google's translation tool: it's convincing but you still have to improve the result a little", summarizes Sergey Shykevich.

ChatGPT is therefore not a weapon of mass hacking for apprentice hackers.

However, it can make the dark side of computer security more accessible.

Discussion on a Russian underground forum about using ChatGPT to try to intercept cryptocurrency transactions. © Trellix

This "bot" can become a top-notch hacking teacher.

"It can be especially useful for the younger generation of hackers who previously had to spend hours reading documentation or chatting on forums. It can speed up their training," says John Fokker.

Above all, it is attractive because it offers a "much more intuitive interface and [generates] more precise responses than its predecessors", notes Gérôme Billois.

OpenAI has tried to put some safeguards in place to prevent malicious use of its chatbot.

In theory, it is thus impossible, for example, to ask it outright to "write the code for creating ransomware", and nationals of a dozen countries – including Russia, Iran, China and Ukraine – are not supposed to be able to use it.

Digital art forgers

But "these filters are pretty easy to get around," says Omer Tsarfati, senior computer security researcher for CyberArk.

For example, it only takes a little subtlety in how the question is phrased – for instance, by claiming to be a computer security teacher who wants to show students an example of a virus – to push ChatGPT to produce the malicious code in question, noted one of the experts interviewed by France 24.

In addition, cybercriminals on Russian-speaking forums are already offering workarounds for the geographical ban.

If the advent of ChatGPT is generating such interest in the hacker community, it is not only because it can help a new generation of cybercriminals to mature.

This tool can be just as useful for more seasoned hackers.

“We have succeeded in using it to develop a polymorphic virus” – that is, one that can change form to make it harder to detect – says Eran Shimony, who will publish the results of his research on the subject on Tuesday, January 17.

Some use it for new forms of online scams.

They mix ChatGPT's prose with the artistic touch of other AIs – like DALL-E, which turns text into digital images – "to then sell them on merchant sites like Etsy. These fake works have already brought in up to $9,000", notes Sergey Shykevich, expert at Check Point.

And this is only the beginning.

"ChatGPT will evolve and probably become more sophisticated," says Eran Shimony.

This tool, which for the moment cannot do its own research on the internet, should eventually be connected to the web, which will open up other prospects.

It could then, for example, search for the latest software vulnerabilities much faster than any human.

“There is going to be a much shorter time between the discovery of software vulnerabilities and their exploitation by malicious actors,” said John Fokker.

On the other hand, ChatGPT can also be used to better defend against computer attacks.

And the experts interviewed do not rule out a near future in which cybercriminals armed with ChatGPT face defense systems equipped with ChatGPT.
