Today I received an e-mail that I forwarded directly to our in-house IT security experts with the words: "This looks like a phishing attempt. Didn't open the attachment, could just be lousy PR."
The mail in question was properly worded, but the subject line was something along the lines of "Earn more money", paired with a strenuously humorous text that was presumably meant to entice the recipient into opening the attachment. The sender was apparently a company whose PR I had most certainly never asked for.
The answer from our in-house experts came quickly: "Harmless, stale PR measure. Not even worth opening the document." So at least no malware was hiding in the attachment.
This newsletter could now turn into a rant about PR mails from the recipient's point of view. But for one thing, I have written that rant before. And for another, there is a technical reason why I am particularly cautious right now: AI. Artificial, or in this case criminal, intelligence.
According to security researchers, software called WormGPT is currently making the rounds in underground forums. Like ChatGPT, the service generates text, but for criminals: ready-to-use phishing or scam emails in various languages. Experts at the California-based company SlashNext put the AI to the test and requested an email designed to pressure an unsuspecting accountant into making a fraudulent payment. They write: "The results were disturbing. WormGPT produced an email that was not only remarkably compelling but also strategically smart."
Of course, if I ran a company that thrived on other companies' fear of criminal hackers, that is exactly how I would put it. Still, the horror scenario of a flood of AI-generated phishing has been making the rounds for months, and it is plausible: if text generators can write polished thank-you notes, poems or presentations, they should also be able to write fraudulent messages, provided the prompts fit and the safeguards are missing.
With WormGPT, this scenario now has a concrete, if somewhat silly, name. The model is based on the open-source GPT-J and has allegedly been fine-tuned for criminal purposes. The filters that are supposed to prevent misuse of the technology in ChatGPT or Google's Bard are simply absent from such a model.
The next such tool might be called CriminaLLLama (or something like that) and be based on Meta's just-released code for its own language model, Llama 2.
Once such models are in circulation, they are likely to be more practical for fraudsters than, say, hacked access to the original ChatGPT, because generating a phishing email with OpenAI's software may require special prompts to outwit the protective filters.
So it is no wonder that WormGPT costs money: a hefty 60 euros per month, payable in cryptocurrency. By comparison, Microsoft plans to charge its corporate customers just $30 per user per month for its own AI. I would not dare to predict how many criminals will take out a WormGPT subscription and then manage to recoup that investment. At best, then, WormGPT is even more hype than ChatGPT.
Our current Netzwelt reading tips for SPIEGEL.de
"These are the people behind the AI revolution" (13 minutes of reading)
Without poorly paid clickworkers and content moderators in Africa, China, but also Germany, ChatGPT and other AI systems today would not exist. Some are now defending themselves against the miserable working conditions. Correspondent Heiner Hoffmann, Max Hoppenstedt and I let us have their say."A summer full of data leaks" (5 minutes of reading)
It's about at least six banks, Barmer and Verivox: Since May, customer data of numerous companies have fallen into the wrong hands. There is probably a connection between the cases, Markus Böhm and Torsten Kleinz explain."This phone gives glow signs" (6 minutes of reading)The Nothing Phone (2)
is not like other smartphones: its user interface is black and white, and when someone calls, LEDs flash on its back. Matthias Kremp took a look at whether this is now more than just a gimmick.
External links: Three tips from other media
»The Writers' Revolt Against A.I. Companies« (podcast, English, 28 minutes)
»New York Times« reporter Sheera Frenkel explains in the podcast why authors, comedians and actors in the USA are currently rebelling against AI.
»Why AI detectors think the US Constitution was written by AI« (English, 15-minute read)
»Ars Technica« explains in detail how detectors for AI-generated text work, and why they are so useless.
»Hacked: What happens after the cyber catastrophe« (podcast, 27 minutes)
The "cyber disaster" in Anhalt-Bitterfeld, a ransomware infection, happened a while ago, but this Tagesschau podcast retells the story vividly.
Have a good week
Patrick Beuth