'ChatGPT' From OpenAI Allows Script Kiddies To Create Malware Quickly And Effortlessly

Technology has led to many societal changes, and it can make lives better, or worse.

One of the countless technologies humans have created is AI. On one hand, this technology allows automation of computation far beyond its original intention. On the other hand, it also opens the gate to whole new kinds of malicious deeds never encountered before.

'ChatGPT' is OpenAI's AI capable of a wide range of tasks, including writing poetry, technical papers, novels, and essays.

Users can even teach it to learn about new topics.

But one thing it is not supposed to be able to do is write malware.

Researchers at security firm Check Point Research reported that within a few weeks of ChatGPT going live, people in the underground world of cybercrime had begun experimenting with the technology.

And here, the researchers realized that even those with little to no coding experience could use ChatGPT to write functional programs.

In other words, with ChatGPT, even script kiddies are able to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks.

A cybercriminal showing how he created an infostealer using OpenAI's ChatGPT. (Credit: Check Point Research)

In a post on their website, the researchers said:

"It’s still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web. However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code. CPR will continue to track this activity throughout 2023."

The first case the researchers discovered was on December 29, 2022, when a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum.

The thread author said that he was experimenting with ChatGPT to recreate malware strains and techniques described in various research publications and write-ups about common malware.

As an example, he shared the code of a Python-based information stealer that searches for common file types, copies them to a random folder inside the Temp folder, compresses them, and uploads them to a hardcoded FTP server.

The author also managed to use ChatGPT to create a Java snippet that downloads PuTTY, a very common SSH and Telnet client, and then runs it covertly on the system using PowerShell.

The second example is a malicious author who posted a Python script, which he dubbed a multi-layer encryption tool.

The author explained that OpenAI gave him a “nice [helping] hand to finish the script with a nice scope.”

Another example is ChatGPT-assisted code for automating an online shop that trades compromised account credentials, payment card data, malware, and other illicit goods or services.

Proof of how the malicious author created a Java program that downloads PuTTY and runs it using PowerShell. (Credit: Check Point Research)

AI has become more and more capable of understanding the context of the world. But to some degree, it still misses many points, and issues remain.

When OpenAI launched GPT-2, its own researchers were wary of it. And when GPT-3 was launched, many in the AI community could not have been happier.

This is because GPT-3 has a startling ability to produce text that sounds like it was written by a human.

And while the community was waiting for the successor to GPT-3, OpenAI was still trying to fix some issues, and instead announced ChatGPT.

The AI is adapted from OpenAI’s GPT-3.5 model, but trained to provide more conversational answers.

Whereas GPT-3 in its original form is able to predict what text follows any given string of words, ChatGPT tries to engage with users’ queries in a more human-like fashion.
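To illustrate that difference, here is a minimal sketch using OpenAI's Python library: the first call asks a plain GPT-3 completion model to simply continue a string of text, while the second frames the same request as a conversation to the gpt-3.5-turbo chat endpoint, the kind of model behind ChatGPT-style answers. The model names and the chat endpoint are assumptions for illustration only; the hosted ChatGPT product itself had no public API when this article was published.

import os
import openai  # assumes the pre-1.0 "openai" Python package

openai.api_key = os.environ["OPENAI_API_KEY"]

# Plain GPT-3 style: the model predicts the text that follows the prompt.
completion = openai.Completion.create(
    model="text-davinci-003",  # assumed GPT-3 completion model
    prompt="The capital of France is",
    max_tokens=10,
)
print(completion.choices[0].text)

# ChatGPT style: the request is framed as a conversation, and the model
# answers the user's message rather than merely continuing the string.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed GPT-3.5 chat model
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(chat.choices[0].message["content"])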

In addition to creating malware, several malicious actors have opened discussions in other underground forums that focus on using ChatGPT for different schemes, like pairing it with another OpenAI technology, DALL·E 2, and selling the products online through legitimate platforms.

In another example, a threat actor explained how to use ChatGPT to generate an e-book or a short chapter on a specific topic and sell it online.

Published: 09/01/2023