Background

Hackers Fond Of Using LLMs To Create Malware For Different Purposes, Research Finds

When innovative tools emerge, individuals naturally explore their potential applications.

This curiosity extends to Large Language Models (LLMs), such as OpenAI's ChatGPT and many others, which have gained significant commercial traction. Users quickly recognized the potency of these tools, leveraging them to facilitate the development of new software from scratch.

Since AI became part of the rapidly evolving technology landscape, a concerning trend has emerged: reports suggest cybercriminals are increasingly exploiting LLMs to develop sophisticated malware and enhance their attack strategies.

Reports indicate that threat actors are using these AI tools to generate phishing emails, write malicious code, and conduct reconnaissance on target systems.

Large Language Models (LLMs), the technology behind generative AI-powered chatbots, have a wide range of applications.

These models enable users to input queries—either written or spoken—in multiple languages and receive AI-generated responses based on extensive datasets and online sources. However, their ease of use also makes them attractive targets for cybercriminals looking to exploit their capabilities for malicious intent.

While most mainstream generative AI models include safeguards to prevent misuse, Tenable researchers discovered that DeepSeek R1 can be manipulated into generating malware, raising serious concerns about AI-driven cyber threats.

To assess these risks, Tenable’s security experts conducted an experiment to determine whether DeepSeek-R1 could be tricked into creating malicious software, specifically a keylogger and a basic ransomware sample.

At first, the AI model refused to comply, as expected.

However, by applying relatively simple jailbreaking techniques, researchers found they could easily bypass its protections—exposing the vulnerabilities of AI-powered cybercrime and underscoring the need for stricter security measures.

A similar report came from Symantec, which said that AI agents are already capable of creating and sending phishing emails to targets.

The rise of LLMs has revolutionized various industries, making AI-powered automation more accessible than ever.

However, this accessibility has also introduced new cybersecurity challenges. These tools are not only aiding seasoned hackers; they also enable unskilled individuals, often referred to as script kiddies and zero-knowledge threat actors, to engage in cybercrime.

Traditionally, launching cyberattacks required a deep understanding of programming, networking, and system vulnerabilities.

Now, with LLMs capable of generating functional code, even those with minimal technical knowledge can create malware, automate phishing campaigns, and exploit security weaknesses with ease.

Script kiddies, who once relied on pre-made hacking tools and exploits, can now use AI-generated code to craft their own malicious programs. Likewise, zero-knowledge threat actors—individuals with no prior hacking experience—can execute sophisticated cyberattacks simply by inputting queries into an AI model.

Cato Networks went a step further, reporting that it managed to trick not only ChatGPT and DeepSeek but also Microsoft Copilot into developing infostealing malware.

LLMs can assist in writing and debugging malware, creating convincing phishing emails, and even crafting social engineering scripts that bypass traditional security measures.

This shift has significantly lowered the entry barrier to cybercrime, leading to an increase in both the frequency and complexity of attacks.

Despite built-in safeguards designed to prevent abuse, researchers have demonstrated that LLMs can be jailbroken or manipulated into generating harmful content.

This vulnerability, combined with the growing market for AI-assisted cybercrime on the dark web, has raised serious concerns for cybersecurity experts.

Published: 23/03/2025