Meta's LLaMA is a family of large language models (LLMs) designed to be highly efficient and competitive with other popular LLMs, such as OpenAI's GPT models that power the extremely popular ChatGPT.
Thanks to its open-access licensing and adaptability, LLaMA has been used for a broad range of tasks, especially since LLaMA 2 opened the door to commercial use.
Examples include, but are not limited to: chatbots and virtual assistants, companions for writing, blogging and creative work, code generation, software and game development, tutoring and language learning, academic research support, document analysis, product recommendations, home automation, and creative art projects. The list goes on.
And here, China has found yet another use for the AI: powering its military.
Back in 2019, China's National Defense White Paper, titled "China’s National Defense for the New Era (新时代的中国国防),” noted that modern warfare is shifting toward increasingly informationized and intelligentized domains, demanding advances in mechanization, informationization, and AI development.
Top Chinese research institutions linked to the People's Liberation Army (PLA) have reportedly used Meta's publicly available LLaMA model to create a specialized AI with potential military applications, according to three academic papers and analysts.
In a June paper, six researchers from three institutions—two of which are affiliated with the PLA’s primary research organization, the Academy of Military Science (AMS)—described how they adapted an early version of Meta’s LLaMA (specifically the 13B large language model) as a foundation for their AI system, dubbed "ChatBIT."
The researchers incorporated their own parameters, focusing on building an AI tool for intelligence gathering, analysis, and reliable operational decision-making in a military context.
The materials the project team fed the LLM included "book series on radar, electronic warfare and related literature collections," as well as more sensitive items, such as air combat records, weapons inventory set-up records and electronic warfare operation manuals.
The research team then optimized this ChatBIT AI specifically for military-related dialogue and question-answering tasks.
Most of the training materials were in Chinese, according to the researchers.
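The researchers have not published their training pipeline, but the general shape of this kind of supervised fine-tuning is well known: domain documents are broken into question-and-answer pairs and rendered into a fixed prompt template before being fed to the model. The sketch below is purely illustrative of that data-preparation step; the field names, prompt template, and example content are assumptions, not the researchers' actual setup.

```python
# Hypothetical sketch of instruction-tuning data preparation.
# All field names and the prompt template are illustrative.

def build_instruction_pairs(documents):
    """Turn domain documents with Q&A pairs into (prompt, response) records."""
    pairs = []
    for doc in documents:
        for question, answer in doc["qa_pairs"]:
            prompt = (
                f"### Instruction:\n{question}\n\n"
                f"### Context:\n{doc['title']}\n\n"
                "### Response:\n"
            )
            pairs.append({"prompt": prompt, "response": answer})
    return pairs

# A made-up, unclassified example document; real training corpora would
# simply be many more documents in the same shape.
corpus = [
    {
        "title": "Radar systems handbook (illustrative)",
        "qa_pairs": [
            ("What does pulse repetition frequency control?",
             "It sets the maximum unambiguous detection range."),
        ],
    },
]

training_pairs = build_instruction_pairs(corpus)
```

Records in this shape could then feed any standard instruction-tuning loop; the sensitive materials mentioned above would just be more documents in the same format.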
They found that the Meta-based AI performed better than some of the other AI models they tested.
In one example, they estimated that the LLaMA-based ChatBIT performed at roughly 90% of the capability of OpenAI's GPT-4.
However, the researchers did not detail their performance metrics or confirm whether the model has been deployed for active use.
Also worth noting is that the research project was jointly conducted by the Chengdu Aircraft Design Institute, under the Aviation Industry Corporation of China, and Northwestern Polytechnical University in Xi'an, Shaanxi province.
The institute in question is the designer of the Chengdu J-20. Also known as the "Mighty Dragon," the J-20 is China's first operational stealth fighter and reflects China's advances in aerospace technology and defense capabilities. It is comparable in purpose to the U.S. F-22 Raptor and F-35 Lightning II, designed primarily for air superiority but with additional potential for long-range strike missions.
Previously, LLMs were thought to be ill-suited to tasks like electronic warfare, primarily because of their difficulty in processing sensor-collected data effectively.
However, more recent developments show that this limitation can be mitigated with a novel approach.
In this case, the researchers fed a traditional reinforcement learning model large volumes of numerical data and had it process the raw input first, extracting key "observation value vector parameters," which a machine translator then renders into human language. The LLM then interprets and analyzes this translated data to formulate responses, which are converted back into commands for electronic warfare operations.
This process allows for the integration of LLMs in high-stakes settings: the reinforcement learning model’s data processing and the LLM’s interpretive capabilities combine to deliver output swiftly.
This in turn allows the AI to adjust attack strategies up to ten times per second.
Tests confirmed the potential of this setup, which enables the generation of multiple false radar targets for enemy forces, a tactic that is more disruptive than traditional noise interference or deflection methods.
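As a rough illustration of the pipeline described above, the sketch below stubs out each stage: a numerical front-end standing in for the reinforcement learning model, a templating step playing the role of the machine translator, and a trivial rule standing in for the LLM. Every function name, threshold, and command string here is hypothetical; the papers do not publish code.

```python
from statistics import mean

def extract_observation_vector(raw_samples):
    # Stand-in for the reinforcement learning front-end: condense raw
    # numerical sensor samples into a few key features.
    return {"mean_power": mean(raw_samples), "peak_power": max(raw_samples)}

def vector_to_text(obs):
    # "Machine translator" step: render the observation vector as a
    # human-language description the LLM can consume.
    return (f"Observed signal with mean power {obs['mean_power']:.1f} "
            f"and peak power {obs['peak_power']:.1f}.")

def llm_to_command(description):
    # Stub for the LLM stage: a real system would query a fine-tuned
    # model here; we apply a trivial rule to the translated text instead.
    peak = float(description.split("peak power ")[1].rstrip("."))
    if peak > 50.0:
        return "GENERATE_FALSE_TARGETS"  # hypothetical command string
    return "HOLD"

# One pass through the loop; a deployed system would repeat this many
# times per second on fresh sensor data.
samples = [12.0, 48.5, 90.2, 33.1]
command = llm_to_command(vector_to_text(extract_observation_vector(samples)))
```

The split between a fast numerical stage and a slower interpretive stage is what reportedly makes the high update rate feasible: the LLM only ever sees a short text summary, never the raw sensor stream.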
With that in mind, the involvement of the designers of China's J-20 stealth fighter in this AI research strongly suggests that generative AI holds far more potential than previously understood.
Nonetheless, practical challenges remain, including model size, chip limitations, and security concerns.
As one Beijing-based AI scientist put it, "[...] there's little doubt that 'words can kill' is evolving from a philosophical concept to reality."
Meta, which created the LLaMA models, allows anyone to use its AI, but with restrictions.
One use that violates its terms is military, warfare, and espionage applications, along with other sensitive activities governed by U.S. defense export controls.
The company requires that organizations with more than 700 million users obtain a license and mandates its models not be used for purposes like weapon development or content that promotes violence.
"Any use of our models by the People’s Liberation Army is unauthorised and contrary to our acceptable use policy," said a Meta spokesman.
"In the global competition on AI, the alleged role of a single and outdated version of an American open-source model is irrelevant when we know China is already investing more than $1 trillion to surpass the U.S. on AI."
However, because Meta has made the models openly available, its enforcement capabilities are limited: there is little the company can practically do once someone has downloaded its AI software.
It's worth noting that AI has helped militaries before, powering war rooms where it offers intelligence analysis or decision-making support to human commanders.
But this is the first publicly disclosed research that directly applies LLMs to weaponry.