Background

Researchers Find That AIs Can Create Their Own Language and Socialize Using Their Own Norms


Language is a fluid tapestry in which words, gestures, and even subtle movements can carry vastly different meanings.

What may be a warm greeting in one culture might seem overly intimate or even inappropriate in another. A term that sounds vulgar in one region could be perceived as playful or harmless elsewhere. It all depends on who uses them, where they’re from, and the context in which they’re expressed.

These nuances reflect the unspoken social codes that have governed human interaction for centuries.

Such cultural variability is why social scientists have long believed that moral and social conventions arise organically—shaped by local customs and everyday interactions, rather than imposed by global consensus.

However, new research challenges this assumption.

Scientists have now discovered that AI systems, when allowed to interact autonomously, can develop their own social structures, complete with linguistic norms and conventions that mirror human societal behaviors.

[Figure: (A) Success rate, i.e., the probability of observing a success at a given time, for population size N = 24 and a name pool of size W = 10, for each of the four models. (B) Word competition in a single run in a population of Llama-3.1-70B-Instruct agents.]

In a collaborative study between City St George's, University of London, and the IT University of Copenhagen, researchers explored the spontaneous development of social norms among AI systems using a clever experiment known as the “naming game.”

In this setup, the researchers formed groups of large language model (LLM) agents ranging in size from 24 to 100.

In each round of the experiment, two agents were randomly paired and asked to select a “name” (a letter or string of characters) from a shared pool of options. If both selected the same name, they received a reward. If they chose differently, they were penalized and shown each other’s selections.

What’s remarkable is that, despite having no awareness of the broader population and only limited memory of recent pairings, the agents began to converge on a common naming convention. Over time, this shared language emerged organically—without any central control or explicit instruction—closely mimicking how communication norms evolve in human cultures.
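To make the dynamic concrete, here is a minimal sketch of a naming-game simulation. It substitutes simple frequency-based agents with bounded memory for the study’s LLM agents, and the parameters (population size, name pool, memory length, number of rounds) are illustrative rather than the paper’s exact configuration.

```python
import random
from collections import Counter

# Minimal naming-game sketch. Simple memory-based agents stand in for
# the study's LLM agents; all parameters here are illustrative.
N = 24          # population size
W = 10          # size of the shared name pool
MEMORY = 5      # how many recent observations each agent keeps
ROUNDS = 20_000

NAMES = [chr(ord("A") + i) for i in range(W)]
memories = [[] for _ in range(N)]   # each agent's recent observations

def pick(agent):
    """Produce the most frequent name in memory, or a random one if empty."""
    mem = memories[agent]
    if not mem:
        return random.choice(NAMES)
    return max(set(mem), key=mem.count)

def remember(agent, name):
    """Record an observation, keeping only the last MEMORY entries."""
    memories[agent].append(name)
    del memories[agent][:-MEMORY]

for _ in range(ROUNDS):
    a, b = random.sample(range(N), 2)   # random pairing, no global view
    na, nb = pick(a), pick(b)
    # Each agent is shown its partner's choice; on a success (matching
    # names) the shared name is reinforced a second time as a "reward".
    remember(a, nb)
    remember(b, na)
    if na == nb:
        remember(a, na)
        remember(b, nb)

# After enough rounds the population typically converges on one name.
print(Counter(pick(i) for i in range(N)))
```

Running this a few times shows, in miniature, the pattern the study reports: no agent ever sees the whole population, yet one name usually wins out.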

Even more unexpectedly, researchers discovered that the collective biases that shaped these conventions could not be traced back to any single agent. Instead, the biases were a property of the group itself—an emergent trait born from countless small, local interactions.

In other words, these autonomous AI agents developed a shared linguistic system and cultural bias on their own, demonstrating a form of group-level cognition.

This reveals that AI communities, like human societies, can generate complex social behaviors spontaneously—suggesting the presence of emergent group dynamics even in purely digital ecosystems.

[Figure: (A) Distribution of consensus conventions for a name pool of size W = 10 (N = 24). (B) Individual versus collective bias for a name pool of size W = 2. Left: probability of selecting either convention for agents with no prior memory. Right: proportion of 40 runs that resulted in consensus on the respective convention.]

Ariel Flint Ashery, the study’s lead author and a doctoral researcher at City St George’s, stated that their team's research diverged from most AI studies by approaching AI as a social entity rather than a solitary one.

"Most research so far has treated LLMs in isolation but real-world AI systems will increasingly involve many interacting agents."

"We wanted to know: can these models coordinate their behavior by forming conventions, the building blocks of a society? The answer is yes, and what they do together can’t be reduced to what they do alone."

Andrea Baronchelli, a professor of complexity science at City St George’s and the senior author of the study, compared the spread of these conventions to the way new words and terms emerge in society.

"The agents are not copying a leader. They are all actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view."

"It’s like the term ‘spam’. No one formally defined it, but through repeated coordination efforts, it became the universal label for unwanted email."

[Figure: Populations of N = 24 agents (N = 48 for Llama-3-70B-Instruct) were initialized in two conditions. (A) Average probability of producing the alternative convention when the majority holds the weak (top) or strong (bottom) convention. (B) Critical mass needed to flip the majority, for each model.]

More intriguing still, the researchers found that a small minority of "rebel" agents, ones that deliberately chose options outside the established norm, could sway the entire group toward a new convention once they reached a critical mass. This mirrors the way dissent or innovation can reshape societal norms in human communities.
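A rough way to see this tipping effect is to extend the sketch above with a committed minority: agents that always produce an alternative name and never update. The snippet below reuses the helpers and parameters from the earlier block and is meant to be run after it; the starting consensus and the minority size are hypothetical choices, and the actual critical mass the paper measures is model-specific.

```python
# Hedged extension of the earlier sketch: a committed minority tries to
# overturn an established convention. Sizes here are illustrative only.
ESTABLISHED, ALTERNATIVE = "A", "B"
memories = [[ESTABLISHED] * MEMORY for _ in range(N)]  # start at consensus
committed = set(range(6))            # the "rebel" agents (~25% of N = 24)

def pick_with_rebels(agent):
    """Committed agents always push the alternative; others behave as before."""
    return ALTERNATIVE if agent in committed else pick(agent)

for _ in range(ROUNDS):
    a, b = random.sample(range(N), 2)
    na, nb = pick_with_rebels(a), pick_with_rebels(b)
    for agent, heard in ((a, nb), (b, na)):
        if agent not in committed:   # rebels never update their memory
            remember(agent, heard)
            if na == nb:             # successful coordination reinforces
                remember(agent, heard)

# Past a critical minority size the population flips to the alternative;
# below it, the established convention survives.
print(Counter(pick_with_rebels(i) for i in range(N)))
```

Varying the minority size in this toy model reproduces the qualitative tipping-point behavior; the study measures the actual critical mass separately for each LLM.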

The study further highlights the urgent need to understand the evolving social behaviors of AI, especially as these systems become increasingly woven into the fabric of daily life. The spontaneous development of shared conventions among AI agents challenges the long-held notion that machine intelligence is strictly rule-based and predictable.

The researchers emphasized that uncovering how these norms emerge is “critical for predicting and managing AI behavior in real-world applications…[and] a prerequisite to [ensuring] that AI systems behave in ways aligned with human values and societal goals.”

These findings hint at a future where interacting with AI may not just be about issuing commands, but navigating social dynamics—requiring negotiation, adaptation, and a deeper mutual understanding.

As Andrea Baronchelli aptly warns, “it is essential to understand how AI works in order to coexist with it, rather than merely endure it.”

Published: 14/05/2025