In groups, AIs create social norms just like humans do

Researchers have shown that when large language models interact, they converge on group solutions, giving rise to “social conventions” and even collective biases.
When artificial intelligences interact in groups, they exhibit characteristics similar to those of human societies, according to a study published on May 14 in Science Advances and covered by the journal Nature.
Human societies are shaped by social conventions—rules of conduct, whether formal or informal, that suggest the behaviors an individual should adopt in the presence of others. What about AI? This is the question a team of researchers asked. “At a time when more and more AI agents communicate using natural language, a key question arises: can they lay the foundations for a true society?” the authors explain in the study.
The researchers designed a series of experiments using different versions of Claude, the large language model (LLM) developed by the company Anthropic, and different versions of Llama, Meta's LLM. In the first two experiments, they had the LLMs play a game similar to one used in studies of human group dynamics, in which team members must name objects.
They found that after a while, the members of each group converged on a shared name for the objects: a social convention had emerged, without any central coordination.
Courrier International