What are the AI models that “think” like humans and develop collective intelligence

Recent research by Google on AI models from DeepSeek and Alibaba Cloud found that powerful reasoning models capable of “thinking” manifest an internal cognition similar to the mechanisms underlying human collective intelligence.

The results, published Thursday, suggest that diversity of perspectives, not just computational scale, is responsible for the growing “intelligence” of AI models, while also underscoring the growing importance of Chinese open-source models to cutting-edge interdisciplinary research in the US, writes the South China Morning Post.

By experimenting with DeepSeek’s R1 and Alibaba Cloud’s QwQ-32B models, the researchers found that these reasoning models generate internal debates among multiple agents, which they called “thinking societies,” where the interplay of distinct personality traits and domain expertise leads to superior capabilities.

“We suggest that patterns of reasoning establish a computational parallel to collective intelligence in human groups, where diversity enables superior problem solving when systematically structured,” the researchers stated in the article published on the open-access online portal arXiv.

The study, which has not yet undergone peer review, was carried out by four researchers from Google’s “Paradigms of Intelligence” team, which explores the nature of intelligence through interdisciplinary methods.

Junsol Kim, a doctoral student in sociology at the University of Chicago, led the study, and Blaise Agüera y Arcas, a vice president at Google, is listed as the final author.

Reasoning models capable of “thinking” through tasks have become the dominant type of core AI system since OpenAI, the developer of ChatGPT, introduced its o1 series of models in September 2024.

Such models, designed to “think” by using more computational resources at runtime, have helped significantly increase AI capabilities while reducing the cost of “intelligence,” according to the research firm Artificial Analysis.

The Google researchers based their conclusions on an analysis of the Chinese models’ “reasoning traces” – the step-by-step intermediate output that reasoning models generate before the final answer, first shown to users when Hangzhou-based startup DeepSeek launched its first R1 reasoning model a year ago.

The models’ reasoning traces mimic “simulated social interactions,” including questioning, perspective-taking and reconciliation, the researchers explained. When the models were prompted to be more conversational with themselves, the accuracy of their reasoning improved.

These findings could change the way AI models are conceptualized, from “solitary problem-solving entities” to “architectures of collective reasoning, where intelligence emerges not from size alone, but from the structured interaction of distinct voices,” the researchers added.