AI 'swarms' could fake public consensus and quietly distort democracy, Science Policy Forum warns

WRITTEN BY
Maria A. Ressa, Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay J. Van Bavel, Sander van der Linden, and Jonas R. Kunst
January 27, 2026

MANILA, Philippines — A new Science Policy Forum article warns that the next generation of influence operations may not look like obvious “copy-paste bots,” but like coordinated communities: fleets of AI-driven personas that can adapt in real time, infiltrate groups, and manufacture the appearance of public agreement at scale.

Writing in Science, the authors describe how the fusion of large language models (LLMs) with multi-agent systems could enable “malicious AI swarms” that imitate authentic social dynamics and threaten democratic discourse by counterfeiting social proof and consensus. Maria Ressa, Nobel Peace Prize laureate and The Nerve's head of global strategy, is one of the article's authors.

The article argues that the central risk is not only false content, but synthetic consensus: the illusion that “everyone is saying this,” which can influence beliefs and norms even when individual claims are contested. According to the authors, this risk compounds existing vulnerabilities in online information ecosystems shaped by engagement-driven platform incentives, fragmented audiences, and declining trust.

The authors define a malicious AI swarm as a set of AI-controlled agents that can maintain persistent identities and memory; coordinate toward shared objectives while varying tone and content; adapt to engagement and human responses; operate with minimal oversight; and deploy across platforms. Compared with earlier botnets, such swarms could be harder to detect because they can generate heterogeneous, context-aware content while still moving in coordinated patterns.

"The next few years will be decisive in whether we succeed in combating the next generation of AI-driven influence operations designed to damage and influence societies and democracies," says researcher Daniel Thilo Schroeder of SINTEF.

Instead of moderating posts one by one, the authors argue for defenses that track coordinated behavior and content provenance: detect statistically unlikely coordination (with transparent audits), stress-test social media platforms via simulations, offer privacy-preserving verification options, and share evidence through a distributed AI Influence Observatory — while also reducing incentives by limiting monetization of inauthentic engagement and increasing accountability.

"The danger is no longer just fake news, but that the very foundation of democratic discourse — independent voices — collapses when a single actor can control thousands of unique, AI-generated profiles," says Professor Jonas R. Kunst from BI Norwegian Business School.

The article, titled “How malicious AI swarms can threaten democracy,” is authored by Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay J. Van Bavel, Sander van der Linden, and Jonas R. Kunst. The report was published in Science on Thursday, January 22.
