In brief

  • Researchers warn that AI swarms could coordinate “influence campaigns” with limited human oversight.
  • Unlike traditional botnets, swarms can adapt their messaging and vary behavior.
  • The paper notes that existing platform safeguards may struggle to detect and contain these swarms.

The era of easily detectable botnets is coming to an end, according to a new report published in Science on Thursday. In the study, researchers warned that misinformation campaigns are shifting toward autonomous AI swarms that can imitate human behavior, adapt in real time, and require little human oversight, complicating efforts to detect and stop them.

Written by a consortium of researchers, including those from Oxford, Cambridge, UC Berkeley, NYU, and the Max Planck Institute, the paper describes a digital environment in which manipulation becomes harder to identify. Instead of short bursts tied to elections or politics, these AI campaigns can sustain narratives over extended periods.

Daniel Thilo Schroeder, a research scientist at SINTEF, a Norwegian research organization, said that while full autonomy remains a technical hurdle, AI-driven influence operations are already becoming more sophisticated.

“We’re already seeing AI used in influence operations for content generation, persona management, and targeting, and early forms of coordinated, multi-account activity,” Schroeder told Decrypt. “What’s still hard is fully autonomous, long-horizon orchestration at scale: maintaining many credible personas over time, coordinating across platforms, and adapting robustly without human oversight. That said, ‘human-in-the-loop’ swarms (where AI does most of the work but humans supervise key steps) are feasible now, and the autonomy level is likely to increase quickly as agent tooling improves.”

A swarm is a group of autonomous AI agents that work together to solve problems or complete objectives more efficiently than a single system. The researchers said AI swarms build on existing weaknesses in social media platforms, where users are often insulated from opposing viewpoints.

Most platform defenses, however, are still optimized for content and obvious spam rather than for coordination and behavior, Schroeder said.

“As a result, adaptive many-to-many operations can evade detection by varying language while still moving in lockstep behaviorally,” he said. “A second major gap is transparency and access: without consistent, privacy-preserving data access and independent auditing, it’s very difficult for external experts to detect or measure coordinated operations reliably.”

That shift is already visible on major platforms, according to Sean Ren, a computer science professor at the University of Southern California and the CEO of Sahara AI, who said that AI-driven accounts are increasingly difficult to distinguish from ordinary users.

“I think stricter KYC, or account identity validation, would help a lot here,” Ren told Decrypt. “If it’s harder to create new accounts and easier to monitor spammers, it becomes much more difficult for agents to use large numbers of accounts for coordinated manipulation.”

Earlier influence campaigns depended largely on scale rather than subtlety, with thousands of accounts posting identical messages simultaneously, which made detection comparatively straightforward. In contrast, the study said, AI swarms exhibit “unprecedented autonomy, coordination, and scale.”

Ren said content moderation alone is unlikely to stop these systems. The problem, he said, is how platforms manage identity at scale. Stronger identity checks and limits on account creation, he said, could make coordinated behavior easier to detect, even when individual posts appear human.

“If the agent can only use a small number of accounts to post content, then it’s much easier to detect suspicious usage and ban those accounts,” he said.

No simple fix

The researchers concluded that there is no single solution to the problem. Potential options include improved detection of statistically anomalous coordination and greater transparency around automated activity, but they say technical measures alone are unlikely to be sufficient.
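To picture what detecting "statistically anomalous coordination" could look like in practice, consider a simple behavioral-similarity check. The sketch below is purely illustrative and is not drawn from the paper, which does not prescribe any particular method; the account names, toy data, and threshold are all hypothetical. The idea is to flag pairs of accounts whose posting rhythms are nearly identical even when their message text differs.

```python
# Illustrative sketch only: flag account pairs whose hourly posting
# rhythms are nearly identical, regardless of what the posts say.
# Account names, counts, and the threshold are hypothetical.
from itertools import combinations
import math

# Hour-of-day posting counts (24 buckets) per account -- toy data.
activity = {
    "acct_a": [0, 0, 3, 9, 12, 7, 1, 0] * 3,
    "acct_b": [0, 0, 3, 10, 11, 7, 1, 0] * 3,
    "acct_c": [2, 1, 0, 0, 1, 3, 5, 4] * 3,
}

def cosine(u, v):
    # Cosine similarity between two activity vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Pairs moving in behavioral lockstep score close to 1.0,
# even if their post wording is completely different.
for a, b in combinations(activity, 2):
    score = cosine(activity[a], activity[b])
    if score > 0.98:  # threshold chosen arbitrarily for illustration
        print(f"possible coordination: {a} <-> {b} (similarity {score:.3f})")
```

Real coordination-detection systems would weigh many more behavioral signals than posting times, but the example shows why varying language alone does not hide accounts that act in lockstep.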

According to Ren, financial incentives also remain a persistent driver of coordinated manipulation attacks, even as platforms introduce new technical safeguards.

“These agent swarms are usually controlled by teams or vendors who are getting monetary incentives from external parties or companies to do the coordinated manipulation,” he said. “Platforms should enforce stronger KYC and spam detection mechanisms to identify and filter out agent-manipulated accounts.”

Editor's note: Adds comments from Schroeder.
