Navigating the risks of AI and democracy – What the rise of AI swarms reveals about the future of influence, information, and democratic resilience.
As we move into the era of agentic AI, what kind of influence will this emerging technology have on democracy and misinformation? In the new Science paper "How Malicious AI Swarms Can Threaten Democracy," Amit Goldenberg, Assistant Professor of Business Administration at Harvard Business School and a Faculty PI at the Digital Data Design Institute at Harvard (D^3), and an international, multi-disciplinary group of co-authors argue that we're entering a phase where "malicious AI swarms" could use multi-agent systems to infiltrate communities, mimic human social behavior, and iteratively refine persuasion tactics in real time. By expanding misinformation into persistent manipulation, these systems threaten the information ecosystem that democratic societies depend on. Yet Goldenberg and his co-authors also outline technical, economic, and institutional measures that could meaningfully defend against this new danger.
Why This Matters
For business leaders and professionals, this study reveals a threat that extends beyond electoral politics into the fundamental information ecosystem that underpins market confidence, consumer behavior, and corporate reputation. The same AI swarm technologies that manipulate political discourse could target brand perception, financial markets, or industry narratives just as easily. The defense strategy outlined by the authors can similarly provide a roadmap for corporate action: implementing detection systems for monitoring threats to brand reputation, advocating for industry standards around AI transparency, and supporting governance initiatives that protect the broader information ecosystem. Executives who treat information integrity as core infrastructure will be better positioned to protect stakeholder trust, decision quality, and long-term resilience in an era of AI-enabled influence operations.
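To make the idea of a "detection system" concrete, here is a minimal illustrative sketch, not from the paper, of one naive signal such a system might use: flagging clusters of near-duplicate posts published within a short time window, a crude proxy for coordinated inauthentic activity. The `Post` type, thresholds, and the lexical-similarity heuristic are all assumptions for illustration; real swarm detection would need far richer signals (account metadata, network structure, model-based scoring).

```python
# Hypothetical heuristic sketch: flag bursts of near-duplicate messages.
# All names and thresholds here are illustrative assumptions, not the
# authors' method.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Post:
    author: str
    timestamp: float  # seconds since some epoch
    text: str


def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Crude lexical similarity; a swarm that paraphrases would evade this."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def flag_bursts(posts: list[Post], window: float = 300.0,
                min_cluster: int = 3) -> list[list[Post]]:
    """Group posts that closely resemble a cluster's first post and arrive
    within `window` seconds of the cluster's latest post; report groups
    large enough to look coordinated."""
    posts = sorted(posts, key=lambda p: p.timestamp)
    clusters: list[list[Post]] = []
    for post in posts:
        placed = False
        for cluster in clusters:
            recent = (post.timestamp - cluster[-1].timestamp) <= window
            if recent and similar(post.text, cluster[0].text):
                cluster.append(post)
                placed = True
                break
        if not placed:
            clusters.append([post])
    return [c for c in clusters if len(c) >= min_cluster]
```

In practice a monitoring pipeline would feed flagged clusters to human reviewers rather than act on them automatically, since lexical bursts also occur innocently (e.g., people sharing the same headline).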
Bonus
For a look at how efforts to align AI systems with human preferences can unintentionally undermine trustworthiness itself, check out "AI Alignment: The Hidden Costs of Trustworthiness."
References
[1] Daniel Thilo Schroeder et al., "How Malicious AI Swarms Can Threaten Democracy," Science 391 (2026): 354.
[2] Schroeder et al., "How Malicious AI Swarms Can Threaten Democracy," 355.
[3] Schroeder et al., "How Malicious AI Swarms Can Threaten Democracy," 357.
Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community