AI and Cyber-Enabled Threats to Democracy: Algorithmic Manipulation and Generative AI in Undermining Democratic Integrity
Abstract
The increasing integration of artificial intelligence (AI) into digital platforms has escalated threats to democratic integrity worldwide, primarily through algorithmic manipulation, generative AI technologies, and large language models (LLMs). This study investigates how state and non-state actors systematically leverage these technologies to destabilise democracies. The paper scrutinises empirical cases from the United States, the European Union, India, Türkiye, Argentina, and Taiwan, analysing the operational mechanisms and socio-political implications of AI-driven disinformation. Findings demonstrate how generative AI, deepfake technologies, and sophisticated behavioural targeting exacerbate polarisation, weaken institutional trust, and distort electoral processes. Despite the growing prevalence of such cyber-enabled interference, regulatory and institutional responses remain fragmented and inadequate. Consequently, this research proposes a strategic implementation framework emphasising platform transparency, regulatory innovation, technological safeguards, and civic resilience measures. This framework provides actionable guidance for safeguarding democratic integrity amid evolving AI threats.
Copyright (c) 2025 Md. Abul Mansur

This work is licensed under a Creative Commons Attribution 4.0 International License.