Shade 11 ~75%

Foreign AI Subversion

Tier 2: Highly Probable

Unmanaged: -4
Governed: 1
Dividend: 5

AI enables foreign interference at a scale and subtlety that prior generations of disinformation could not approach. The threat has moved beyond crude bot farms. AI can generate millions of unique, psychologically targeted messages, each optimized for its recipient, delivered through channels indistinguishable from organic conversation. A foreign power need not hack an election if it can shape what every voter believes in the months before they cast a ballot. The content is not false in any easily identifiable way; it is selectively true, framed to manipulate, not to inform.

Romania provided the proof of concept. In November 2024, previously obscure far-right candidate Călin Georgescu won Romania’s presidential first round after polling in the single digits weeks earlier. His campaign was almost entirely digital, generating approximately 150 million TikTok views in two months. Romanian intelligence uncovered over 85,000 cyberattacks against electoral infrastructure, coordinated bot networks, AI-generated content amplification through Telegram channels, and stolen election server credentials found on Russian forums. On December 6, 2024, Romania’s Constitutional Court annulled the first-round results, an unprecedented act in EU history, citing overwhelming evidence that the election’s integrity had been compromised. TikTok’s own 2025 report identified a network of 27,217 accounts coordinating to promote Georgescu through a fake engagement provider.

When Romania reran the election in May 2025, the digital interference returned. Monitoring firm Refute detected approximately 32,500 TikTok videos containing inauthentic content, with 48% of engagement originating outside Romania despite only 24% of Romanian nationals living abroad. The European Commission opened formal proceedings against TikTok under the Digital Services Act, an investigation that remains ongoing more than a year later, prompting Romanian MEPs to call the response “unacceptably slow” (IIEA, 2025; Global Witness, 2025; TechPolicy.Press, 2025; Romania Insider, 2026).

The Romanian case was state-level manipulation with relatively crude tools. The next iteration will not need bot networks that can be identified post hoc. Voice cloning now requires just 20 to 30 seconds of audio. A convincing video deepfake can be created in 45 minutes with freely available software. Deepfake fraud surged 1,740% in North America between 2022 and 2023, with financial losses exceeding $200 million in Q1 2025 alone. The WEF reports that human accuracy at identifying deepfakes hovers at 55 to 60%, barely better than chance, while automated detection systems suffer accuracy drops of 45 to 50% on real-world deepfakes relative to laboratory conditions (WEF, 2025). An October 2025 European Parliamentary Research Service report found that AI-generated content had overtaken human-made content online by November 2024, reaching 52% of all content by May 2025 (EPRS, 2025). When most of what people encounter online is already synthetic, the marginal cost of inserting state-sponsored manipulation into that stream approaches zero.

The domestic attack surface expanded concretely in 2024-2025. President Trump shared AI-generated images to ridicule political opponents. A Virginia congressional candidate debated an AI-generated avatar of his opponent. Senator Amy Klobuchar confronted a deepfake of herself spewing vulgarities. Former governor Andrew Cuomo deployed deepfake technology against his mayoral opponent (TechPolicy.Press, 2026). These are domestic examples; state-level foreign operations have stronger incentives, fewer constraints, and access to the same tools. In April 2025, NewsGuard found that AI chatbots repeated false narratives from Russian influence operation Storm-1516, a spinoff of Russia’s Internet Research Agency, 32% of the time. The operation laundered disinformation through fake local news sites and fabricated whistleblower videos, directly targeting European leaders with deepfakes designed to discredit Ukraine (EPRS, 2025).

Romania’s institutions did function, and that matters. The court annulled a compromised election, new elections were held, and the pro-European candidate won. Democracies have proven more resilient to information operations than pessimists predicted. Social media platforms have gotten substantially better at identifying coordinated inauthentic behavior. CISA and similar bodies provide early warning. Detection technology improves alongside generation technology.

But the cost curves are diverging. Generation is cheap and scales effortlessly; detection is expensive, lags behind, and degrades outside laboratory conditions. Producing a convincing deepfake video costs minutes and dollars. Verifying every piece of content encountered by every voter costs effectively infinite resources. Mustafa Suleyman’s “containment problem,” articulated in The Coming Wave (2023), applies directly: once a powerful technology is widely accessible, restricting its misuse becomes structurally harder with each passing year. Content provenance infrastructure, digital watermarking, and public media investment define the governed outcome. The 25 states that have passed laws regulating AI in elections represent early efforts, but the federal vacuum leaves defenses fragmented. And the EU’s response to Romania, widely criticized as too slow, suggests that even democracies with advanced regulatory tools cannot move at the speed the threat demands.

Key tension: The openness that makes democratic discourse possible is the same vulnerability that makes it exploitable. Romania demonstrated both the threat and the institutional response. The question is whether institutions can scale their defenses as fast as adversaries can scale their attacks.