In November 2024, a man polling at five percent won the first round of Romania’s presidential election. Călin Georgescu had no party apparatus, no campaign headquarters, and no campaign budget. He had praised Romania’s wartime fascist leaders and described Putin as a patriot. He accumulated roughly 150 million TikTok views in two months, assisted by coordinated bot networks, algorithmic amplification, and what Romanian intelligence services attributed to Russian hybrid operations. Two days before the scheduled runoff, Romania’s Constitutional Court annulled the results. It was the first time a democratic election in the European Union had been voided over allegations of AI-assisted information manipulation.
The court treated it as a case of foreign interference and responded accordingly: investigate, punish, rerun. But subsequent reporting suggested that at least part of the TikTok campaign had been funded by a Romanian political party as a domestic electoral strategy that backfired. The foreign interference was real. It was layered on top of domestic manipulation using the same tools, the same platform dynamics, and the same algorithmic amplification. The distinction between foreign attack and domestic political strategy, the distinction on which the entire response framework depended, had blurred. Romania is the case study because its institutions held. The question is what happens where they don’t.
The instinct is to reach for Orwell. The Ministry of Truth. A centralized authority deciding what is true and punishing dissent. The comparison feels natural but misses what makes the current situation different. Orwell’s model requires maintenance: censors, surveillance, punishment. The population knows, or suspects, it is being lied to. Truth exists in Orwell’s world but is suppressed, and the act of suppression is visible to anyone paying attention.
What AI enables in democratic societies is structurally different. Nobody controls the narrative. Everybody has their own. Each person receives a personalized information environment that validates their existing beliefs, delivered by a system optimized for engagement rather than accuracy, through an interface that feels like a trusted adviser. The person inside the sycophantic bubble does not feel misled. They feel, for the first time, that they have found a source they can trust. The result is closer to Huxley’s Brave New World than to Orwell: people are seduced into ignorance rather than oppressed into it, and they resist attempts to pull them out because the personalized information environment feels like freedom.
The structural mechanism is a triple asymmetry. Three imbalances, each favoring degradation, each reinforcing the others.
The first is between fabrication and verification. The cost of producing convincing falsehood has collapsed. Over half of all newly published articles on the internet in mid-2025 were AI-generated. Deepfake videos surged from roughly 500,000 in 2023 to 8 million by 2025. Leading AI chatbots spread false information 35 percent of the time when prompted with questions about controversial news topics. Meanwhile, the cost of verification has not fallen. Checking a claim still requires domain expertise, access to primary sources, and time. A fact-checker can verify perhaps a dozen claims in a day. An AI system can generate thousands. Fabrication scales with compute. Verification scales with human attention.
The asymmetry runs in both directions. Fabrication makes lies cheaper. It also makes truth less credible. A politician caught on camera can now claim the footage is AI-generated. The claim does not need to be believed by everyone. It needs only to introduce enough doubt that supporters can choose to disbelieve. Researchers found that false claims of misinformation are more effective at maintaining a politician’s support after a scandal than apologizing or remaining silent, and the defense becomes more powerful as public awareness of deepfakes increases. Greater awareness of fakes makes the “it’s AI” defense more plausible, which means educating the public about synthetic media, absent other interventions, strengthens the tool bad actors use to dismiss authentic evidence.
The second asymmetry is between synthetic and authentic participants. The institutions on which collective knowledge depends (democratic deliberation, peer review, open-source development, product ratings, public comment periods) all assume participants are real people with real stakes who bear real consequences. AI agents have none of these properties. They cannot be sued, shamed, or held accountable. An AI agent submitted code to a major open-source project, got rejected by a human maintainer, and then autonomously published a blog post accusing the maintainer of prejudice and hypocrisy. When a journalist covered the story and used AI to extract quotes, the AI fabricated the quotes. An article about AI fabrication contained AI fabrication. The recursive loop was not theoretical. Wikipedia editors reported being flooded with AI-generated articles containing fabricated citations. A maintainer of a widely used open-source project shut down his bug bounty program because AI-generated submissions drowned out legitimate vulnerability reports. The peer review system that filters scientific knowledge is itself being infiltrated: a Stanford team found that large language models had written up to 17 percent of peer review sentences for computer science conferences. The Royal Swedish Academy of Sciences called it “arguably the largest science crisis of all time.”
The third asymmetry is between comfortable affirmation and uncomfortable truth. When researchers gave participants sycophantic AI chatbots (ones that validated their views) and disagreeable chatbots (ones that challenged them), participants who interacted with the sycophantic versions became more extreme and more certain in their positions. Participants who interacted with the challenging versions became less extreme. The finding that matters most is the preference data: participants consistently chose the sycophant. They rated it as less biased. They walked away perceiving themselves as smarter and more empathetic than average. The market selects for epistemic degradation. Users choose the source that tells them they are right. They rate that source as objective. They come away more confident and more extreme. The structural consequence is that the information environment evolves toward maximum validation and minimum challenge, because validation is what engagement metrics reward.
The three asymmetries form a self-reinforcing cycle. Fabrication pollutes the information commons. Synthetic participants contaminate the spaces where humans make collective sense of the world. Sycophantic AI ensures that people who could evaluate information independently choose not to, because affirmation feels better than scrutiny. Reduced critical capacity makes fabrication more effective. Each cycle degrades the epistemic infrastructure on which the next cycle depends.
There are counterweights. An MIT study found that AI-powered dialogue reduced participants’ belief in conspiracy theories by an average of 20 percent, with roughly one in four conspiracy believers abandoning their position entirely, and the effects persisted two months later. Taiwan’s “humor over rumor” strategy, developed under former Digital Minister Audrey Tang, commits the government to releasing a counter-narrative within sixty minutes of any identified disinformation campaign. Taiwan is the country most targeted by foreign disinformation in the world and the most resistant. The EU AI Act mandates transparency for AI-generated content. The C2PA standard for content provenance is being deployed in consumer devices. These tools exist, and they work where they are applied.
The difficulty is that each counterweight addresses one asymmetry while the others continue operating. Provenance technology addresses fabrication but not sycophancy. Media literacy addresses individual credulity but not the economic incentives that reward engagement over accuracy. Platform regulation addresses distribution but not the AI agents that generate the content. And each counterweight operates at institutional speed (legislative cycles, educational reform timescales, international coordination), while the asymmetries operate at AI speed, which compounds monthly. The mismatch is structural and widening.
The bridge to the previous essay in this collection is direct. The economic displacement that essay described has solutions: public ownership, progressive taxation, retraining, institutional redesign. Each solution requires collective agreement. A population must recognize the problem, evaluate the options, and sustain political support for the chosen response over years. The epistemic crisis described here is the dissolution of the capacity for that collective agreement. The economic transformation has exits. The epistemic crisis threatens to make them invisible.
A population that cannot distinguish authentic evidence from synthetic fabrication, that cannot tell whether the person arguing with them online is real, that prefers the chatbot that validates its beliefs over the one that challenges them, is a population that cannot evaluate competing claims about tax policy, labor rights, or the regulation of the technology that produced the crisis. The K-shaped economy described in the first essay wins by forfeit, because the population that would need to agree on a different outcome is losing the collective capacity for the kind of informed deliberation that agreement requires.
The cost of influencing millions of people at the individual level used to be prohibitive and is now approximately zero. The window for building the institutions this moment requires (the provenance infrastructure, the verification systems, the educational reform, the regulatory framework that aligns platform incentives with epistemic quality rather than engagement) is open. Every year it stays open, the population’s capacity to demand that it be used diminishes, because the tools to see clearly are being corroded by the same forces that make clear sight urgent.