Two distinct phenomena are converging on the same result. The first is the flood of synthetic text. As of May 2025, over half of all newly published articles on the internet were generated by AI, up from roughly five percent before ChatGPT launched in late 2022 (Graphite via Axios). Much of this content is SEO filler and boilerplate rather than deliberate deception. It is simply unreliable: produced at a volume that makes quality control impossible, and increasingly indistinguishable from content a person actually wrote and stands behind. The second phenomenon is deliberate fabrication. In January 2024, fraudsters used deepfake technology to impersonate the CFO and multiple colleagues of UK engineering firm Arup on a video conference call, extracting $25.6 million across 15 transactions before the fraud was discovered a week later. Every participant on the call except the victim was an AI-generated fabrication (CNN; Fortune). In the 2024 global election supercycle, over 80 percent of countries experienced observable AI interference in their electoral processes (CIGI). Synthetic slop and targeted deepfakes differ in intent. They are identical in effect: they make it impossible to know whether what you are reading, watching, or hearing is real. The largest longitudinal study of news diffusion on social media found that falsehood spread farther, faster, and more broadly than truth in every category: false stories were 70 percent more likely to be shared and reached audiences roughly six times faster (Vosoughi, Roy & Aral, Science, 2018). That was before generative AI reduced the cost of producing convincing fabrication to nearly zero.
The deeper danger is that people will stop believing anything. For most of human history, the difficulty of fabrication functioned as an invisible guarantor of trust. Photographs were evidence because faking them required a darkroom and expertise. Video was definitive because no one could manufacture it convincingly. That assumption is now broken. Legal scholars Robert Chesney and Danielle Citron have called the resulting dynamic the “liar’s dividend”: the existence of deepfakes allows real wrongdoers to dismiss authentic evidence as fabricated. A study in the American Political Science Review found that falsely claiming a damaging story is misinformation preserves a politician’s support after a scandal more effectively than apologizing or remaining silent (Schiff, Schiff & Bueno, 2024). The corrosion runs in every direction. Politicians dismiss authentic recordings by claiming AI manipulation. A veteran CNN anchor ranted on air about a video of a congresswoman that was an openly labeled AI parody, watermarked “parody 100% made with AI” (Salon). UNESCO has warned of a “synthetic reality threshold” beyond which humans can no longer distinguish authentic from fabricated media without technological assistance (UNESCO). The consequences reach into courtrooms, where the Advisory Committee on the Federal Rules of Evidence voted in May 2025 to seek public comment on a new rule governing AI-generated evidence (Quinn Emanuel). They reach into custody disputes, where doctored recordings have been submitted to portray a parent as violent (University of Baltimore Law Review). The institutions that hold democratic society together, from courts and science to the press and elections, are all built on the assumption that evidence is hard to fabricate. That assumption no longer holds.

That leaves two responses, and both are inadequate. The first is epistemic surrender: people stop evaluating claims on their merits and instead believe whoever their group believes. This is already the dominant mode in American political life, where partisan affiliation predicts belief on empirical questions more reliably than education does. The second is state control. China’s internet censorship apparatus blocks over 100,000 websites, employs AI to filter content in real time, and removed over 2.5 million messages in 2024 under its “Clean Network” campaign (Freedom House, 2025). The system controls narrative effectively. It also eliminates dissent, punishes journalism, and manufactures the unreality it claims to prevent.
The contamination extends to the knowledge base itself. In 2024, Wiley retracted over 11,300 articles from its Hindawi journals and shut down 19 titles after discovering they had been flooded with paper mill submissions. A Stanford team found that large language models had written up to 17 percent of peer review sentences for computer science conferences (Chemistry World). When the peer review system that is supposed to filter knowledge is itself partly written by the technology it evaluates, the trust infrastructure of science becomes circular. The contamination is recursive in another sense. A study in Nature demonstrated that when AI models are trained on data that includes their own output, they undergo “model collapse”: the tails of the original distribution disappear, and diversity degrades progressively (Shumailov et al., Nature, 2024). Pre-2024 human-generated content may prove to be among the most valuable datasets in existence. It is also finite, increasingly stale, and already being scraped, paywalled, and restricted. Models trained on clean pre-AI data will progressively lag behind a changing reality. Models trained on current data will progressively absorb the synthetic contamination that degrades reliability. The epistemic infrastructure risks degrading in both directions.
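The mechanism behind model collapse is easy to see in miniature. Here is a minimal sketch, with a deliberately small sample size to make the drift visible (the distribution, sample count, and generation count are illustrative choices, not the paper’s experimental setup): fit a Gaussian to data, sample a fresh dataset from the fit, and repeat. Each fit writes off a little of the tails as sampling error, and resampling never restores them.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 20            # samples per generation (small, to make the drift visible)
generations = 200

# Generation 0: "human" data drawn from a standard normal.
data = rng.normal(0.0, 1.0, size=n)

for gen in range(1, generations + 1):
    # Maximum-likelihood Gaussian fit to the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation is trained only on the previous model's output.
    data = rng.normal(mu, sigma, size=n)
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

# Typical output: sigma shrinks toward zero across generations while mu
# wanders, i.e. the tails of the original distribution vanish first.
```

The toy case compresses into a few hundred generations what the Nature study demonstrates for language models trained on their own text: the rare events go first, and nothing in the loop puts them back.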
The governance response centers on content provenance. The Coalition for Content Provenance and Authenticity (C2PA), an alliance of Adobe, Microsoft, Intel, the BBC, and over 200 other organizations, has developed Content Credentials: a cryptographic standard that functions as a nutrition label for digital media. Camera manufacturers including Leica, Sony, and Nikon now ship hardware that signs photographs at capture. The NSA endorsed the standard in January 2025 (NSA/DoD; C2PA). This is the right direction. It is also insufficient. C2PA proves authenticity when a credential is present; it cannot stop the credential from being stripped. Verification is opt-in. Fabrication is the default. There is, however, a third model between individual helplessness and state control. Wikipedia has operated for over two decades on crowd-regulated content with transparent sourcing requirements, real-time editorial review, and accountability mechanisms that are neither governmental nor anarchic (PEN America/Wikimedia Foundation, 2024). The model is imperfect: coverage is uneven and editing demographics skew narrow. But it suggests that the space between “trust nothing” and “trust the state” can be filled by institutional structures that distribute verification across communities and enforce transparent standards. C2PA provides the cryptographic layer. What is missing is the institutional layer: the community structures, editorial standards, and accountability mechanisms that make provenance meaningful.
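To make the cryptographic layer concrete, here is a toy sketch of capture-time signing in Python using the widely available cryptography package. This is not the C2PA manifest format or its tooling; real Content Credentials embed a structured, certificate-chained manifest in the media file. The sketch only shows the core mechanic: the device signs the bytes at capture, and any later alteration breaks verification.

```python
# Toy illustration of capture-time signing (NOT the C2PA format:
# real Content Credentials use certificate-backed manifests embedded
# in the media file, not a bare detached signature like this).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera, this key would live in secure hardware, provisioned
# at manufacture and attested by the vendor's certificate chain.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

image_bytes = b"...raw sensor output..."  # stand-in for a captured photo

# Sign at the moment of capture.
signature = camera_key.sign(image_bytes)

def verify(data: bytes, sig: bytes) -> bool:
    """Check the photo against the camera maker's public key."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(verify(image_bytes, signature))             # True: untouched capture
print(verify(image_bytes + b"edit", signature))   # False: any change breaks it
```

The sketch also makes the limitation visible: strip the signature and what remains is an ordinary unsigned file, indistinguishable from the vast majority of media that never carried a credential. Provenance can prove presence; it cannot prove absence, which is why the institutional layer matters.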
Key tension: Fabrication is cheap, fast, and the default. Verification is expensive, slow, and opt-in. Every institution that depends on evidence, from courts to science to democracy, is built on the assumption that convincing fabrication is scarce. That assumption no longer holds.