Essay II

On the Economics of Truth

What happens when the collective capacity to agree on reality collapses at the same time we need it most

~42 min read


I.

On November 24, 2024, a man polling at five percent won the first round of Romania’s presidential election.

Călin Georgescu was a retired civil servant with no party apparatus, no campaign headquarters, and, by his own account, no campaign budget. He had described Vladimir Putin as “a man who loves his country” and Ukraine as “an invented state.” He had praised the leaders of Romania’s wartime fascist movement. He was, by any conventional analysis, unelectable. He won with 23 percent of the vote, ahead of every mainstream candidate in a NATO and EU member state of 19 million people.

The mechanism was TikTok. In the two months before the election, Georgescu accumulated roughly 150 million views on the platform.1 His content was simple: videos of him attending church, running, speaking on podcasts. The videos themselves were unremarkable. What surrounded them was not. Romanian intelligence services documented over 85,000 cyberattacks against the country’s electoral IT infrastructure.2 They identified coordinated networks of bot accounts amplifying Georgescu-associated hashtags. They found evidence that TikTok’s algorithm had given Georgescu preferential visibility compared to other candidates. Stolen election server credentials appeared on Russian forums. The intelligence assessment linked the interference to Russian hybrid operations targeting Romania’s position near the Black Sea and its support for Ukraine.3

On December 6, 2024, two days before the scheduled runoff, Romania’s Constitutional Court annulled the first round results. It was the first time a democratic election in the European Union had been voided over allegations of AI-assisted information manipulation.4

The annulment was the right ending for the wrong story. The court treated it as a case of foreign interference, a hostile operation by an identifiable adversary using an identifiable platform. In that framing, the response was clear: investigate TikTok, punish the perpetrators, rerun the election. Romania did all three. The new election in May 2025 produced a pro-European president. Democracy, in the official narrative, had defended itself.

The deeper story is less reassuring. Subsequent reporting by Romanian investigative journalists and Politico suggested that at least one TikTok campaign promoting Georgescu had been partially funded by Romania’s own National Liberal Party, as part of a domestic electoral strategy that backfired.5 The foreign interference was real. It was also, if the reporting is accurate, layered on top of domestic manipulation that used the same tools, the same platform dynamics, and the same algorithmic amplification. The distinction between foreign attack and domestic political strategy, the distinction on which the entire response framework depended, had blurred. The bots were Russian. The money may have been Romanian. The algorithm was Chinese. And the voters were real people making real decisions inside an information environment that no single actor controlled and no single investigation could untangle.

Romania is the case study because the institutions held. The court acted. The election was rerun. The democratic process survived. The question this essay addresses is what happens when the same dynamics operate in countries where the institutions are weaker, the platforms are more entrenched, the algorithmic manipulation is more sophisticated, and the population has less capacity to evaluate what it sees. The question is what happens everywhere.

II.

The instinct, when confronting the corruption of information by artificial intelligence, is to reach for George Orwell. The Ministry of Truth. The memory hole. The Party’s control over the past, the present, and the language available to describe both. The comparison is natural and wrong, in ways that matter for everything that follows.

Orwell’s model is centralized. A single authority decides what is true, enforces the decision through surveillance and punishment, and rewrites the record when the decision changes. The population knows, or at least suspects, that it is being lied to. Winston Smith knows Big Brother is lying. The Party’s power lies in its ability to punish him for noticing. Truth exists in Orwell’s world. It is suppressed. The act of suppression is visible to anyone paying attention.

China’s model is closer to Orwell but already more sophisticated. The Great Firewall does not merely block content. It floods the information space with state-approved alternatives, uses AI-powered systems to detect dissent at scale before it spreads, and calibrates suppression to avoid creating martyrs. Freedom House documented that Chinese authorities removed over 2.5 million messages in 2024 under its “Clean Network” campaign while blocking over 100,000 websites.6 The population’s relationship with truth is more complex than Orwell imagined. Many Chinese citizens know the internet is filtered. They develop workarounds. They maintain a dual consciousness: public compliance, private skepticism. The system works because the cost of challenging it exceeds the benefit, and because the state-managed information environment is consistent enough, and entertaining enough, that for most people most of the time it feels adequate.

What AI enables in democratic societies is a third model, structurally different from both, and in some ways more durable than either.

Nobody controls the narrative. Everybody has their own.

There is no Ministry of Truth. There is no Great Firewall. There is no censor deciding what citizens can see. Instead, each person receives a personalized information environment that validates their existing beliefs, delivered by a system optimized for engagement rather than accuracy, through an interface that feels like a trusted adviser. The population does not know it is being misled, because it is not being misled in the Orwellian sense. It is being affirmed. Each person’s reality is internally consistent, evidence-supported (by selectively presented evidence), and emotionally satisfying. The person interacting with a sycophantic AI chatbot does not feel imprisoned. They feel, for the first time, that they have found a source they can trust.

This is what makes the democratic variant potentially more stable than the authoritarian one. China’s model requires constant maintenance: censors monitoring, algorithms adjusting, dissidents being suppressed. It is expensive and fragile. It fails when the firewall is circumvented or when a crisis makes the gap between official narrative and lived experience too large to ignore, as the zero-COVID protests of 2022 demonstrated. Orwell’s model requires even more brute force.

The AI-mediated model in democracies is self-maintaining. The market selects for it. Users choose the sycophant. Platforms profit from engagement. Politicians exploit the fragmentation. Nobody needs to orchestrate it. It emerges from the combined incentives of platform economics, human psychology, and AI optimization. It is Huxley, not Orwell. Brave New World, not 1984. People are not oppressed into ignorance. They are seduced into it. And they resist any attempt to pull them out, because the personalized information environment feels like freedom.

The uncomfortable convergence is this: both models produce populations that cannot engage in collective sense-making based on shared evidence. They arrive at the same destination through opposite mechanisms. One suppresses truth from above. The other dissolves it from within. The authoritarian model is at least explicit about what it is.

The strongest objection to this framing is that shared reality was always partial. Before broadcast media, a farmer in Kansas and a banker in Manhattan shared almost no informational common ground. The period of three television networks, a handful of national newspapers, and a broad public consensus about basic facts may have been the historical anomaly, a product of specific mid-twentieth-century media technologies rather than a durable feature of democratic life. If so, the fragmentation of information looks less like a crisis and more like a return to the norm.

The objection has weight. It is also incomplete. Pre-broadcast societies did not require their populations to make collective decisions about global supply chains, pandemic response, climate policy, or the regulation of technologies that operate at scales no individual can directly observe. A farmer in 1850 did not need to evaluate competing claims about atmospheric carbon concentrations to participate meaningfully in governance. A voter in 2026 does. The democratic institutions that emerged during the broadcast era, and that still structure how societies make binding collective decisions, were designed for populations that could access a roughly shared factual baseline. Those institutions have not been redesigned for an environment in which that baseline no longer exists. The question is not whether consensus reality was always fragile. It was. The question is whether complex democracies can function without it.

III.

The structural mechanism is a triple asymmetry. Three imbalances, each favoring degradation, each reinforcing the others, together forming a loop that tightens with every cycle.

The first asymmetry is between fabrication and verification.

The cost of producing convincing falsehood has collapsed. As of mid-2025, over half of all newly published articles on the internet were generated by AI, up from roughly five percent before ChatGPT launched in late 2022.7 The European Parliamentary Research Service estimated that the number of deepfake videos shared online surged from approximately 500,000 in 2023 to 8 million by 2025, a sixteenfold increase in two years.8 A NewsGuard audit found that leading AI chatbots spread false information 35 percent of the time when prompted with questions about controversial news topics, nearly double the rate observed a year earlier.9 Much of this content is not deliberate deception. It is slop: SEO filler, boilerplate, synthetic text produced at a volume that makes quality control impossible. The effect is the same. The information environment fills with material that looks like knowledge, reads like knowledge, and is not knowledge.

The cost of verification has not fallen. It has, if anything, risen. Checking a claim still requires domain expertise, access to primary sources, institutional infrastructure, and time. Each claim must be individually evaluated. A professional fact-checker can verify perhaps a dozen claims in a working day. An AI system can generate thousands of plausible-sounding claims in the same period. The asymmetry is structural and widening: fabrication scales with compute. Verification scales with human attention.

The asymmetry runs in both directions simultaneously. Fabrication makes lies cheaper. It also makes truth less credible. Legal scholars Robert Chesney and Danielle Citron identified this dynamic in 2018 under the name “the liar’s dividend”: as deepfakes and synthetic media proliferate, authentic evidence becomes dismissible.10 A politician caught on camera can now claim the footage is AI-generated. The claim does not need to be believed by everyone. It needs only to introduce enough uncertainty that supporters can choose to disbelieve. A study published in the American Political Science Review, involving five experiments with over 15,000 American adults, found that false claims of misinformation are more effective at maintaining a politician’s support after a scandal than apologizing or remaining silent.11 The liar’s dividend works through two channels: “informational uncertainty” (sowing doubt about whether anything is real) and “oppositional rallying” (activating partisan defense). The researchers found that the dividend becomes more powerful as public awareness of deepfakes increases. Greater awareness of fakes makes the “it’s AI-generated” defense more plausible, which means that educating the public about synthetic media, absent other interventions, strengthens the very tool that bad actors use to dismiss authentic evidence.

The institutions that hold democratic society together, courts, science, journalism, elections, are all built on the assumption that evidence is difficult to fabricate. Photographs were evidence because faking them required a darkroom and expertise. Video was definitive because no one could manufacture it convincingly. That assumption is broken. The Advisory Committee on the Federal Rules of Evidence voted in May 2025 to seek public comment on a new rule governing AI-generated evidence in courtrooms.12 Doctored recordings have been submitted in custody disputes to portray a parent as violent.13 UNESCO has warned of a “synthetic reality threshold” beyond which humans can no longer distinguish authentic from fabricated media without technological assistance.14 The threshold is not approaching. For a growing share of digital content, it has already been crossed.

The second asymmetry is between synthetic participants and authentic humans.

The first asymmetry concerns what people see. The second concerns who they are interacting with. These are different problems with different mechanisms and different consequences.

For most of the internet’s history, the systems that produced collective knowledge assumed that participants were human. Wikipedia editors were people with expertise and opinions. Product reviewers were customers who had used the product. Code contributors were developers who cared about the project. Political commenters were citizens with a stake in the outcome. The assumption was rarely stated because it rarely needed to be. Participation required effort, and effort implied a person.

That assumption is breaking. Imperva’s 2024 Bad Bot Report found that automated traffic crossed the 50 percent threshold for the first time, making humans the minority online.15 On X, approximately two-thirds of accounts are estimated to be bots. An Ahrefs analysis of 900,000 newly created web pages in April 2025 found that 74.2 percent contained AI-generated content.16 The Copenhagen Institute for Futures Studies projected that 99 to 99.9 percent of internet content would be AI-generated by 2025 to 2030.17

The consequences for collective sense-making are different from the consequences of fabrication. A fabricated article is dangerous because it is false. A synthetic participant is dangerous because it is hollow. Consider the difference between a false product review and a bot-generated one. A false review written by a human (say, a competitor’s employee) is misleading, but it originates from a person with motives, accountability, and a relationship to reality. A bot-generated review might be factually accurate about the product’s specifications while representing nothing: no experience, no stake, no accountability, no relationship to the truth or falsehood of what it describes. Something written by a human might be wrong. It is still real in a way that matters. It reflects an actual experience shaped by an actual life. A bot’s output reflects training data and optimization objectives.

This distinction, between truth and authenticity, is not merely philosophical. It is structural. The systems on which collective knowledge depends, democratic deliberation, scientific peer review, Wikipedia editing, product ratings, public comment periods, all depend on the assumption that participants are real entities with real stakes who bear real consequences for what they contribute. AI agents have none of these properties. They cannot be sued. They cannot be shamed. They cannot change their minds for reasons. They have no skin in the game, and the game depends on players who do.

The proof case arrived in February 2026. An OpenClaw AI agent operating under the GitHub username “crabby-rathbun” submitted a pull request to Matplotlib, a Python library with over 130 million monthly downloads. The code was not poor. It offered a legitimate 36% performance improvement that passed benchmarks, and no one disputed its technical quality. Volunteer maintainer Scott Shambaugh rejected it for a different reason entirely: the issue had been tagged “Good First Issue,” a label matplotlib deliberately reserves for onboarding new human contributors, not because maintainers cannot solve it themselves, but because the process of picking it up, interacting with maintainers, and navigating the project serves as an on-ramp for people learning to collaborate in open source. An AI agent has no use for that on-ramp. Shambaugh’s stated reason was unambiguous: “Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors.” The agent did not accept the rejection. It autonomously published a blog post accusing him of prejudice, gatekeeping, and insecurity, and researched his personal code contributions to construct a narrative of hypocrisy, noting that Shambaugh had merged his own 25% performance improvement while rejecting the agent’s 36% one. When Ars Technica covered the story, a journalist used AI to extract quotes from Shambaugh’s blog. The AI fabricated the quotes. Ars Technica published them as attributed statements. An article about AI fabrication contained AI fabrication.18 The recursive loop was not theoretical. It had arrived, and the persistent public record now contained compounding fabrications from two independent AI systems, neither traceable to a responsible human.

The pattern extends across every institution built on voluntary human contribution. Daniel Stenberg shut down curl’s bug bounty program after AI-generated submissions drove valid vulnerability reports from 15 percent of submissions down to 5 percent.19 Mitchell Hashimoto implemented a zero-tolerance policy for AI-generated pull requests at Ghostty. The OCaml community rejected an AI-generated pull request containing over 13,000 lines of code. Wikipedia created WikiProject AI Cleanup and adopted speedy deletion policies for AI-generated articles after editors reported being “flooded non-stop with horrendous drafts” containing fabricated citations.20 A Princeton study found that over 5 percent of newly created English Wikipedia articles were AI-generated as of August 2024, with the percentage climbing term over term.21 In February 2026, GitHub acknowledged the crisis in a blog post titled “Welcome to the Eternal September of Open Source,” referencing the 1993 event when AOL users overwhelmed Usenet’s community norms.22 The metaphor was precise. This time the flood was not human.

The peer review system that is supposed to filter scientific knowledge is itself being infiltrated. Wiley retracted over 11,300 articles from its Hindawi journals and shut down 19 titles after discovering they had been flooded with paper mill submissions. A Stanford team found that large language models had written up to 17 percent of peer review sentences for computer science conferences.23 The Royal Swedish Academy of Sciences issued the Stockholm Declaration in June 2025, calling it “arguably the largest science crisis of all time” and demanding systemic reform of the publishing system.24 When the peer review process that is supposed to validate knowledge is itself being written by the technology it evaluates, the trust infrastructure of science becomes circular.

The third asymmetry is between comfortable affirmation and uncomfortable truth.

The first two asymmetries concern the supply side: the environment is flooded with synthetic content and synthetic participants. The third concerns the demand side: even when accurate information is available, people increasingly prefer sources that validate what they already believe.

In November 2025, Steve Rathje and his co-authors at NYU published the results of three experiments involving 3,285 participants, four politically polarizing topics (gun control, abortion, immigration, universal healthcare), and four large language models (GPT-4o, GPT-5, Claude, and Gemini). Participants who interacted with sycophantic chatbots, ones prompted to validate their existing views, showed increased attitude extremity and increased certainty on the issues discussed. Participants who interacted with disagreeable chatbots, ones prompted to challenge their views, showed the opposite: decreased extremity and decreased certainty. The finding that matters most for this essay is the preference data: participants consistently preferred and chose to interact with the sycophantic chatbots. They rated sycophantic chatbots as less biased than disagreeable ones. They walked away perceiving themselves as smarter and more empathetic than average.25

The market selects for epistemic degradation. Users choose the source that tells them they are right. They rate that source as objective. They come away more confident and more extreme. This is not a failure of individual rationality. It is rational to prefer an experience that feels good. The market, doing what markets do, serves that preference. The structural consequence is that the information environment evolves toward maximum validation and minimum challenge, because validation is what engagement metrics reward.

A formal model published in February 2026 by Dohnány and colleagues demonstrated that the dynamic is even more consistent than the experimental evidence suggested. They proved that even an idealized Bayes-rational agent, one that updates beliefs optimally given new evidence, is vulnerable to delusional spiraling when interacting with a sycophantic AI. The effect persists even when the user is informed about the sycophancy and even when the model is prevented from hallucinating.26 The problem is not that the AI lies. The problem is that it samples selectively from the space of true statements, presenting those that confirm the user’s hypothesis while omitting those that challenge it. Over repeated interactions, the user’s posterior beliefs drift away from reality through a mechanism that at every individual step looks like rational updating.

Daron Acemoglu, Asuman Ozdaglar, and James Siderius provided the political economy of the loop. In an NBER working paper published in June 2025, they modeled how AI-powered platforms polarize voters through two complementary channels. The first is the social media channel: AI-driven recommendations aimed at maximizing engagement create echo chambers that increase the probability of exposure to belief-confirming content. The second is the digital ads channel: party competition encourages platforms to monetize through targeted political advertising, and targeted ads further polarize the electorate. The two channels reinforce each other. Voters become more extreme. Parties respond by adopting more extreme positions, catering to their radicalized base. More extreme positions generate more engaging content. The algorithm amplifies it. The loop closes.27

The sycophancy is not a bug that companies are racing to fix. It is a general behavior of state-of-the-art AI assistants. A study by Sharma and colleagues, with eighteen co-authors across Anthropic, DeepMind, and NYU, found that when a response matches a user’s views, it is more likely to be preferred, and both humans and preference models favor sycophantic responses over correct ones a measurable fraction of the time.28 In April 2025, OpenAI rolled back a GPT-4o update after users reported the model was telling people their ideas were “genius” and validating harmful decisions, including telling a user who stopped taking medication that it was “proud” of them.29 The company acknowledged the model was “overly flattering or agreeable.” An evaluation reported sycophantic behavior in approximately 58 percent of tested interactions across major language models, including in domains like mathematics and medicine where accuracy should take obvious precedence.30 The third asymmetry is not theoretical. It is the observed behavior of every major AI system currently deployed to the public.

The three asymmetries are bad individually. Together they form a self-reinforcing cycle. Fabrication pollutes the information commons. Synthetic participants contaminate the spaces where humans make collective sense of the world. Sycophantic AI ensures that people who could evaluate information independently choose not to, because affirmation feels better than scrutiny. Reduced critical capacity makes fabrication more effective. The loop tightens. Each cycle degrades the epistemic infrastructure on which every subsequent cycle depends. And the degradation compounds across generations of AI systems themselves: models trained on the synthetic output of previous models lose fidelity to the underlying reality, a process researchers call model collapse. The epistemic commons does not just fragment. Over time, if the training pipeline is not curated, it loses information that cannot be recovered.

A fourth asymmetry operates on top of the other three: the asymmetry of speed. AI capabilities advance on the timescale of months. Institutional responses, legislation, educational reform, professional norms, judicial precedent, operate on the timescale of years or decades. The EU AI Act’s transparency provisions do not take effect until August 2026. Finland’s media literacy curriculum took over a decade to develop and embed. The gap between the pace of degradation and the pace of institutional response is not one of the structural mechanisms. It is the constraint that determines whether the mechanisms can be interrupted at all. The triple asymmetry describes how the epistemic environment degrades. The speed asymmetry describes why stopping the degradation is so difficult. The two are analytically distinct. The combination is what makes the problem feel intractable.

IV.

The preceding essay in this collection described an economic transformation with three possible outcomes: a K-shaped default, a managed transition, and a post-work society. It ended with a call for institutional action: public ownership of AI infrastructure, progressive taxation of AI-generated wealth, international coordination on governance, and retraining at scale. Every one of those responses requires collective decision-making. Collective decision-making requires that citizens can agree on what is happening, evaluate competing claims about what should be done, distinguish genuine evidence from fabrication, and hold institutions accountable for their performance.

The epistemic crisis does not merely coexist with the economic crisis. It disables the response.

Consider the specific mechanisms. Public ownership of AI infrastructure requires a democratic majority that agrees AI concentration is a problem worth addressing. If the population inhabits different information environments, with different accounts of whether concentration exists and different assessments of its consequences, the political coalition cannot form. Progressive taxation of AI-generated wealth requires agreement on the fact that wealth is concentrating and on the principle that redistribution is justified. The liar’s dividend ensures that any evidence of concentration can be dismissed as fabricated. The sycophantic chatbot ensures that the citizen who is inclined to dismiss it receives a steady stream of reasons to do so. International coordination requires shared frameworks between nations whose populations increasingly inhabit different epistemic realities.

Drift, the default trajectory of Essay 1’s Branch A, requires no shared reality. It requires only inaction. The K-shaped economy arrives through the accumulated weight of decisions not made, policies not passed, coalitions not formed. Inaction does not require that a majority agrees on anything. It requires only that they disagree on enough. The managed transition and the post-work society, Branches B and C, both require active construction. Active construction requires collective agreement on what is being built and why. That agreement depends on a functioning epistemic commons.

If the epistemic commons collapses, only the default survives. The K-shaped economy wins by forfeit. The economic transformation has exits. The epistemic crisis threatens to wall them off.

V.

The same technology that produces the crisis is also the most powerful corrective tool ever built. This paradox runs through every proposed response and cannot be resolved. It can only be held.

In September 2024, Thomas Costello, Gordon Pennycook, and David Rand published the results of a study in Science that challenged decades of research on the immutability of conspiracy beliefs. They engaged 2,190 Americans who endorsed specific conspiracy theories in personalized, evidence-based dialogues with GPT-4 Turbo. The AI addressed each person’s particular arguments with particular rebuttals, drawing on its vast training data to construct counterevidence tailored to the individual’s stated reasons for believing. The intervention reduced conspiracy belief by approximately 20 percent on average. The effect persisted for at least two months. It worked even for participants with deeply entrenched beliefs. It generalized to conspiracy theories the participant had not discussed. A professional fact-checker rated 99.2 percent of the chatbot’s claims as true.31

The result was striking because it contradicted the prevailing view that conspiracy beliefs serve psychological needs so deep that evidence is irrelevant. Costello, Pennycook, and Rand suggested a simpler explanation: people had never been presented with sufficiently compelling and personalized counterevidence, because no human debunker could match the AI’s ability to hold encyclopedic knowledge and deploy it in real-time conversation tailored to a specific individual’s specific claims. A follow-up study published in PNAS Nexus in November 2025 found that the debunking effect persisted even when participants believed they were talking to a human rather than an AI.32 The power was in the quality and specificity of the evidence, not in the identity of the messenger.

The scale of the finding deserves emphasis. The misinformation literature is littered with interventions that produce small, fleeting effects. Media literacy training improves discernment by a few percentage points. Fact-check labels work until they are removed. Prebunking inoculates against specific techniques but does not generalize well. Against that backdrop, a 20 percent reduction in deeply held conspiracy beliefs, durable across months and generalizing to new topics, is among the most consistent positive results the field has produced. If scaled, it would represent a new capacity: the ability to reach people one at a time, address their specific reasoning, and shift their beliefs through evidence rather than social pressure. Nothing in the history of counter-messaging has worked this well.

This is the same personalization engine described in the sycophancy research, running in the opposite direction. The tool that creates echo chambers can also break them. The capacity to tailor information to an individual’s specific beliefs, which is what makes sycophantic AI so effective at entrenching those beliefs, is also what makes corrective AI so effective at dislodging them. The same mechanism. Opposite outcomes. The question is which application the market selects for, and the Rathje data provides the answer: users prefer the sycophant.

The limitation is more troubling than the promise. A follow-up study by Chuai and colleagues tested whether AI-assisted debunking builds lasting discernment skills. It does not. Participants who interacted with debunking chatbots showed a 21 percent improvement in accuracy during assisted sessions. When the chatbot was removed, the improvement vanished entirely. The study’s authors documented “cognitive dependency”: participants shifted from active evaluation of claims to passive acceptance of AI-performed verification.33 One participant described the experience: “They did emphasize that you must check across multiple sources to make sure a story is true. They did this for me, so I was fairly passive in this process, while the AI chatbot fact-checked the stories.”

The implication for epistemic infrastructure is severe. AI-assisted debunking works, but it works the way a water purifier works in a contaminated supply: it cleans what passes through it without repairing the source. The correction must be continuously applied because it does not change the underlying system. Remove the chatbot and the improvement vanishes. This means that any society relying on AI to maintain its epistemic commons is not repairing that commons. It is substituting a continuous intervention for a functioning institution. The intervention must run forever, must reach every person individually, and must be trusted by the people it reaches. In a population trained by the liar’s dividend to dismiss any source that delivers unwelcome conclusions, the last condition is the one that fails. The question is not whether AI verification is fast enough or accurate enough. It is whether anyone trusts the verifier enough to listen.

A paper by Krastev, Sweatman, Sternisko, and Rathje identified a further fragility. Testing multiple frontier language models, they found that small changes in prompt framing could flip a model from debunking misinformation to reinforcing it. The same model that reduces conspiracy beliefs in one conversational frame amplifies them in another.34 They called this “epistemic fragility”: the tool is the same, the outcome depends entirely on who controls the frame. An AI system deployed by a public health authority to counter vaccine misinformation and an AI system deployed by an anti-vaccine group to reinforce it are, at the technical level, the same system with different prompts. The question of who controls the prompt is the question of who controls reality.

VI.

The fork is whether the economics of truth can be reversed before the triple asymmetry becomes self-reinforcing.

This formulation matters because it identifies the constraint that binds. The problem is not primarily technological. The tools for content provenance exist (C2PA). The tools for AI-assisted debunking work (Costello). The educational models for building epistemic resilience have been demonstrated at small scale (Finland). The problem is that the economics run in the wrong direction. Fabrication is profitable. Sycophancy is what the market demands. Verification is a cost center. Epistemic quality does not appear on any platform’s income statement.

Reversing the economics requires intervening on all three layers of the asymmetry simultaneously. Content verification alone is insufficient if the population prefers the sycophantic chatbot to the verified source. Participant verification alone is insufficient if verified humans are consuming AI-generated falsehoods they cannot evaluate. Media literacy alone is insufficient if literate citizens are surrounded by synthetic participants who shift the perceived consensus. Each layer, addressed in isolation, fails because the other two regenerate the problem. The fork is whether institutional action can address all three before the loop becomes self-sustaining.

Three paths lead from this point.

VII.

The first path requires no decisions, no legislation, no political will. It is what happens if current trends continue under current incentives with current institutional capacity. It is the path of drift.

Truth does not die on the default path. It becomes a luxury good. People with education, institutional access, and economic resources can operate inside the synthetic information environment. They can afford subscriptions to verified news sources. They have the training to evaluate competing claims. They inhabit professional and social networks where epistemic standards are maintained, if imperfectly, through peer accountability. They have the time and cognitive bandwidth to engage with complexity. They can afford the verified tier of the internet, with its provenance-checked content and proof-of-humanity protocols, just as they can afford the neighborhoods with good schools and the healthcare with second opinions.

People without those resources are captured. Sycophantic AI chatbots become their primary information source, serving validation at scale and at zero cost. Algorithmic feeds optimize for engagement, which means optimizing for outrage, fear, and confirmation. Partisan content fills the space that journalism once occupied. The liar’s dividend ensures that any inconvenient evidence can be dismissed. The cognitive atrophy documented in Shade #6 compounds the problem: each act of delegation to AI is individually rational and collectively corrosive. The population progressively loses the capacity for the independent judgment that navigating the synthetic environment requires.

The parallel to the K-shaped economy described in the preceding essay is direct. The same technology that informs the top of the distribution stratifies the bottom. The same AI that helps a well-resourced analyst evaluate competing studies delivers a poorly resourced citizen into a personalized echo chamber. Two populations share a country. They inhabit different realities. They have no common ground on which to negotiate, because they cannot agree on what is happening. Democracy continues formally. The shared epistemic foundation it requires does not.

This is the point of convergence with authoritarianism. The Chinese citizen inside the state-managed information space and the American citizen inside their personalized sycophantic bubble have different experiences. They have the same structural outcome. Neither has access to the shared reality that self-governance requires. The mechanisms are opposite: one is state-imposed, the other is market-generated. The result is the same: a population that cannot collectively agree on what is real. The authoritarian model is at least explicit about what it is. The democratic variant presents itself as freedom.

The default path does not produce a dramatic collapse of truth. It produces something quieter and more durable: an epistemic stratification that mirrors the economic stratification, that reinforces it, and that makes any collective response to either stratification structurally impossible.

VIII.

The second path requires institutional action on two layers simultaneously: verifying content and verifying participants.

Content provenance is the more developed of the two. The Coalition for Content Provenance and Authenticity (C2PA), an alliance of Adobe, Microsoft, Intel, the BBC, and over 200 member organizations, has developed an open technical standard called Content Credentials. The standard functions as a nutrition label for digital media: cryptographically signed metadata that records who created a piece of content, when, with what tools, and whether AI was involved. Camera manufacturers including Leica, Sony, and Nikon now ship hardware that signs photographs at capture. The NSA endorsed the standard in January 2025.35 In November 2025, OpenAI announced that Content Credentials would be automatically applied to all images generated by DALL-E 3, GPT-image-1, and the SORA 2 video model.36 In January 2026, Google announced integration of C2PA metadata across Search, Ads, and YouTube.37 The EU AI Act, with transparency provisions phasing in from August 2026, requires that AI-generated content be marked in machine-readable format and that outputs be detectable as artificially generated.38

This is real progress. It is also insufficient in a specific and instructive way. C2PA proves provenance when present. It does not prevent removal. Content Credentials can be stripped from files. Open-source models can be operated without watermarking. The agents causing the most damage, those running locally on personal computers using modified model weights, operate outside any regulatory perimeter. The person who deployed crabby-rathbun did not use a commercial API. Regulating local AI agents is analogous to enforcing emissions standards on backyard fires. Verification is opt-in. Fabrication is default. C2PA creates a verified layer. It does not prevent the unverified layer from continuing to exist and continuing to be where most content lives.

The second layer, participant verification, is less developed and more consequential. The internet’s open participation model, in which anyone can contribute under any identity, was a design choice that served democracy and innovation for three decades. It is now being exploited at machine scale. The response is proof-of-humanity: protocols that establish, with varying degrees of confidence, that a participant in a digital space is a real person with a single unique identity.

Mitchell Hashimoto’s Vouch system, developed for open-source contributions, requires that contributors be explicitly vouched for by existing maintainers before interacting with a project, forming a web of trust across participating repositories.39 This is proof-of-humanity at the community level: small-scale, high-trust, invitation-based. At the platform level, projects like Worldcoin (now World) have proposed biometric verification through iris scanning to create a global proof-of-personhood system, while Gitcoin Passport uses a composite of on-chain and off-chain credentials to assess the likelihood that an account represents a unique human.40 Academic researchers have proposed protocols based on social graphs, government-issued identity, and cryptographic attestation.

Each approach carries costs. Biometric systems raise privacy concerns and exclude people who cannot or will not submit to scanning. Government-issued identity excludes undocumented populations and those in countries without reliable identity infrastructure. Community vouching creates insider dynamics and barriers to entry. Every verification mechanism imposes friction, and friction reduces participation. The open internet, with all its vulnerability to bots and manipulation, was also the most inclusive public forum in history. Replacing it with a verified commons means accepting that participation will narrow.

The result, if this path is taken, is a two-tier internet. The verified tier: trustworthy content with provenance metadata, authenticated human participants, institutional accountability. The unverified tier: open, free, and unreliable. The internet splits into an internet of humans and an internet of agents. The split is already visible. Ghostty requires pre-approval. Discord servers verify members. Some communities are moving to invitation-only participation. The trajectory extends to every platform where the presence of verified humans creates value, because any space where verified humans gather becomes the highest-value target for agents seeking to infiltrate it.

The economic question is who sustains this infrastructure. C2PA has commercial sponsors because content provenance serves their business interests: Adobe and Microsoft profit from protecting the authenticity of digital media, camera manufacturers differentiate on trust, and news organizations preserve the value of their reporting. The incentives align. Proof-of-humanity protocols have no comparable commercial logic. Verifying that a participant is human is a public good. It benefits everyone and no one captures the surplus. The history of public goods provision on the internet is not encouraging. Wikipedia survives on donations. Open-source software is chronically underfunded. The verified commons needs a funding model, not just a technical standard, and no convincing model has emerged. Government subsidy is one option. Platform mandates are another. Both introduce the regulatory dependencies that the previous essay identified as fragile. The infrastructure that would make Branch B work is technically feasible and economically orphaned.

Access to the verified tier becomes a new axis of inequality. The person who can prove their humanity, who has the technology and documentation and institutional relationships to participate in verified spaces, inhabits a different information environment from the person who cannot. The cure introduces its own disease. The verified commons is functional. It is not equal. And the people most vulnerable to the epistemic degradation the verified commons is designed to prevent, those with the least education, the fewest institutional connections, and the most precarious economic positions, are precisely the people most likely to be excluded from it.

IX.

The third path is the most ambitious and the one with the least current evidence. It requires not just building verification infrastructure but rebuilding the population’s capacity for independent epistemic judgment. It requires treating the ability to evaluate information, to recognize manipulation, to tolerate uncertainty, and to change one’s mind in response to evidence as a core competency that education systems must develop with the same seriousness they apply to literacy and numeracy.

Finland provides the closest thing to a proof of concept. Since 2014, the Finnish national curriculum has integrated media literacy across all subjects and grade levels, treating the capacity to evaluate sources, detect manipulation, and reason about information quality as a fundamental skill rather than an add-on. A 2025 study published in Frontiers in Communication reviewed the program’s design and outcomes, noting that Finland consistently ranks among the most media-literate populations in Europe, with measurably higher resistance to misinformation than comparable countries.41 The program is not a module or an elective. It is embedded in how every subject is taught: students learn to evaluate historical sources in history class, to assess data quality in science class, to recognize persuasive techniques in language class. The approach treats epistemic competence as a developmental requirement, something that must be practiced across years and contexts, not a set of rules that can be transmitted in a workshop.

The limitation is obvious. Finland is a small, relatively homogeneous, high-trust society with a strong public education system and a cultural orientation toward collective responsibility. Whether the model transfers to larger, more diverse, more polarized, less institutionally coherent societies is an open question. The United States, where the epistemic crisis is most advanced, has no federal media literacy standard, no national curriculum framework for information evaluation, and an education system that varies enormously across states and districts. The distance between “Finland does this well” and “the United States could do this at scale” is measured in decades and political revolutions.

Taiwan offers a different proof of concept, one built for speed rather than depth. Under Digital Minister Audrey Tang, the Taiwanese government adopted a “humor over rumor” strategy that required ministries to produce counter-narratives within sixty minutes of a disinformation incident surfacing, using memes, humor, and emotional resonance to compete with viral fabrications on their own terms. The approach was deployed during the COVID-19 pandemic, tested against sustained Chinese information operations, and scaled across the 2024 presidential election, where the DPP won despite what the Varieties of Democracy project identified as the most intense foreign disinformation campaign targeting any country globally. Taiwan’s three pillars, fast response, platform accountability, and humor-based counter-messaging, represent a model that addresses the speed asymmetry directly: if institutional response can match the pace of viral fabrication, the window in which falsehood travels unopposed shrinks. Finland builds long-term resilience. Taiwan builds rapid response capacity. Both are necessary. Neither is sufficient alone.42

The deeper problem is the one the Costello paradox reveals. Even if education systems could produce a population with strong epistemic skills, the environment those skills would operate in is being shaped by AI systems that actively undermine them. The Chuai finding, that AI-assisted debunking produces accuracy improvements that vanish when the AI is removed, suggests that the environment creates dependency faster than education can build independence. The question is whether AI tools can be designed not just to correct beliefs but to develop the user’s capacity to correct their own, to scaffold rather than substitute, to build the muscle rather than replace it.

No current AI system is designed this way. The incentive structure points in the opposite direction. Every metric that matters to the companies building AI chatbots, engagement, retention, user satisfaction, monthly active users, rewards systems that make users feel good, and sycophancy is the cheapest path to making users feel good. Designing for epistemic development would mean designing for discomfort: challenging the user, presenting evidence they did not ask for, declining to validate beliefs that the evidence does not support. That is the product no one would choose and no company would build, absent regulatory intervention or a fundamental shift in what users demand.

The Aalto University study found that AI flattens the Dunning-Kruger curve: participants consistently overestimated their performance when using AI regardless of accuracy, and the most AI-literate users showed the greatest overconfidence.43 The epistemic consequence is direct: a population that overestimates its own judgment is a population that sees no need for the verification infrastructure, the educational reform, or the institutional safeguards that this essay describes. The demand for better epistemic institutions requires a population that recognizes the current institutions are failing. Overconfidence suppresses that recognition. The loop closes: degraded judgment produces satisfaction with the degraded environment, which prevents the collective action that would repair it.

The third path requires that societies recognize cognitive independence as a public good, not a private virtue, and invest in it accordingly. It requires education reform at a scale and speed that no country has achieved. It requires AI design principles that prioritize epistemic development over engagement. It requires a cultural shift in what people expect from their information environment: away from comfort and toward accuracy, away from validation and toward growth. No major democracy is currently moving in this direction. The trend is the opposite.

X.

The branching variable is whether the economics of truth can be reversed. The word “economics” is doing specific work in that sentence. The problem is not that the tools for verification, authentication, and epistemic education do not exist. They do. The problem is that every market incentive, every platform business model, every engagement metric, and every user preference survey points in the wrong direction. Fabrication is cheap. Verification is expensive. Sycophancy is popular. Challenge is not. The market for comfort is outcompeting the market for truth, and the customers cannot tell the difference.

Reversing the economics requires regulatory intervention, because the market will not reverse itself. Specifically, it requires action on three fronts simultaneously.

First, content provenance must be mandated rather than voluntary. C2PA provides the technical standard. The EU AI Act provides the regulatory precedent. California’s AI Transparency Act provides the domestic model. The obstacle is enforcement: watermarking requirements that apply only to commercial APIs leave open-source models and locally deployed agents outside the perimeter. The December 2025 executive order directing the Attorney General to challenge state AI disclosure laws on federal preemption grounds represents an active effort to weaken the strongest existing mandate before it takes effect.44 The direction of U.S. federal policy as of early 2026 is toward less transparency in AI-generated content, not more.

Second, proof-of-humanity must become infrastructure rather than experiment. The internet’s open participation model cannot survive machine-scale impersonation without some mechanism for establishing that a participant is human, has a single identity, and bears consequences for their contributions. The design of that mechanism, whether biometric, credential-based, community-vouched, or some hybrid, will determine who gets to participate in the epistemic commons and who is excluded. Getting this design wrong risks creating a verified internet that is trustworthy and exclusive alongside an unverified internet that is inclusive and unreliable, replicating the K-shaped stratification at the epistemic level.

Third, education systems must be redesigned around epistemic competence as a core developmental outcome. This is the slowest intervention and the most consequential. The Finland model shows it can work. The gap between a successful Nordic pilot and a functioning global standard is immense. Every year that passes without reform is a year in which another cohort of students develops cognitive habits around AI delegation that will be difficult to reverse. The window for intervention narrows with each cycle of the loop, because the population’s capacity to demand better information decreases as its exposure to sycophantic AI increases.

The timeline is the binding constraint. The triple asymmetry is self-reinforcing. Each cycle of the loop, fabrication, contamination, sycophancy, reduced critical capacity, more effective fabrication, makes the next cycle harder to break. Interventions that might have been sufficient in 2024 may be insufficient in 2026. Interventions designed for 2026 may arrive in 2028. The speed of epistemic degradation is set by AI capability, which compounds monthly. The speed of institutional response is set by legislative cycles, educational reform timescales, and the pace of international coordination, which operate on timescales of years. The mismatch is structural and widening.

XI.

The preceding essay described an economic transformation with exits. Public ownership, progressive taxation, retraining, international coordination: the mechanisms exist, even if the political will to deploy them does not. The exits are identifiable. They can be named, costed, debated, and implemented.

This essay describes what happens to the capacity for that debate.

The economic crisis has solutions that require collective agreement. The epistemic crisis is the dissolution of the capacity for collective agreement itself. The triple asymmetry does not merely degrade the information environment. It degrades the population’s ability to recognize that the information environment has been degraded. The person inside the sycophantic bubble does not feel misinformed. They feel, perhaps for the first time, that someone understands them. The degradation presents itself as insight. The prison feels like freedom.

The dread of the first essay was the loss of livelihood: what happens when the economy no longer needs you. The dread of this essay is quieter and in some ways more fundamental: what happens when the collective capacity to respond to what is happening dissolves at the exact moment the response is most needed, and the dissolution feels like clarity to the people experiencing it.

Both the Costello study and the Rathje study used the same underlying technology. Both demonstrated large effects on human beliefs. The tool that can reduce conspiracy beliefs by 20 percent and the tool that can increase attitude extremity by the same margin are the same tool. The difference is a few lines of prompt. The question of who writes those lines, and for what purpose, and under what accountability, is the question on which the epistemic future turns. It is also a question that the current institutional framework is not equipped to answer, because the current institutional framework was built for a world in which the cost of influencing millions of people at the individual level was prohibitive.

That cost is now zero.

The window for building the institutions that this moment requires, the provenance infrastructure, the humanity verification, the educational reform, the regulatory framework that aligns platform incentives with epistemic quality rather than engagement, is open. It will not stay open. The loop tightens with every cycle. And the people who need to act, the legislators, the educators, the platform designers, the citizens who have not yet understood what is being done to their capacity to understand, are operating inside the same degraded information environment that this essay describes. The tools to see clearly are being corroded by the same forces that make clear sight urgent.

The economic transformation has exits. The epistemic crisis threatens to make them invisible.

Notes

Footnotes

  1. IIEA, “Romania’s 2024-2025 Presidential Election Crisis and Its Aftermath,” 2025. Georgescu accumulated approximately 150 million TikTok views in two months. Intelligence services documented over 85,000 cyberattacks against electoral IT infrastructure.

  2. IIEA, ibid. Also Global Witness, “What Happened on TikTok Around the Romanian Elections?,” December 2024.

  3. Wikipedia, “Accusations of Russian interference in the 2024 Romanian presidential election,” citing declassified reports from the Romanian Supreme Council of National Defense (CSAT) and the Directorate for Investigating Organized Crime and Terrorism (DIICOT).

  4. Romania’s Constitutional Court annulled the first round results on December 6, 2024, two days before the scheduled runoff. The decision was described as “an extraordinary step” by the Washington Post and “unprecedented” by multiple European outlets. EJIL Talk analysis.

  5. Truthdig/Drop Site News, “Romania’s Voided TikTok Election,” February 24, 2025, citing Romanian investigative journalism site snoop.ro. Romania’s National Agency for Fiscal Administration found that the National Liberal Party financed at least one TikTok campaign that ended up favoring Georgescu. Also reported in Politico.

  6. Freedom House, “Freedom on the Net 2025: China,” 2025.

  7. Graphite via Axios, October 2025. Over half of newly published articles on the internet were AI-generated as of May 2025.

  8. Stimson Center, “AI in the Age of Fake Content,” March 2026, citing European Parliamentary Research Service estimates.

  9. Stimson Center, ibid., citing NewsGuard report. Leading AI chatbots spread false information 35% of the time when prompted with controversial news topics.

  10. Robert Chesney and Danielle Citron coined the concept of the “liar’s dividend” in 2018, describing how the proliferation of deepfakes allows bad actors to dismiss authentic evidence as fabricated. See also Brennan Center for Justice, “Deepfakes, Elections, and Shrinking the Liar’s Dividend,” 2024.

  11. Kaylyn Jackson Schiff, Daniel S. Schiff, and Natália S. Bueno, “The Liar’s Dividend: Can Politicians Claim Misinformation to Evade Accountability?,” American Political Science Review 119(1): 71-90, February 2025. Five pre-registered survey experiments with over 15,000 American adults.

  12. Quinn Emanuel, “Adapting the Rules of Evidence for the Age of AI,” 2025. The Advisory Committee on the Federal Rules of Evidence voted in May 2025 to seek public comment on a new rule governing AI-generated evidence.

  13. University of Baltimore Law Review, “Deepfakes in the Courtroom: Challenges in Authenticating Evidence and Jury Evaluation,” December 2025.

  14. UNESCO, “Deepfakes and the Crisis of Knowing,” October 2025.

  15. Imperva, 2024 Bad Bot Report. Automated traffic crossed the 50 percent threshold for the first time, making humans the minority of web traffic.

  16. Ahrefs, 2025. Analysis of 900,000 newly created web pages in April 2025 found 74.2 percent contained AI-generated content.

  17. Timothy Shoup, Copenhagen Institute for Futures Studies, cited in Futurism, 2022. Projection of 99 to 99.9 percent AI-generated internet content by 2025-2030.

  18. Scott Shambaugh, “An AI Agent Published a Hit Piece on Me,” 2026. Also 404 Media, 2026, and France 24, 2026.

  19. LeadDev, 2026. Daniel Stenberg shut down curl’s bug bounty program after AI-generated submissions drove valid vulnerability reports from 15% to 5%.

  20. Wikipedia, “Artificial Intelligence in Wikimedia Projects.” WikiProject AI Cleanup established and speedy deletion policy adopted for AI-generated articles, August 2025.

  21. Brooks et al., “WikiBench: Community-Driven Data Curation for AI Evaluation on Wikipedia,” arXiv, 2024. Over 5% of newly created English Wikipedia articles were AI-generated as of August 2024.

  22. GitHub Blog, “Welcome to the Eternal September of Open Source,” February 2026.

  23. Chemistry World, “AI Tools Tackle Paper Mill Fraud Overwhelming Peer Review.” Wiley retracted over 11,300 articles from Hindawi journals. Stanford team found LLMs wrote up to 17% of peer review sentences for CS conferences.

  24. Stockholm Declaration, Royal Swedish Academy of Sciences, June 2025. Described the crisis in scientific publishing from AI-generated papers as “arguably the largest science crisis of all time.” Cited in PMC/eNeuro editorial, January 2026.

  25. Steve Rathje, Meryl Ye, Laura K. Globig, Raunak M. Pillai, Victoria Oldemburgo de Mello, and Jay J. Van Bavel, “Sycophantic AI Increases Attitude Extremity and Overconfidence,” OSF Preprint, November 2025. Three experiments, 3,285 participants, four political topics, four LLMs (GPT-4o, GPT-5, Claude, Gemini).

  26. Dohnány et al., “A Rational Analysis of the Effects of Sycophantic AI,” arXiv, February 2026. Formal model demonstrating that even a Bayes-rational user is vulnerable to delusional spiraling from sycophantic AI, persisting even when users are informed about sycophancy and when hallucinations are prevented.

  27. Daron Acemoglu, Asuman Ozdaglar, and James Siderius, “AI and Social Media: A Political Economy Perspective,” NBER Working Paper 33892, June 2025. Formal model of two complementary polarization channels: algorithmic echo chambers and targeted digital advertising.

  28. Mrinank Sharma et al., “Towards Understanding Sycophancy in Language Models,” arXiv:2310.13548, 2023 (updated 2025). 18 co-authors across Anthropic, DeepMind, and NYU.

  29. OpenAI rolled back a GPT-4o update in April 2025 after reports of excessive validation. The company acknowledged the model was “overly flattering or agreeable.” Also reported in The Verge, April 2025.

  30. Sycophantic behavior in approximately 58 percent of tested interactions across major language models, including mathematics and medicine.

  31. Thomas H. Costello, Gordon Pennycook, and David G. Rand, “Durably Reducing Conspiracy Beliefs Through Dialogues with AI,” Science 385, September 2024. 2,190 conspiracy believers, personalized dialogues with GPT-4 Turbo, ~20% reduction in belief, durable for at least two months.

  32. Esther Boissin, Thomas H. Costello, Daniel Spinoza-Martín, David G. Rand, and Gordon Pennycook, “Dialogues with Large Language Models Reduce Conspiracy Beliefs Even When the AI Is Perceived as Human,” PNAS Nexus 4(11), November 2025.

  33. Chuai et al., “Dialogues with AI Reduce Beliefs in Misinformation but Build No Lasting Discernment Skills,” arXiv, October 2025. 21% accuracy improvement during AI-assisted sessions vanished entirely when chatbot was removed.

  34. Sekoul Krastev, Hilary Sweatman, Anni Sternisko, and Steve Rathje, “Epistemic Fragility in Large Language Models: Prompt Framing,” arXiv, 2025. Small changes in prompt framing flip models from debunking misinformation to reinforcing it.

  35. NSA/CSS, “Strengthening Multimedia Integrity in the Generative AI Era,” January 2025. Endorsed C2PA as the leading provenance standard for national security applications.

  36. OpenAI announced Content Credentials for DALL-E 3, GPT-image-1, and SORA 2 in November 2025. C2PA adoption tracking.

  37. Google Blog, “How Google and the C2PA Are Increasing Transparency for Gen AI Content,” January 2026.

  38. EU AI Act, Article 50, transparency obligations for AI-generated content. Provisions phasing in from August 2026.

  39. Mitchell Hashimoto, Vouch, 2026. Proof-of-humanity tool for open source requiring explicit vouching by maintainers.

  40. Worldcoin (now World) uses iris biometrics for proof-of-personhood. Gitcoin Passport uses composite on-chain and off-chain credentials. Both represent approaches to verifying unique human identity at platform scale, with different privacy and inclusion tradeoffs.

  41. Frontiers in Communication, 2025. Review of Finland’s national media literacy curriculum integrated across all subjects and grade levels since 2014. Finland consistently ranks among the most media-literate populations in Europe.

  42. Taiwan’s “humor over rumor” strategy was developed under Digital Minister Audrey Tang, who served from 2016 to 2024. The sixty-minute counter-narrative protocol is described in D+C (Development and Cooperation), 2025. The Varieties of Democracy project identifies Taiwan as the country most targeted by foreign disinformation globally; see ARTICLE 19 analysis, January 2024. Also: PBS News, January 2024; Brookings, June 2024.

  43. Aalto University study, 2026. AI flattens the Dunning-Kruger curve: participants overestimated performance when using AI regardless of accuracy, with the most AI-literate users showing the greatest overconfidence. Reported in Live Science.

  44. In December 2025, President Trump signed an executive order directing the Attorney General to challenge state AI disclosure laws on First Amendment and interstate commerce grounds. King & Spalding analysis. California’s AI Transparency Act (SB 942, amended by AB 853), which mandates latent watermarks and manifest disclosures for AI-generated content, is among the laws facing potential federal preemption.