Shade 20 ~30%

The Democratic AI / Cognitive Bill of Rights

Tier 4: Possible

Unmanaged: N/A
Governed: 4
Dividend: Active creation

Democratic input into AI is not hypothetical. It is being attempted, at national scale, with measurable results. The question is whether these experiments will remain advisory consultations hosted by the companies they are meant to constrain, or whether they will become structural governance with binding authority over the infrastructure that is reshaping how billions of people think, work, and participate in public life.

The urgency of that question became concrete in February 2026. The Pentagon demanded that Anthropic remove two safeguards from its $200 million defense contract: prohibitions on mass domestic surveillance and fully autonomous weapons. Anthropic refused. Defense Secretary Pete Hegseth gave CEO Dario Amodei a deadline: accept unrestricted use “for all lawful purposes” by 5:01 p.m. on February 27, or face consequences. Amodei refused again, stating that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” The Pentagon designated Anthropic a “supply chain risk to national security”, a classification previously reserved for foreign adversaries like Huawei. Within hours, OpenAI signed a replacement deal with the Pentagon. Anthropic sued the Trump administration, calling the designation “unprecedented and unlawful”. As Lawfare noted, both Amodei and OpenAI’s Sam Altman have publicly stated that these decisions should be made by Congress, not by companies. The Harvard Kennedy School’s Carr-Ryan Center called the standoff evidence of “governance by procurement”: the most consequential decisions about AI in warfare are being made in bilateral contract negotiations between a defense secretary and a startup CEO, with no democratic input, no durable constraints, and no framework that survives the next change of administration. Oxford’s analysis of the dispute identified a further structural gap: neither the Anthropic nor the OpenAI agreement prohibits mass surveillance of foreign nationals, and NATO and Five Eyes partners who integrated Anthropic models into shared platforms now face unresolved legal and operational questions. Bilateral contracts cannot produce universal protections. That is the vacancy this shade is written to fill.

The dispute’s resolution confirmed the competitive substitution mechanism. When the Iran offensive began on February 28, 2026, the Pentagon was already using Palantir’s AI targeting systems, which incorporate Claude, to identify strike targets. Over 11 days, AI-enabled targeting supported approximately 5,500 strikes. OpenAI’s replacement contract completed the picture: the market worked as the shade predicted, and the provider with the fewest restrictions won. Pentagon spokesperson Kingsley Wilson made the logic explicit: “America’s warfighters supporting Operation Epic Fury and every mission worldwide will never be held hostage by unelected tech executives and Silicon Valley ideology. We will decide, we will dominate, and we will win.” Members of Congress responded by demanding oversight. Rep. Jill Tokuda (D-Hawaii, House Armed Services Committee): “We need a full, impartial review to determine if AI has already harmed or jeopardized lives in the war with Iran. Human judgment must remain at the center of life-or-death decisions” (NBC News, March 2026). China warned against “unrestricted application of AI by the military” and “using AI as a tool to violate the sovereignty of other nations.” The governance vacuum the shade describes is no longer theoretical. It is producing civilian casualties in real time, with international legal frameworks debating language in Geneva while AI-enabled strikes hit targets in Iran.

The most developed proof-of-concept is Taiwan’s vTaiwan platform, launched in 2015 by the civic technology community g0v and built around the open-source deliberation tool Polis. Polis uses a design insight that inverts the logic of social media: there is no reply button. Participants can propose statements and vote on others’ statements, but they cannot argue. The algorithm surfaces consensus across opinion clusters rather than amplifying division. The result is a system that gamifies agreement-finding instead of engagement-maximization. Since its launch, vTaiwan has hosted deliberations on 28 policy issues, with over 80 percent leading to legislative action. In 2023, Taiwan’s Ministry of Digital Affairs partnered with the Collective Intelligence Project (CIP) to launch Alignment Assemblies: 450 citizens selected via stratified random sampling deliberated on AI regulation over six hours, and their consensus informed Articles 30-32 of the Executive Yuan’s draft Fraud Prevention and Control Act. Audrey Tang, Taiwan’s former Minister of Digital Affairs, has argued that this deliberative approach is the optimal response to the AI governance challenge because it is democratic rather than technocratic, and because intelligence, as she puts it, stems from the mind and spaces between people.

The honest limitations deserve equal space. At its peak, vTaiwan had only a few thousand active participants in a country of 23 million. Government agencies are not obligated to use it; adoption depends on individual civil servants choosing to engage. The platform relied heavily on Tang’s personal advocacy and political capital. Taiwan’s success emerged from a specific context: a small democracy, a recent civic mobilization (the 2014 Sunflower Movement), and a tech-literate population. Whether the model transfers to larger, more polarized, less digitally unified democracies is an open question.
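The cluster-then-rank mechanic can be sketched in a few lines. The following is an illustrative simplification, not the actual Polis algorithm: real Polis projects the participant-by-statement vote matrix with PCA and clusters voters with k-means, whereas here cluster labels are supplied directly and the scoring rule (minimum agreement rate across clusters) is a toy version of its group-informed consensus metric. The function name and data are invented for the example.

```python
# Sketch of Polis-style consensus ranking: a statement scores high only
# if EVERY opinion cluster supports it, which is why the system surfaces
# agreement instead of amplifying division. Votes are +1 (agree),
# -1 (disagree), 0 (pass/unseen).

def group_informed_consensus(votes, labels):
    """votes: rows = participants, cols = statements; labels: cluster id per row.
    Returns one score per statement: the minimum agreement rate across clusters."""
    clusters = sorted(set(labels))
    scores = []
    for j in range(len(votes[0])):
        rates = []
        for c in clusters:
            # Ballots cast on statement j by members of cluster c (passes excluded).
            ballots = [row[j] for row, lab in zip(votes, labels)
                       if lab == c and row[j] != 0]
            rates.append(sum(1 for b in ballots if b == 1) / len(ballots)
                         if ballots else 0.0)
        scores.append(min(rates))  # consensus = agreement in the least-agreeing cluster
    return scores

# Two opinion groups that split on statement 1 but both back statement 0:
votes = [
    [1,  1], [1,  1], [1,  1],   # cluster 0
    [1, -1], [1, -1], [1,  0],   # cluster 1
]
labels = [0, 0, 0, 1, 1, 1]
print(group_informed_consensus(votes, labels))  # -> [1.0, 0.0]
```

Statement 0 scores 1.0 because both clusters unanimously agree; statement 1 scores 0.0 despite majority support overall, because one cluster rejects it. Ranking by this score, rather than by raw engagement, is the design inversion the paragraph above describes.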

Taiwan’s experiment demonstrates that democratic deliberation about AI can produce actionable policy. The AI labs have also begun exploring democratic input into model behavior, though with a critical structural difference: the input is advisory, and the labs retain full control over whether and how to act on it.

OpenAI’s “Democratic Inputs to AI” program (2023) funded 10 teams from nearly 1,000 applicants to design public participation processes. The program produced working prototypes, including the Collective Dialogues process that generated policy guidelines with over 72 percent support across Democrats, Independents, and Republicans on divisive issues including vaccine information. OpenAI subsequently created a “collective alignment” team to build these methods into its model development pipeline. Anthropic partnered with the Collective Intelligence Project (CIP) to run a parallel experiment: Collective Constitutional AI, in which approximately 1,000 Americans used Polis to draft a constitution for a language model. The publicly sourced model showed lower bias across nine social dimensions while maintaining equivalent performance on helpfulness and harmlessness benchmarks, demonstrating that deliberated public input can technically steer model behavior without degrading capability. These experiments are real. The limitation is also real. A review published in Patterns (May 2025) identified six assumptions embedded in the OpenAI program, including that participation must be scalable, aimed at a single model, and oriented toward consensus. The Odyssean Institute argued in September 2025 that the experiments function as market research more than governance, noting that OpenAI simultaneously signed a $200 million Department of Defense contract while publicly committing to democratic input processes. TIME posed the question at the center of all these initiatives: if a company consulted the public through a democratic process and was told to stop or slow down, would it do so?

The Collective Intelligence Project is attempting to bridge this gap between consultation and governance at global scale. In 2025, CIP ran six deliberative dialogues across more than 70 countries, engaging over 10,000 participants. Their Weval platform converts public priorities into concrete evaluations that labs and governments can apply to frontier models. In 2026, Global Dialogues becomes standing infrastructure with longitudinal panels and the first annual Global Trust and Expectations Index. This is the closest thing that currently exists to a functioning mechanism for democratic input into AI at international scale. It is also still advisory. No lab or government is obligated to act on its findings.

Democratic governance of AI requires more than input into model behavior. It requires public control over the material infrastructure that makes AI possible. Compute, the processing power needed to train and run advanced AI systems, is dominated by three companies: Amazon, Google, and Microsoft. California’s SB 53, signed in September 2025, established the CalCompute consortium within the University of California system to create a public computing cluster for safe, ethical, and equitable AI research. New York’s Empire AI is a $400 million initiative building public computing infrastructure through a university consortium. The UK’s AI Research Resource and the OECD’s framework for “public AI” represent parallel efforts to ensure that AI development infrastructure has public attributes: open, interoperable, auditable, and transparent. The analogy is public highways rather than private toll roads. The original shade’s adversarial point, that government-run AI would be slower and less capable than private alternatives, applies to the delivery mechanism. It does not apply to the governance of the infrastructure. You can have cheap private AI and still face the power concentration problems that Shades #10 (Corporate Capture) and #17 (Permanent Underclass) describe. The prescription is about who controls the terms under which AI is developed and deployed, not about who runs the servers.

The EU AI Act, which entered into force in August 2024 with provisions phasing in through 2027, represents the largest existing exercise in democratic AI governance through representative institutions. It classifies AI systems by risk, prohibits certain practices (social scoring, manipulative AI, workplace emotion recognition), and imposes strict documentation and conformity requirements on high-risk systems used in employment, credit decisions, education, and law enforcement. Penalties reach up to 7 percent of global annual turnover. The Act’s implementation has been difficult. Standardization bodies missed their 2025 deadlines for producing technical standards. The Commission itself missed guidance deadlines. Industry calls for delay have been strong, and the Commission’s Digital Omnibus proposal (November 2025) may push high-risk obligations to December 2027. At the February 2025 Paris AI Summit, U.S. Vice President JD Vance cautioned that excessive regulation could kill a transformative industry. The EU AI Act is evidence that democratic governance of AI can be legislated. It is also evidence that such governance faces enormous implementation pressure from industry, from transatlantic competition, and from the sheer speed of technological change.

That competitive pressure raises the hardest question for this shade. Democratic AI governance is a framework for democracies. China is building AI governance under an authoritarian model with no deliberative input mechanism, while closing the capability gap with U.S. labs on reasoning and coding tasks and leading global open-source model downloads as of mid-2025. The Atlantic Council forecasts that 2026 will see intensifying competition between U.S. and Chinese “AI stacks,” with the White House making it policy to export the U.S. stack to third-party countries. If democracies slow their AI development through governance processes while authoritarian competitors do not, the dynamics from Shade #7 (The Geopolitical AI Arms Race) apply. The counterargument is that democratic governance may produce more robust and trustworthy AI systems, because broader input catches more failure modes and generates greater public trust in deployment. The EU AI Act’s explicit goal of creating “trustworthy AI” rests on this logic. Whether trustworthiness confers sufficient competitive advantage to offset governance costs is an empirical question that the next decade will answer. The prescription of this shade does not require that democracies unilaterally slow down. It requires that they govern the direction and distribution of AI capability through institutions that derive their authority from the people affected.

The structural tension at the center of this shade was articulated most clearly by Nobel laureate Daron Acemoglu in his ten-point framework on AI and shared prosperity (January 2025) and his March 2026 conversation with Michael Sandel for Project Syndicate. Acemoglu identifies a Catch-22: we need democratic institutions to redirect AI toward pro-worker, pro-public outcomes, but AI is already damaging the democratic institutions needed to provide that redirection. The current ethos among leading AI technologists is, in his framing, anti-democratic: founders and researchers believe that experts (meaning themselves) should make the key decisions, and democratic processes get in the way of necessary acceleration. The thesis of Power and Progress (Acemoglu and Johnson, 2023) applies: technology has never automatically benefited the broad public. Every historical case in which technological gains were widely shared required institutional counterpressure: organized labor, democratic governance, regulatory intervention, all operating against the interests of those who controlled the technology. AI is no exception to this pattern. It is the most consequential test case.

The adversarial case against democratic AI governance deserves more space than the current shade gives it, because the objections are serious. The first is speed: democratic deliberation operates on cycles measured in months or years; AI capability advances on cycles measured in weeks. Governance that cannot keep pace with the technology it governs becomes decorative. The second is competence: a 2025 Carnegie California survey found citizens evenly split on whether AI can help them become more informed voters, and 55 percent were “very concerned” about AI-generated content heightening political violence. The public is uncertain about its own capacity to govern a technology it does not fully understand. A Yale ISPS study found that providing detailed information about AI use in governance can overwhelm and confuse the public, creating a “transparency dilemma.” The third is capture: government-managed AI infrastructure would be subject to political interference, regulatory capture, and the bureaucratic inertia that has historically limited public technology programs. The fourth is demonstrated preference: when given the choice, most people use the most capable AI available, which is currently private. Open-source models are free, improving rapidly, and by late 2025 rivaled closed models on most key benchmarks while costing nothing to download. The market may deliver universal access faster than any public program can.

These objections have force, and the responses are less clean than advocates of democratic AI governance sometimes admit.

Speed: Taiwan’s Alignment Assemblies produced actionable consensus in a single six-hour session, but that was one deliberation on one narrow topic. Governing AI at the pace of capability development would require something closer to a standing deliberative body with real-time authority, and nothing like that exists. The honest answer is that governance will always lag capability by some interval, and the question is whether that interval is months or decades.

Competence: the public does not need to understand transformer architecture to have legitimate preferences about whether AI should be used in hiring, criminal sentencing, or political communication. Governance is about values and priorities. But most consequential AI governance decisions sit at the intersection of values and technical specification: whether a particular bias detection method works, whether a safety evaluation is meaningful, whether a model capability threshold is set correctly. Democratic governance needs technical infrastructure to translate public values into technical constraints, and that infrastructure is in its infancy. The Anthropic CCAI experiment showed this translation is technically feasible, but it operated on a single model in a controlled setting.

Capture: private AI is already captured, by the commercial incentives of the companies that build it. Acemoglu’s framing is direct: the current AI ethos is anti-democratic, with leading technologists believing that experts should make all key decisions. The choice is between commercially captured and democratically accountable AI. But democratic accountability is only as strong as the institutions providing it, and those institutions are under strain from the very dynamics this collection describes.

Demonstrated preference: universal access to tools does not equal governance of the infrastructure. Every citizen has access to electricity without controlling the grid. The analogy holds, but it also reveals the difficulty: utility regulation took decades to develop, and energy companies fought it at every stage.

The 30 percent likelihood reflects the difficulty. This shade requires overcoming the very power concentrations that Shades #10, #14, and #17 describe. It requires democratic institutions to act at a moment when those institutions are under pressure from polarization (#18), information collapse (#5), and cognitive atrophy (#6). It requires AI companies to accept constraints on their autonomy at the moment of their greatest leverage and profitability. The governance dividend is not a number but a designation: active creation. This is the shade that the rest of the collection is written to motivate.

The “Cognitive Bill of Rights” in the title names what the democratic governance framework would need to protect. The concept is nascent, but its core elements are emerging from the experiments described above: the right to know when you are interacting with an AI system (already codified in the EU AI Act’s transparency requirements), the right to a human decision-maker in high-stakes contexts (employment, credit, criminal justice), the right to access equivalent AI tools regardless of income (the prescription of #19), and the right to meaningful participation in the governance of AI systems that affect your life (the prescription of this shade). None of these rights currently exist in binding form outside the EU. All of them are technically achievable. The gap is political.

Every other shade in this collection describes a failure mode. This shade describes what those failure modes look like when met by institutional counterpressure. The outcome would not be elegant. Democratic AI governance would be slow, contested, imperfect, and perpetually under pressure from commercial interests and geopolitical competition. The alternative is governance by default: the companies that build the technology set the terms, the market distributes access, and the public experiences the consequences without a formal mechanism for shaping them. The Anthropic-Pentagon dispute demonstrated what that alternative looks like in practice: the rules governing whether AI can be used for mass surveillance of American citizens were decided in a contract negotiation between a defense secretary and a CEO, with Congress absent, the public uninformed, and the outcome determined by which company was willing to say yes.

Key tension: The experiments in democratic AI governance are real, producing results, and insufficient. The structural gap between advisory input and binding authority remains enormous. The EU AI Act proves governance can be legislated; its implementation struggles prove legislation is the beginning of the problem, not the end. Closing the gap between consultation and control requires unprecedented coordination at the moment when the incentives against coordination are strongest, and when the geopolitical competition for AI dominance penalizes any democracy that governs more slowly than its authoritarian competitors.