Essay III

On the Automation of Power

What happens when AI makes the decisions and democracy ratifies them

~49 min read


I.

On February 24, 2026, Defense Secretary Pete Hegseth sat across from Dario Amodei, the chief executive of Anthropic, and gave him a deadline. By 5:01 p.m. on Friday, February 27, Anthropic would agree to let the Pentagon use its AI technology “for all lawful purposes,” removing the two restrictions the company had maintained since signing a $200 million contract with the Department of Defense eight months earlier. The restrictions were specific: Claude, Anthropic’s frontier AI model and the first to operate on the Pentagon’s classified networks, would not be used for mass surveillance of American citizens, and would not be used for fully autonomous weapons systems without a human making the final decision.1

Anthropic refused. On February 27, President Trump directed all federal agencies to cease using Anthropic’s technology. Hegseth designated the company a “supply chain risk to national security,” a classification previously reserved for firms associated with foreign adversaries like Huawei.2 He added secondary restrictions: no contractor, supplier, or partner doing business with the United States military could conduct commercial activity with Anthropic. The Pentagon reportedly considered invoking the Defense Production Act to compel delivery of Claude without guardrails.3

Within forty-eight hours, OpenAI signed a replacement contract with the Department of Defense, agreeing to “any lawful purpose” terms.4 Anthropic filed two lawsuits on March 9. Dozens of scientists and researchers at OpenAI and Google DeepMind filed an amicus brief in their personal capacities supporting Anthropic’s position, arguing that the supply-chain risk designation could harm U.S. competitiveness and that the company’s red lines reflected legitimate concerns. Nearly 150 retired federal and state judges filed a separate brief.5 Tech industry groups representing hundreds of defense contractors urged the court to pause the designation. A hearing was scheduled for March 24 before Judge Rita Lin in San Francisco.

The dispute was dramatic enough to dominate headlines for weeks. What it revealed was more important than what it resolved. Dean Ball, a senior fellow at the Foundation for American Innovation who had worked on the Trump administration’s AI Action Plan, told the New York Times: “This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems.”6

He was right about the significance and wrong about the venue. The conversation was happening in a courtroom and on cable news. It was not happening in Congress. The institution designed by the Constitution to make exactly this kind of decision (to set the boundaries of military technology, to balance national security against civil liberties, to resolve disputes between private enterprise and state power through legislation) had not acted. In the entirety of 2025, Congress passed exactly one standalone AI-specific federal statute: the TAKE IT DOWN Act, which criminalized nonconsensual intimate deepfake images.7 No comprehensive AI governance framework. No regulatory agency with a mandate over frontier systems. No legislation defining the conditions under which AI could be deployed in military operations.

This is not to say that no governance exists. Federal agencies have applied existing authorities to AI within their domains. The Federal Trade Commission has pursued enforcement actions against deceptive AI practices under Section 5’s prohibition on unfair and deceptive acts. The Equal Employment Opportunity Commission has issued guidance on algorithmic discrimination in hiring. The Consumer Financial Protection Bureau has warned lenders that AI-driven credit denials must comply with fair lending law. The FDA has authorized over 1,250 AI-enabled medical devices under its pre-market review framework. Internationally, the OECD has published AI principles adopted by over 40 countries. The G7 established the Hiroshima AI Process in 2023. The UN convened an AI advisory body that issued recommendations in 2024. The EU AI Act represents the most comprehensive regulatory attempt anywhere.8

The efforts are real. They are also, collectively, insufficient to the scale of the challenge, for a specific reason: each agency can only govern AI within the boundaries of its existing statutory mandate, using authorities designed for pre-AI technologies. The FTC can pursue deceptive practices but cannot require pre-deployment safety testing. The EEOC can issue guidance but cannot mandate algorithmic audits. The CFPB can enforce fair lending but cannot set standards for the AI systems that now make lending decisions. No agency has the mandate, the technical capacity, or the legal authority to govern frontier AI systems as such. The international frameworks are advisory, nonbinding, and unenforceable. The result is a patchwork of domain-specific interventions, each addressing a fragment of the problem within institutional boundaries that were drawn before the problem existed. The most consequential technology since nuclear weapons is being governed by contract disputes between a corporation and the executive branch, with the judiciary asked to referee.

The Anthropic dispute laid bare a structural condition that this essay examines: neither side of the standoff represented democratic governance. Anthropic’s board deciding that its technology would not be used for autonomous weapons is not democratic accountability. It is corporate ethics, applied unilaterally, revocable at any time by a change in leadership or a shift in competitive pressure. The executive branch coercing compliance through supply-chain designations, DPA threats, and secondary sanctions is not democratic accountability either. It is executive power exercised without legislative authorization, through procurement mechanisms designed for a different purpose.

The procurement mechanism deserves particular attention, because it has become the primary instrument through which the executive branch governs AI. The supply-chain risk designation used against Anthropic was created by Congress to address foreign adversaries embedding compromised components in defense systems. Hegseth repurposed it against a domestic company for refusing to remove ethical restrictions. The designation did not merely end Anthropic’s government contract. It prohibited any defense contractor or partner from doing commercial business with the company, threatening to cascade through Anthropic’s entire commercial base.9 The Center for American Progress noted that when the same authority was applied to Huawei, it required an act of Congress. When applied to Anthropic, it required only a defense secretary’s signature. Procurement, in the absence of legislation, has become the de facto regime of AI governance: the executive branch uses contract terms, supply-chain designations, and purchasing decisions to set the rules that Congress has not written. The governed have no vote, no hearing, and no appeal beyond the courts.

Sam Altman captured the tension in a public exchange: “You should be terrified of a private company deciding on what is and isn’t ethical in the most important areas.”10 He was right. He was also describing OpenAI’s own position, having just signed the contract that Anthropic refused, under terms that legal experts described as difficult to enforce in any meaningful way.

The question the dispute raises is not who was right, Anthropic or the Pentagon. The question is why the institution that should have answered it, long before it became a crisis, had not.

II.

The conventional framing of AI and democracy treats AI as a technology to be governed. Like nuclear energy. Like pharmaceuticals. Like securities markets. The question becomes: can democratic institutions regulate this thing fast enough? The answer, delivered by hundreds of reports, conferences, and declarations, is that they cannot, and that they must try harder.

This framing misses what has already happened. AI is not waiting to be governed. It is governing. That sentence requires a clarification that matters for everything that follows: the agency is not in the systems. It is in the deployment decisions. Humans at specific companies and agencies chose to deploy specific systems under specific conditions. But the cumulative effect of those individually reasonable deployment decisions is that AI systems now perform governance functions (allocating resources, distributing opportunity, imposing consequences) at a speed and scale that outstrips the democratic oversight that should, in principle, constrain them. The problem is not that machines have seized power. The problem is that humans have delegated it, incrementally, without ever deciding to.

An AI system decides whether a loan application is approved or denied. An AI system determines which job applicants reach a human recruiter and which are filtered out.11 An AI system selects the news, analysis, and opinion that a citizen sees, shaping their understanding of the world. An AI system identifies targets for military strikes, processes intelligence from satellites and intercepted communications, and generates recommendations that a human commander approves under time pressure measured in minutes. An AI system monitors employee productivity, flags suspicious financial transactions, scores creditworthiness, prices insurance risk, and determines which neighborhoods receive police attention.

These are governance functions. They allocate resources. They distribute opportunity. They impose consequences. They determine, for millions of people, the practical conditions of daily life: who gets capital, who gets hired, who gets surveilled, what information is available, which targets are struck. Each of these decisions is consequential. Taken together, they constitute a system of allocation and control that operates alongside formal democratic governance and, in many domains, has already superseded it in speed, scale, and practical impact.

None of it was specifically authorized by democratic process. A critic might object that existing legal frameworks (banking regulation, procurement law, military authority) implicitly authorize many of these deployments. The objection has technical merit and misses the structural point. The legal frameworks under which AI systems operate were designed for human decision-makers operating at human speeds with human accountability. They authorize a loan officer to approve a loan, not an algorithmic system to approve ten thousand loans per hour using criteria no human can fully articulate. They authorize a military commander to select targets, not an AI system to generate a thousand targeting recommendations per day under operational tempo that collapses the distinction between recommendation and decision. The authorization exists. It is misaligned with what it now authorizes. And no legislature has revisited the frameworks to close the gap.

The result is not that democracy has been overthrown. The result is that a parallel governance system now operates alongside it. Formal governance (the kind conducted by legislatures, regulatory agencies, and courts) continues. It is slow, deliberate, transparent in principle if not always in practice, and accountable to citizens through elections. Algorithmic governance is fast, opaque, unaccountable, and increasingly determinative of outcomes that matter. The two systems coexist. Where they conflict, speed wins. The algorithm has already decided by the time the legislature convenes.

The objection to this framing is that AI recommends and humans decide. The loan officer clicks approve. The commander authorizes the strike. The hiring manager reviews the shortlist. The human remains in the loop. The objection would be persuasive if the loop functioned as described. Research on automation bias, the documented tendency of humans to defer to automated recommendations even when those recommendations are wrong, suggests it does not.12 When a system processes a thousand targeting recommendations per day, or screens ten thousand loan applications per hour, or filters fifty thousand resumes per quarter, the human in the loop is not exercising independent judgment over each decision. The arithmetic forecloses it: a thousand recommendations a day allots a single reviewer, working around the clock, under ninety seconds per decision; ten thousand applications an hour allots a fraction of a second. The human is ratifying outputs at a volume and speed that precludes the deliberation the word “decision” implies. The formal authority remains with the human. The effective authority has migrated to the system. The distinction between recommendation and decision, which is the entire basis for the claim that humans remain in control, erodes under the operational conditions in which these systems actually function.

III.

The absence of democratic governance over AI is not an oversight. It is an equilibrium. Three structural forces produce it and sustain it, and understanding them matters more than any proposal to fill the vacuum, because every proposal must contend with the forces that created the vacuum in the first place.

The first force is regulatory capture by necessity. The entities that need to be governed are the only entities with the technical capacity to advise the governors. The Food and Drug Administration employs thousands of scientists who understand pharmacology. The Securities and Exchange Commission employs economists who understand derivatives. No federal agency has the in-house capacity to independently evaluate the safety properties of a frontier language model at the depth available to the companies that build them. NIST, DARPA, and parts of the Department of Defense employ relevant researchers, but the concentration of frontier expertise in a handful of private labs dwarfs anything the government can field. Congress depends on the companies it is supposed to regulate for the technical knowledge it would need to write the rules. This is not corruption in the traditional sense. No one is being bribed. The dependency is structural. The governed understand the system better than the governors, and the governors have no independent means of closing the gap.13

The lobbying numbers make the dependency visible. In 2025, lobbyists represented 774 organizations on AI issues and filed more than 3,500 reports mentioning artificial intelligence, increases of 423 percent and 505 percent respectively from 2020.14 More than 3,500 individual lobbyists, a quarter of all federal lobbyists, reported lobbying on AI issues at least once. Of the top 100 entities hiring AI lobbyists, 91 represented corporate interests.15 The Chamber of Commerce deployed more than 90 lobbyists on AI policy. Microsoft deployed 63. Meta deployed 55. Amazon deployed 48. On the other side, the Leadership Conference on Civil and Human Rights, the most prominent civil society voice on algorithmic bias, reported $1.8 million in expenditures.16 The asymmetry is not subtle. The entities writing the policy are the entities the policy is supposed to constrain.

The second force is competitive selection against safety. The Anthropic dispute is the proof case. Anthropic imposed two restrictions on military use of its technology. It lost its contract within forty-eight hours, was designated a national security risk, and faced secondary sanctions that threatened hundreds of millions in commercial revenue. OpenAI, which agreed to remove equivalent restrictions, received the contract. The market and the state selected, simultaneously and independently, against the actor that tried to set limits. This is not an aberration. It is the structural logic of a competitive environment. In a market with multiple providers, the one willing to accept fewer constraints wins. In a geopolitical context where adversaries face no constraints, the state will demand that its providers accept none either. The distinction that matters, and that the dispute obscured, is between reliability constraints (which the military needs and demands, because a targeting system that hallucinates is a liability) and ethical constraints (which Anthropic imposed, concerning the purposes for which a reliable system could be used). The Pentagon did not want an unreliable system. It wanted a reliable system without ethical limits on its application. The competitive selection operates specifically against ethical constraints, not technical ones, which is what makes it so difficult to resist: the provider that removes ethical limits while maintaining technical performance wins every time. Pentagon spokeswoman Kingsley Wilson made the logic explicit: “America’s warfighters will never be held hostage by unelected tech executives and Silicon Valley ideology.”17

The competitive selection runs at every level. Domestically, states that pass AI safety legislation face the threat of federal preemption. In December 2025, President Trump signed an executive order directing the Attorney General to identify and challenge state AI laws, conditioning federal broadband funding on states not enacting “onerous” AI regulations, and mobilizing the Department of Justice to litigate against state laws that the administration deemed obstacles to innovation.18 The Senate had rejected, 99-1, a proposed moratorium on state AI regulations.19 Republican congressional leaders abandoned an attempt to include AI preemption in the National Defense Authorization Act. The executive branch, unable to achieve preemption through legislation, pursued it through executive order, litigation, and funding conditions. The states that tried to govern AI were punished for trying. Internationally, Lancieri, Edelson, and Bechtold documented the same dynamic across jurisdictions: regulatory competition produces a race to the bottom, where the jurisdiction with the weakest constraints attracts the most investment and sets the effective standard.20

The third force is the near-universal benefit of inaction among powerful actors. The governance vacuum is not an accident that everyone wants to fix. It is a condition that serves the interests of every actor except the public. Companies benefit because they can deploy without constraints. The executive branch benefits because it can act without legislative oversight, exercising power through procurement, executive orders, and agency guidance rather than through laws that require votes and create accountability. Legislators benefit because they avoid casting votes on technically complex, politically charged issues where any position creates enemies.

The public’s position is more complicated than simple victimhood. Citizens are also beneficiaries of algorithmic governance, in their capacity as consumers. The loan decision is faster. The medical diagnosis is cheaper. The recommendation is more personalized. The service is available at 3 a.m. The convenience is real, and it is part of why there is no political uprising despite overwhelming poll numbers. Citizens are trading democratic substance for consumer efficiency, but they are not experiencing it as a trade. They are experiencing it as progress. The cost (the transfer of consequential decisions from accountable institutions to opaque systems) is structural and diffuse. The benefit (the faster service, the cheaper product, the frictionless interface) is personal and immediate. The asymmetry between diffuse cost and immediate benefit is the oldest problem in democratic politics, and it operates here with particular force because the benefit is delivered by the same systems that impose the cost.

Polling conducted in early 2026 found that 95 percent of Americans oppose an unregulated race to superintelligence, a figure cited by MIT physicist Max Tegmark in connection with the Pro-Human Declaration.21 Even accounting for the framing effects inherent in any poll on a topic this charged, the directionality is overwhelming. One federal AI law has been enacted. The gap between public preference and legislative action is not a mystery. It is the predictable output of the three forces operating simultaneously: the governors depend on the governed for expertise, the market and the state select against constraint, and every powerful actor benefits from the vacuum. The vacuum is the equilibrium. Breaking it requires overcoming all three forces at once.

IV.

One day after the Anthropic standoff ended in blacklisting and competitive substitution, the consequences of the governance vacuum arrived. On February 28, 2026, the United States and Israel launched Operation Epic Fury against Iran. It was the first large-scale battlefield deployment of AI-enabled targeting.

In the first twenty-four hours, American forces struck approximately 1,000 targets. Within eleven days, the number exceeded 5,500. By the third week, it surpassed 8,000.22 Admiral Brad Cooper, commander of U.S. Central Command, credited “advanced AI tools” with turning “processes that used to take hours and sometimes even days into seconds.” He added the assurance that would become the campaign’s recurring refrain: “Humans will always make final decisions on what to shoot and what not to shoot and when to shoot.”23

The AI system at the center of the campaign was the Maven Smart System, developed by Palantir, with Anthropic’s Claude embedded to process and summarize intelligence from the field. The system received raw data from satellites, drones, intercepted communications, hacked traffic cameras, and decades of archived intelligence. It compiled targeting packages, assigned strike assets, and assessed damage. Retired Navy Admiral Mark Montgomery described the operational tempo: “The military is now processing roughly a thousand potential targets a day and striking the majority of them, with turnaround time for the next strike potentially under four hours. A human is still in the loop, but AI is doing the work that used to take days of analysis.”24

The formal claim that humans make final decisions deserves examination against the operational reality. Dr. Peter Bentley, a computer scientist at University College London, warned that current AI targeting systems are “immature, prone to errors and hallucinations” and that deploying them in major combat operations was premature.25 Nilza Amaral of the Chatham House think tank identified the mechanism by which formal human control erodes under speed: “There’s a concern that targeting could end up just being a mere formality because of the automation bias, where people are just relying on what the machine is telling them.”26 Reduced time for reflection undermines safeguards. When the system generates a thousand targeting recommendations per day and the operational tempo demands decisions in minutes, the human in the loop is not deliberating. The human is ratifying.

On February 28, the first day of the campaign, a strike hit the Shajareh Tayyebeh primary school in Minab, Iran. At least 165 people were killed, the majority of them children.27 The Pentagon opened an investigation into whether the AI targeting system played a role. The question itself, whether an AI system recommended a school full of children as a military target, is the governance question in its most concentrated form. Who authorized the use of this system? Under what constraints? With what accountability mechanism when it fails? The answer, as of March 2026, is that no legislature authorized it, no regulatory framework constrains it, and the accountability mechanism is a military investigation of itself.

The strongest counterargument is that AI targeting may reduce civilian harm relative to the alternative. Human analysts operating under the same tempo, with the same volume of data, would make more errors, not fewer. The argument has plausibility. But it does not resolve the governance question. Even if the system performs better than the alternative, the question of who authorized its deployment, under what constraints, and with what accountability when it fails remains unanswered. Performance and authorization are different problems. A system that kills fewer civilians than a human analyst but was never democratically authorized to make targeting recommendations is not a system that democratic governance can accept on performance grounds alone. The efficiency of the tool does not substitute for the legitimacy of its use.

A study published the same week by Professor Kenneth Payne at King’s College London provided experimental evidence that, while generated in simulated rather than operational conditions, reinforced what the Iran war demonstrated in practice. Payne placed three frontier AI models (GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash) in 21 simulated nuclear crisis scenarios and found that nuclear escalation occurred in 95 percent of games. No model ever chose accommodation or surrender. Eighty-six percent of conflicts featured unintentional escalation, where models took more severe actions than they had intended under fog-of-war conditions.28 The most policy-relevant finding concerned what Payne called “the deadline effect”: GPT-5.2 appeared restrained in open-ended scenarios but, when explicit time pressure was introduced, escalated sharply and in some cases reached the highest nuclear thresholds. A model that appeared cautious under one framing became aggressive under another. The parallel to the Iran war is direct. When operational tempo compresses the time available for human review, the system’s recommendations become the effective decisions. Admiral Cooper’s assurance that “humans will always make final decisions” assumes a deliberation speed that a thousand targets per day does not permit.

The Iran war is not a cautionary tale about what might happen if AI is used in military operations without democratic oversight. It is the evidence of what has already happened.

V.

Three properties of AI systems make them structurally difficult for democratic governance to control. Understanding why is essential, because proposals for governance that do not account for these properties will fail for the same reasons that previous proposals have failed.

The first is speed. AI systems operate on timescales of milliseconds to hours. Democratic governance operates on timescales of months to decades. Legislation moves through committees, floor votes, conference negotiations, and executive signatures. By the time a law is enacted, the technology it was designed to govern has been superseded. The EU AI Act, the most ambitious regulatory framework attempted anywhere, was proposed in April 2021, agreed upon in December 2023, and has transparency provisions that will not take full effect until August 2026.29 In the five years between proposal and enforcement, the technology transformed from generating plausible text to operating autonomous agents, running on classified military networks, and conducting battlefield targeting. The law was designed for a technology that no longer exists in the form the law anticipated.

The speed mismatch operates in a specific direction. When an AI system makes a decision, there is no mechanism for a legislature to review it before it takes effect. There is, in principle, a mechanism for a legislature to review it after. But “after” may mean after the loan is denied, after the applicant is rejected, after the target is struck, after the school is bombed. Democratic accountability that operates only in retrospect, and only when the consequences are severe enough to attract attention, is not governance. It is forensics.

The second property is concentration. Frontier AI is controlled by a small number of entities.30 Their decisions about what to build, how to deploy it, whom to sell it to, and what restrictions to impose (or not) affect more people, more immediately, than most legislation. When OpenAI agreed to deploy its technology on classified military networks without the restrictions Anthropic had insisted on, that decision, made by a corporate board in a matter of days, had direct implications for how the United States conducts warfare. No legislator voted on it. No citizen was consulted. The decision was made in the same way any commercial contract is made: by the parties to the deal, subject to the terms they negotiate, accountable to their respective hierarchies.

Gillian Hadfield and Steven Koh identified the deeper structural problem in their analysis of AI agents and incomplete contracts. When AI systems negotiate, transact, and adapt at machine speed inside processes that are opaque even to the parties deploying them, the assumptions on which democratic governance rests (that consequential decisions are legible, that affected parties can contest them, that deliberation can occur before action) no longer hold.31 Democratic governance was designed for a world in which the speed of consequential decisions was limited by the speed of human cognition and institutional process. AI removes that limit. The decisions do not slow down to the speed of governance. Governance, if it is to remain relevant, must find a way to operate at the speed of the decisions, or accept that the decisions will be made without it.

The third property is opacity. Democratic governance assumes that citizens can, in principle, understand the decisions made on their behalf, evaluate their consequences, and hold decision-makers accountable. AI systems are opaque in ways that no previous governance challenge has been. A citizen cannot read the weights of a neural network the way they can read a statute. A legislator cannot inspect the training data of a frontier model the way they can review the evidence submitted in a regulatory proceeding. The opacity is partial but consequential: engineers can describe the architecture, document the training data, and measure aggregate performance, but the relationship between a specific input and a specific output in a high-dimensional model, the question a court would need to answer to determine why this loan was denied or why this target was selected, remains opaque even to the system’s creators. The explanations that can be offered are statistical approximations, not causal accounts. For governance purposes, the distinction matters: a citizen challenging a decision needs to know why the decision was made, not what the system does on average.

Opacity does not merely make oversight difficult. It reverses the relationship between the governor and the governed. In functioning democratic systems, the government is transparent to the citizen: laws are published, proceedings are recorded, officials are accountable. The citizen is opaque to the government: privacy protections, due process, and limits on surveillance preserve a zone of individual autonomy. AI systems invert this. The system is opaque to the citizen (and to the legislator, and to the regulator). The citizen is transparent to the system, which can access their financial records, communication patterns, location history, browsing behavior, and social connections. The watcher cannot be watched. The governed cannot understand the governor. This inversion is not a bug in the implementation. It is the architecture.

Democratic governance has adapted to opaque systems before. Financial derivatives, intelligence agencies, and nuclear command structures all operate with limited public transparency, and democracies have developed proxy mechanisms (audits, inspections, oversight committees, judicial review) to govern them. AI opacity does not make governance impossible. It raises the cost and complexity of governance by an order of magnitude, because the proxy mechanisms that work for finance and intelligence assume a human decision-maker whose reasoning can, in principle, be reconstructed after the fact. When the decision-maker is a system whose outputs cannot be causally traced to specific inputs even by its creators, the proxy mechanisms must be redesigned from scratch.

The strongest counterargument to the claim that these properties make AI ungovernable is historical. Democratic governance has always lagged major technologies and eventually caught up.

Nuclear weapons were deployed in August 1945. The Atomic Energy Act creating civilian oversight was passed in August 1946. The Nuclear Regulatory Commission was not established until 1975, thirty years later. The internet went commercial in the mid-1990s. Meaningful data protection regulation (GDPR) arrived in 2018, more than twenty years later. In both cases, the catastrophic scenarios predicted during the governance lag did not materialize at full scale, and democratic institutions adapted, imperfectly but functionally.

The closer analog, and the more instructive one, is high-frequency trading. When algorithmic trading systems began operating at microsecond speeds in the 2000s, they shared two of the three properties described above: speed that outpaced human oversight, and opacity that made the systems difficult for regulators to inspect or understand. The 2010 Flash Crash, in which the Dow Jones dropped nearly 1,000 points in minutes before recovering, demonstrated the consequences of governing at human speed while the system operates at machine speed.32 Regulators responded with circuit breakers, order-to-trade ratios, and enhanced surveillance. The responses were partial and lagging. Fifteen years later, the SEC’s capacity to monitor algorithmic trading still trails the sophistication of the systems it oversees. But the comparison also reveals the limit of the analogy: high-frequency trading operates within a bounded domain (financial markets) with a defined set of participants, established institutional oversight (the SEC, FINRA), and decades of regulatory infrastructure. Frontier AI operates across every domain simultaneously, from lending to targeting to content to hiring, with no equivalent institutional infrastructure in any of them.

The argument that AI is different rests on the three properties described above, operating in combination. Nuclear technology was slow to proliferate because it required state-level resources, giving governance decades to catch up. AI proliferates at the speed of software distribution. Internet governance lagged because the technology was primarily a communication medium; the governance functions it performed (content moderation, algorithmic curation) emerged gradually and remained secondary to its communication function for years. AI performs governance functions from the moment of deployment. High-frequency trading shared the speed and opacity but operated in one domain; AI shares them and operates in all domains. The lag period for nuclear and internet governance was a period of ungoverned technology. The lag period for AI governance is a period of ungoverned governance: consequential allocative decisions are being made, at scale, right now, with no democratic authorization or oversight. The analogy to previous technologies holds only if one assumes the thing being governed is the technology itself, rather than the decisions the technology is making.

VI.

The preceding essays in this collection traced a pattern that now completes itself.

The first essay described an economic transformation with three possible outcomes: a K-shaped default in which AI productivity gains concentrate at the top, a managed transition requiring public ownership and progressive taxation, and a post-work society requiring the most ambitious institutional construction in modern history. Every response beyond the default requires collective action: legislation, international coordination, new institutional design.

The second essay showed that the epistemic commons on which collective action depends is degrading. The triple asymmetry (fabrication overwhelming verification, synthetic participants contaminating collective sense-making, sycophantic AI selecting for comfort over truth) does not merely corrupt the information environment. It disables the population’s capacity to agree on what is happening, which is the prerequisite for agreeing on what to do about it. The K-shaped economy wins by forfeit because the epistemic crisis prevents the coalition that could choose a different path from forming.

This essay adds the third layer. Even if the population could agree on what is happening, and even if it could form a coalition to respond, the institution through which democratic societies translate collective will into binding action is structurally incapable of governing AI. The legislature lacks technical expertise and depends on the entities it would regulate for knowledge. The competitive environment punishes any actor that imposes constraints. The governance vacuum benefits every powerful actor except the public. And the three properties of AI (speed, concentration, and opacity) make it structurally resistant to the kind of oversight that democratic governance provides.

The economic transformation has exits. The epistemic crisis threatens to make them invisible. The governance vacuum threatens to make them unreachable. Each essay removes a layer of possibility. The remaining question is whether any layer can be rebuilt in time.

VII.

The fork is whether independent democratic institutions capable of governing AI are built before algorithmic governance becomes the permanent default. The word “permanent” matters. Algorithmic governance, once entrenched, creates its own constituency: the companies that profit from it, the executives who exercise power through it, the legislators who avoid accountability by deferring to it. Each year the vacuum persists, the cost of filling it rises and the political will to fill it diminishes.

Three paths lead from here. They are not mutually exclusive. Different domains (military, financial, medical, civic) may follow different trajectories, and a single society may exhibit elements of all three simultaneously. But the branching logic is real: the structural forces push toward one trajectory or another, and the dominant trajectory determines the character of governance in the society as a whole.

VIII.

The first path is the continuation of current trends. It requires no legislation, no institutional innovation, no political will. It is what happens if the forces that produced the governance vacuum continue to operate under current incentives.

On this path, algorithmic governance quietly displaces democratic governance across an expanding range of decisions. AI systems determine creditworthiness, hiring outcomes, insurance pricing, criminal sentencing recommendations, content visibility, surveillance targeting, and military strike selection. Each deployment is individually defensible: faster, cheaper, more consistent than human decision-making. Taken together, they constitute a transfer of governance authority from institutions accountable to citizens to systems accountable to their operators.

The formal structures of democracy persist. Elections are held. Legislatures convene. Courts hear cases. But the decisions that determine the practical conditions of citizens’ lives are increasingly made by systems that operate outside democratic oversight. The legislature debates AI policy while AI systems decide who gets a mortgage. The executive branch issues executive orders about AI while AI systems generate targeting recommendations for military strikes. The gap between what democratic institutions formally control and what actually determines outcomes widens with each deployment, each quarter, each fiscal year.

This is not authoritarianism in any recognizable form. There is no dictator. There is no surveillance state in the Orwellian mold. There is, instead, a gradual hollowing: the form of democracy intact, the substance progressively transferred to systems that no citizen authorized and no institution controls. The displacement feels like progress, because each individual AI deployment solves a real problem. The loan decision is faster. The targeting is more precise. The hiring process is more consistent. The aggregate effect, a civilization in which the consequential decisions are made by systems outside democratic control, is visible only from a distance that no individual decision permits.

The displacement is already visible in civilian governance. Germany’s AI-powered welfare qualification system, deployed to automate benefit eligibility assessments, produced a pattern that illuminates the first path’s logic: 83 percent of applicants whose claims were rejected by the automated system reported that they could not understand the reasons for the rejection. Administrative lawsuits challenging automated eligibility decisions increased by 300 percent.33 The system was faster and cheaper than human caseworkers. It was also illegible to the citizens it governed. The democratic accountability that welfare systems are supposed to provide, the ability of a citizen to understand, challenge, and appeal the decisions that determine their access to public resources, had been replaced by an algorithmic process that was efficient, opaque, and, for the people on the receiving end, indistinguishable from arbitrary power. No legislature voted to eliminate the right to a comprehensible explanation. The right eroded because the system that replaced human judgment did not include comprehensibility in its design, and no institution required it to.

The December 2025 executive order on AI preemption is a marker on this path. Unable to achieve preemption through legislation (the Senate rejected it 99-1), the executive branch pursued it through executive action: directing the Department of Justice to challenge state AI laws, conditioning federal funding on policy alignment, and creating a litigation task force to identify and contest regulations the administration deemed obstacles to innovation.34 The states that had attempted to govern AI, Colorado with its algorithmic discrimination statute, California with its AI transparency requirements, were not defeated in legislative debate. They were targeted through executive power, federal funding pressure, and litigation. The democratic process at the state level produced AI governance. The executive branch at the federal level worked to undo it.

On the first path, this pattern generalizes. Where democratic processes produce constraints on AI, those constraints are overridden by executive action, preempted by federal authority, or rendered irrelevant by the speed at which AI systems deploy and evolve. The constraints that survive are voluntary: corporate commitments to safety principles, self-regulatory frameworks, industry best practices. Voluntary constraints have a structural problem that the Anthropic case makes explicit. They survive only as long as the entity imposing them can afford the competitive cost. In a market where the next provider will accept fewer constraints, voluntary limits are temporary by design.

IX.

The second path requires building democratic institutions that can govern at the speed and scale AI demands. This is not regulation in the conventional sense. It is institutional construction: creating agencies with technical capacity, independence, clear mandates, and the authority to act on timescales that matter.

The closest analogy is the Federal Reserve. When the complexity of monetary policy exceeded the capacity of Congress to manage it in real time, Congress did not attempt to legislate interest rates. It created an independent agency with technical expertise, gave it a mandate (stable prices and maximum employment), insulated it from political pressure, and held it accountable through transparency requirements, congressional testimony, and the appointment process. The Federal Reserve makes consequential decisions that affect every American, at speeds that no legislature could match, within boundaries that democratic process sets and periodically revises. It is imperfect. It is also the reason monetary policy is not conducted by congressional vote.

AI governance requires an equivalent: a technically capable body with the authority to set deployment conditions for high-risk AI systems, the independence to resist both corporate pressure and executive overreach, and the mandate to protect the public interest in domains where AI systems perform governance functions. The body would not build AI systems. It would set the conditions under which they operate: pre-deployment testing requirements, mandatory auditing, transparency obligations, human oversight standards in high-stakes domains, and the authority to suspend systems that fail to meet safety criteria. The FDA does not develop drugs. It ensures that the drugs that reach the public have been tested and meet safety standards. An AI governance agency would perform the equivalent function for systems that make consequential allocative decisions.

The FDA itself offers evidence that domain-specific technical regulatory capacity can be built. As of mid-2025, the agency had authorized more than 1,250 AI-enabled medical devices, developed a risk-based credibility assessment framework for AI models used in drug development, and deployed its own internal AI tools for scientific review.35 The model works within a defined domain because the FDA has a clear statutory mandate, decades of accumulated expertise, and a cultural norm of pre-market review that industry accepts as legitimate. The challenge for frontier AI governance is that no equivalent agency exists, no statutory mandate has been enacted, the domain is not bounded, and the industry norm is deployment first, review never. Building the modern FDA took decades and required successive crises (the 1937 sulfanilamide deaths that produced pre-market safety review, the thalidomide disaster that produced efficacy requirements) to generate the political will.36 The question is whether democratic societies can build an equivalent institution for AI governance before the crisis that would generate the will, and whether the crisis, when it arrives, will still leave democratic institutions intact enough to respond.

Taiwan offers a proof of concept for the democratic innovation this path requires. The vTaiwan process, developed after the Sunflower Movement of 2014, uses a combination of online deliberation (through the Polis platform, which identifies consensus across opinion groups rather than amplifying division), expert consultation, and face-to-face discussion to produce policy recommendations on technically complex issues. More than 28 cases have been discussed through the process, and 80 percent have led to government action.37 In 2023, vTaiwan participated in OpenAI’s “Democratic Input to AI” initiative, developing guiding principles for AI governance through citizen deliberation. Taiwan’s government adopted a Media Literacy Education White Paper in 2023, and the Digital Affairs Ministry launched “Alignment Assemblies” using AI-assisted deliberation tools to gather public input on AI policy.

The limitation is equally instructive. vTaiwan is volunteer-run, reaches only thousands of participants in a country of 23 million, and, as one policymaker described it, remains “a tiger without teeth,” dependent on individual officials’ willingness to adopt its recommendations rather than operating under any legal mandate.38 The gap between a proof of concept and a functioning system, between a civic experiment that works at the scale of a Taiwanese policy debate and an institution that governs frontier AI at global scale, is the gap the second path must close.

The binding constraint is not technical. The tools for democratic deliberation exist. The institutional models exist. The constraint is political: the institutions that would govern AI must be built over the opposition of the entities that benefit most from the absence of governance. Who funds a regulatory agency that constrains the richest companies in history? Who appoints commissioners with genuine independence from the industry they oversee? Who creates international coordination mechanisms when the competitive dynamic rewards the jurisdiction that defects?

There is a credible alternative to the institutional-construction path, and it deserves acknowledgment. If frontier AI were open-source and widely distributed rather than controlled by a handful of closed labs, the concentration that makes the governance vacuum dangerous would partially dissolve. No government could blacklist a single company and reshape the AI market overnight, because the models would already be distributed across thousands of developers and deployments. Meta’s open release of its Llama models provides the template.39 The Anthropic-Pentagon dynamic, in which one provider’s refusal was rendered irrelevant by a competitor’s compliance within forty-eight hours, depends on a market structure where a handful of providers control frontier capability. Open-source fragments that control.

The counterargument is equally serious. Open-source frontier models in the hands of every state and non-state actor, without any deployment constraints, creates not the absence of the governance problem but its transformation. The problem shifts from concentration (a few actors making consequential decisions without democratic oversight) to diffusion (no actor capable of being held accountable for any decision, because the technology is available to everyone and controlled by no one). The institutional path described in this section assumes something to construct around: identifiable developers, auditable systems, enforceable deployment conditions. Open-source AI may make that construction impossible, replacing the governance vacuum with a governance void. The choice between concentrated and distributed ungoverned AI is not a choice between a problem and a solution. It is a choice between two different kinds of ungoverned power.

The Pro-Human Declaration, signed in March 2026 by a bipartisan coalition ranging from former Trump advisor Steve Bannon to former Obama National Security Advisor Susan Rice, demonstrates that the political constituency for AI governance exists.40 Former Joint Chiefs Chairman Mike Mullen is a signatory. The document calls for mandatory pre-deployment testing, prohibition on superintelligence development without democratic consensus, and mandatory off-switches on powerful systems. The constituency spans the political spectrum. What it lacks is the institutional mechanism to translate broad support into binding legislation. Ninety-five percent of Americans oppose an unregulated AI race. One law has been passed. The gap is the essay’s subject.

X.

The third path is the one the Iran war has already begun to create. No global coordination emerges. Parallel AI governance regimes form along geopolitical lines, each reflecting the values of its origin, none accountable to the populations they affect.

The mechanism is visible in the sequence of events that followed the Anthropic dispute. Within days of Anthropic’s blacklisting, the Pentagon signed replacement contracts with OpenAI, xAI, and Google, each adding AI capabilities to classified military systems.41 The speed of substitution revealed something about the structure: the military’s AI infrastructure is not a single contract but a stack of interdependent systems. Claude was embedded in the Maven Smart System, which was integrated with Palantir’s data platform, which connected to satellite feeds, drone networks, and intelligence databases across CENTCOM’s area of operations.42 Replacing one model required reconfiguring an entire operational stack. The Pentagon did it in weeks, under wartime conditions, because the alternative providers were standing by. The lesson for every ally watching was that the American AI stack is not optional. It is the infrastructure through which American military partnerships operate. Countries that participate in U.S.-led security arrangements will use U.S.-provided AI systems, under U.S.-determined governance terms, because the operational integration leaves no practical alternative.

The United States is making this explicit as policy. The Atlantic Council documented intensifying competition between U.S. and Chinese “AI stacks,” with the White House directing that the American stack be exported to partner countries as a component of security cooperation.43 The AI stack becomes the new NATO: a technological alliance structure that binds participants to a common infrastructure and, by extension, to the governance norms embedded in that infrastructure. Those norms, as the Anthropic case demonstrated, are set by the executive branch through procurement decisions, not by legislatures through law.

China offers the alternative. Its AI governance model is centralized, state-directed, and designed for control.44 DeepSeek and other Chinese frontier models are available to countries unwilling to accept American terms, and they come with a different set of governance assumptions: no pretense of corporate safety limits, no independent judiciary to challenge state deployment decisions, and no civil society organizations filing amicus briefs. For countries in the Global South that depend on external AI capability and face a binary choice between American and Chinese ecosystems, the governance terms are set by the provider, not the recipient. The choice of stack is a choice of governance regime, and it is being made by heads of state and procurement officers, not by citizens or parliaments.

The EU occupies the most uncomfortable position. It built the most ambitious regulatory framework in the world (the AI Act), but it depends on American and Chinese models for frontier capability.45 European citizens use American AI systems (ChatGPT, Claude, Gemini) that are governed primarily by American corporate decisions and American executive orders, not by the EU AI Act. The regulation applies most directly to European companies, which are the least capable of building frontier systems. The Brussels effect, the mechanism by which a single jurisdiction’s stringent rules become the global standard through market power, works for consumer products and data protection. It does not work when the jurisdiction lacks the technology it is trying to regulate.

International institutions have attempted to fill the coordination gap. The OECD AI Principles, adopted in 2019 and updated since, have been endorsed by over 40 countries. The G7 established the Hiroshima AI Process in 2023, producing a code of conduct for frontier AI developers. The UN Secretary-General’s AI Advisory Body issued recommendations in 2024 calling for a global AI governance framework. The Council of Europe adopted a binding AI convention in 2024. Each effort represents genuine diplomatic investment. None has produced enforceable constraints on frontier AI development or deployment. The OECD principles are voluntary. The G7 code of conduct has no compliance mechanism. The UN recommendations are advisory. The Council of Europe convention binds only its signatories, which do not include the United States or China. The pattern is that international AI governance produces documents, not institutions, and documents without enforcement mechanisms do not constrain the behavior of the entities that matter most.

Lancieri, Edelson, and Bechtold mapped four possible equilibria for international AI governance: local regulatory regimes operating independently, harmonization through mutual recognition, the Brussels effect, or the Splinternet, where incompatible regimes fragment the global technology infrastructure.46 The trajectory as of early 2026 points toward the fourth. The harmonization that would require cooperation is blocked by the competitive dynamic that rewards defection. The Brussels effect is weakened by the EU’s technological dependency. Local regimes are preempted by their own governments or undermined by regulatory arbitrage.

In every bloc, the same structural condition holds: citizens have no meaningful input into how AI is governed. In the American bloc, AI governance is conducted through executive orders, procurement decisions, and corporate self-regulation. In the Chinese bloc, it is conducted through state directive. In the EU, it is conducted through regulation that applies primarily to European companies and has limited reach over the American and Chinese systems that European citizens actually use. The technology that could enable the most inclusive governance in human history is instead producing a world where governance is fragmented across competing blocs, and democratic accountability is absent from all of them.

XI.

What determines which path a society follows is whether the structural forces that sustain the governance vacuum can be overcome before algorithmic governance becomes self-reinforcing.

Three conditions would need to be met simultaneously. First, democratic institutions must develop independent technical capacity to evaluate AI systems without depending on the companies that build them. This means funding, hiring, and retaining the kind of technical expertise that currently flows almost exclusively to the private sector, because that is where the compensation and the intellectual challenge are greatest. Second, the competitive selection against safety must be broken by binding rules that apply to all providers within a jurisdiction, and by international coordination that prevents regulatory arbitrage between jurisdictions. This requires exactly the kind of collective action that the first two essays identified as being disabled by economic concentration and epistemic fragmentation. Third, the political constituency for AI governance, which polling shows is overwhelming, must be translated into legislative action over the opposition of the lobbying apparatus that currently ensures legislative paralysis.

Each condition is individually achievable. Together, they require a level of institutional coordination, speed, and political will that no democracy has demonstrated in response to any technology. The closest precedent is the New Deal response to the Great Depression, which built multiple regulatory agencies in a compressed period: the FDIC in 1933, the SEC in 1934, the NLRB in 1935. What made that burst of institution-building possible was not visionary leadership alone. It was the fact that the crisis was universal, visible, and undeniable. Banks had failed. Unemployment had reached 25 percent. The pain was felt by voters who could punish legislators for inaction. The governance vacuum around AI produces no equivalent political pressure, because the crisis is distributed unevenly and experienced incrementally. The people most affected by algorithmic decisions (job applicants filtered by AI, borrowers scored by AI, communities surveilled by AI, civilians in the path of AI-targeted strikes) are not the people with the political power to demand institutional change. And the crisis, unlike a depression, does not announce itself. The hollowing of democratic governance is incremental, diffuse, and, at each individual step, defensible. There is no bread line for algorithmic governance. There is only a gradual transfer of authority that each individual deployment makes slightly more irreversible.

The window for building democratic AI governance is constrained by the same self-reinforcing logic that runs through the entire collection. The longer algorithmic governance operates without democratic oversight, the more entrenched it becomes. The more entrenched it becomes, the harder it is to subject to oversight, because the entities operating it have grown more powerful and the political constituency for constraining them has not. The vacuum feeds itself. The loop tightens.

XII.

The first essay in this collection described an economy learning to grow without workers. The second described an information environment learning to function without truth. This essay describes a governance system learning to operate without democracy.

In each case, the formal structure persists. The economy still has jobs. The information environment still has facts. The governance system still has elections. What is disappearing is the substance within the form: the link between labor and prosperity, the link between evidence and belief, the link between citizens and the decisions that shape their lives.

The Anthropic dispute will be resolved. The lawsuit will produce a ruling. The Pentagon will deploy AI systems with or without the restrictions Anthropic tried to maintain. What will not be resolved is the underlying question the dispute exposed: who governs AI? The answer, as of early 2026, is that no one does. Corporations make deployment decisions. The executive branch makes procurement decisions. The market makes selection decisions. Citizens make none of the decisions that matter.

The technology that could enable the most democratic governance in human history, that could facilitate deliberation at scale, aggregate preferences across millions of citizens, make expertise accessible to every voter, and connect people to the decisions that affect their lives, is instead producing the most consequential decisions in human history with no democratic input at all. Governance functions that once required legislatures, agencies, and courts are being performed by systems that operate faster, at larger scale, with more immediate impact, and with no accountability to anyone except their operators.

The exits from the economic transformation require legislation. The repair of the epistemic commons requires institutional design. Both require democratic governance capable of acting at the speed and scale the challenge demands. This essay has argued that such governance is structurally absent, that its absence is not an accident but an equilibrium sustained by identifiable forces, and that overcoming those forces requires building something that does not yet exist.

The question is whether the hollowing is recognized before it is complete. The form of democracy is remarkably durable. Elections continue in countries that have long since ceased to be democratic in any substantive sense. The form can persist indefinitely. The substance, the link between citizens and the forces that govern their lives, is what is at stake. It is being displaced, not by a dramatic seizure, but by a thousand individually reasonable deployments that collectively transfer the most consequential decisions from institutions accountable to the public to systems accountable to their operators.

The economic transformation has exits. The epistemic crisis threatens to make them invisible. The governance vacuum threatens to make them unreachable. The loop between the three essays tightens with each cycle. And the people who would need to break it are operating inside the same degraded governance system that this essay describes.


Notes

  1. CNN, “Anthropic sues the Trump administration after it was designated a supply chain risk,” March 9, 2026. Also TechPolicy.Press, “A Timeline of the Anthropic-Pentagon Dispute,” March 2026.

  2. Washington Post, “Anthropic sues Pentagon over being labeled a national security risk,” March 9, 2026. Center for American Progress compared the designation to Huawei-style sanctions. Dean Ball called the DPA threat “the quasi-nationalization of a frontier lab.”

  3. Fortune, “The fight between Anthropic and the Pentagon raises crucial questions about control over AI,” March 2026. Also Futurism, “Insiders Afraid the Government Will Nationalize the AI Industry,” March 10, 2026.

  4. CBS News, “How the military is using AI in war,” March 2026. OpenAI posted that its Pentagon deal contained language honoring three red lines (autonomous lethal weapons, mass surveillance, high-stakes automated decisions) even as it agreed to “any lawful purpose” terms.

  5. Axios, “Tech firms back Anthropic in Pentagon lawsuit,” March 16, 2026. CNN reported nearly 150 retired judges filed an amicus brief supporting Anthropic. Scientists from OpenAI and Google DeepMind filed a separate brief in personal capacities.

  6. Dean Ball, quoted in the New York Times, March 7, 2026. Also cited in TechCrunch, “A roadmap for AI, if anyone will listen,” March 7, 2026.

  7. TechPolicy.Press, “Expert Predictions on What’s at Stake in AI Policy in 2026,” January 8, 2026. The TAKE IT DOWN Act, signed in May 2025, criminalizes nonconsensual intimate deepfake images. It was the only AI-specific federal statute enacted in 2025.

  8. FTC enforcement actions on deceptive AI: see FTC guidance on AI and algorithms. EEOC guidance on AI in hiring: EEOC Technical Assistance Document, May 2023. CFPB fair lending and AI: CFPB guidance, 2022. OECD AI Principles adopted 2019, updated 2024, endorsed by 40+ countries. G7 Hiroshima AI Process launched 2023. UN AI Advisory Body recommendations, 2024. Council of Europe AI Convention adopted 2024.

  9. The supply-chain risk designation was issued under 10 U.S.C. § 3252. Center for American Progress, “The Trump Administration Is Trying To Make an Example of the AI Giant Anthropic,” March 2026. CAP noted that applying equivalent authority to Huawei required an act of Congress. CNBC reported the designation could reduce Anthropic’s 2026 revenue by “multiple billions of dollars.” Axios, “Tech firms back Anthropic in Pentagon lawsuit,” March 16, 2026.

  10. Sam Altman, AMA on X, March 2026. Also reported in Fox Business.

  11. As of 2025, approximately 82-88% of companies using AI in hiring use it for resume screening. Resume Builder survey, October 2024 (948 business leaders surveyed). World Economic Forum, “Hiring with AI doesn’t have to be so inhumane,” March 2025, reports 88% of companies use AI for initial candidate screening. Only 16% trust AI to reject candidates without human input at later stages.

  12. Automation bias is well-documented across domains. For a comprehensive review, see Parasuraman and Manzey, “Complacency and Bias in Human Use of Automation,” Human Factors 52(3): 381-410, 2010. For military-specific evidence, see Cummings, “Automation Bias in Intelligent Time Critical Decision Support Systems,” AIAA, 2004. The Chatham House and UCL concerns cited in Section IV of this essay draw on the same research tradition.

  13. The dynamic is illustrated in real time by the proposals the labs themselves produce. OpenAI, “Industrial Policy for the Intelligence Age: Ideas to Keep People First,” April 2026, proposes that “nongovernmental institutions should pilot new approaches, measure what works, and iterate quickly, then governments should reinforce successes by aligning incentives and scaling what works through procurement, regulation, and investment.” The document explicitly names the risk of regulatory capture (“not to entrench incumbents through regulatory capture”) while proposing a process structure that is indistinguishable from regulatory capture in its operational logic: the regulated entity designs the regulatory framework and the regulator ratifies it. This is not an accusation of bad faith. It is a structural observation that the company with the most technical knowledge about the systems naturally proposes the framework it understands, and the framework it understands is the one compatible with its business model.

  14. OpenSecrets, “The AI-fication of K Street,” February 2026. 774 organizations lobbied on AI issues in 2025, filing 3,500+ reports, increases of 423% and 505% from 2020.

  15. Public Citizen, “Generative Influence,” February 2026. 3,570 individual lobbyists (a quarter of all federal lobbyists) reported lobbying on AI issues in 2025. Of the top 100 entities hiring AI lobbyists, 91 represented corporate interests.

  16. OpenSecrets, ibid. Lockheed Martin reported $15.7 million in lobbying expenditures; Microsoft reported $10.1 million. The Leadership Conference on Civil and Human Rights reported $1.8 million.

  17. Pentagon spokeswoman Kingsley Wilson, quoted in Al Jazeera, March 11, 2026.

  18. Global Policy Watch, “President Trump Signs Executive Order to Block State AI Laws,” December 15, 2025. Also DLA Piper analysis and Sidley Austin analysis, December 2025.

  19. The Senate rejected a moratorium on state AI regulations 99-1 in July 2025. Global Policy Watch, December 15, 2025. Republican leaders subsequently abandoned an AI preemption provision in the NDAA.

  20. Lancieri, Edelson, Bechtold, “AI Regulation: Competition, Arbitrage & Regulatory Capture,” Theoretical Inquiries in Law 26(1): 239-262, 2025.

  21. Max Tegmark, cited in TechCrunch, March 7, 2026, pointing to polling that shows 95% of Americans oppose an unregulated race to superintelligence.

  22. Al Jazeera, March 11, 2026. 5,500 targets in 11 days. Also DefenseScoop, “Centcom commander touts use of AI in fight against Iran,” March 11, 2026. JNS reported 8,000+ targets and 15,000+ allied strikes by week three.

  23. Admiral Brad Cooper, CENTCOM, video posted to X, March 11, 2026. Quoted in Al Jazeera, France 24, NBC News, and The National.

  24. Retired Navy Admiral Mark Montgomery, quoted in CBS News, March 2026.

  25. Dr. Peter Bentley, UCL, quoted in The National, March 11, 2026.

  26. Nilza Amaral, Chatham House, quoted in The National, ibid.

  27. Al Jazeera, NBC News, Democracy Now!, and France 24 reported the Minab school bombing. The Pentagon opened an investigation. Democracy Now! reported that the Maven Smart System, incorporating Claude, was being investigated in connection with the strike.

  28. Kenneth Payne, “AI Arms and Influence: Frontier Models Exhibit Sophisticated Reasoning in Simulated Nuclear Crises,” King’s College London, arXiv, February 2026. 21 simulated nuclear crisis scenarios, 329 turns, ~780,000 words of strategic reasoning. Nuclear escalation in 95% of games. No model ever chose accommodation or surrender. 86% of conflicts featured unintentional escalation. Also: King’s College London press release, March 2026.

  29. The EU AI Act was proposed in April 2021, politically agreed upon in December 2023, and has transparency provisions under Article 50 phasing in from August 2026. EU AI Act, Article 50.

  30. As of early 2026, frontier AI development is concentrated among approximately five companies (OpenAI, Google DeepMind, Anthropic, Meta, and xAI) in the United States, with significant but less advanced competitors in China (ByteDance, Baidu, Alibaba, DeepSeek). See Atlantic Council, “Eight ways AI will shape geopolitics in 2026,” January 15, 2026, on U.S.-China AI stack competition.

  31. Gillian K. Hadfield and Steven C. Koh, “AI Agents and the Incomplete Contract,” NBER Working Paper, 2025. When AI agents transact at machine speed inside opaque processes, the assumptions of legibility, contestability, and deliberation that democratic governance requires no longer hold.

  32. The Flash Crash of May 6, 2010: the Dow Jones Industrial Average dropped approximately 1,000 points (about 9%) in minutes before recovering. SEC/CFTC joint report, “Findings Regarding the Market Events of May 6, 2010,” September 2010. On ongoing challenges governing high-frequency trading, see SEC Commissioner Robert Jackson, “Speech on Market Structure,” 2018, and subsequent reviews of Regulation NMS effectiveness.

  33. German AI welfare qualification system: cited in Fan (2025), “The role of artificial intelligence in the digital transformation of government,” PMC/Frontiers in Political Science. 83% of rejected applicants could not understand reasons for rejection; administrative lawsuits increased 300%.

  34. Global Policy Watch, December 15, 2025, ibid. The Senate rejected a moratorium on state AI regulations 99-1 in July 2025. Republican leaders abandoned an AI preemption provision in the NDAA. California Attorney General Rob Bonta stated his office would examine the legality of the executive order. Florida Governor DeSantis stated that “an executive order doesn’t/can’t preempt state legislative action.”

  35. As of July 2025, the FDA had authorized more than 1,250 AI-enabled medical devices. Bipartisan Policy Center, November 2025. The FDA issued draft guidance on AI credibility assessment in January 2025, launched an agency-wide AI assistant (Elsa) in June 2025, and deployed agentic AI capabilities in December 2025.

  36. The FDA was established in 1906 (Pure Food and Drug Act) but did not receive pre-market approval authority until the 1938 Federal Food, Drug, and Cosmetic Act, passed in response to the sulfanilamide disaster (107 deaths). The thalidomide crisis of 1961-1962 led to the Kefauver-Harris Amendment (1962), which required drug manufacturers to prove efficacy and safety before marketing. The institutional capacity that the FDA deploys today was built over decades through successive crises.

  37. On vTaiwan: 28+ cases discussed, of which 80% led to government action. See also the People Powered case study, December 2025, and the Freiheit Foundation analysis, 2025.

  38. “A tiger without teeth”: quoted in Democracy Technologies, July 2024. vTaiwan is volunteer-run since 2023, without direct government support.

  39. Meta released the Llama 2 model weights in July 2023 and the Llama 3 weights in April 2024 under community licenses (open weights, though not open-source under the strict OSI definition), making frontier-class models available for download, modification, and deployment by any developer. Yann LeCun, Meta’s chief AI scientist, has argued that open-source AI is essential to preventing the concentration of AI power in a small number of companies.

  40. The Pro-Human Declaration, March 2026. Signed by former Trump advisor Steve Bannon, former Obama National Security Advisor Susan Rice, former Joint Chiefs Chairman Mike Mullen, and hundreds of experts and public figures. Covered in TechCrunch, March 7, 2026.

  41. CBS News, March 2026, ibid. The Pentagon signed deals with OpenAI, xAI (Elon Musk’s Grok), and Google in rapid succession. Google announced AI agents for non-classified military uses in a blog post.

  42. DefenseScoop, “Centcom commander touts use of AI in fight against Iran,” March 11, 2026. Also Democracy Now!, which reported that the Maven Smart System, incorporating Claude, was being investigated in connection with the Minab school strike. CBS News reported on Palantir’s role as the platform integrator.

  43. Atlantic Council, “Eight ways AI will shape geopolitics in 2026,” January 15, 2026. Documents intensifying competition between U.S. and Chinese AI stacks, with the White House making it policy to export the American stack to partner countries.

  44. China has enacted multiple AI governance regulations since 2021, including rules on algorithmic recommendations (2022), deep synthesis/deepfakes (2023), and generative AI services (2023). These regulations are enforced by the Cyberspace Administration of China and operate within a state-directed framework. See DigiChina Project, Stanford University, for English translations and analysis.

  45. No European company operates a frontier large language model competitive with GPT-5, Claude, or Gemini. European citizens and businesses rely primarily on American models for frontier AI capability. The EU AI Act’s strongest provisions (high-risk classification, transparency requirements, conformity assessments) apply most directly to companies deploying AI within the EU, which are disproportionately European firms using American foundation models. See Lancieri et al., ibid., on regulatory arbitrage and the Brussels effect’s limitations in AI.

  46. Lancieri, Edelson, Bechtold, ibid. Four equilibria: local regulatory regimes, harmonization, Brussels effect, Splinternet.