In February 2026, a defense secretary gave the CEO of an AI company a deadline. By Friday, Anthropic would let the Pentagon use its technology for any lawful purpose, or face cancellation of its contract, designation as a supply chain risk to national security, and possible invocation of the Defense Production Act to seize the technology. Anthropic’s red lines were two: no mass surveillance of American citizens, and no autonomous weapons operating without human decision-making. Three of the four frontier AI labs with Pentagon contracts had already accepted unrestricted access. Anthropic refused.
The dispute dominated headlines for weeks. What it revealed mattered more than what it resolved. Neither side of the confrontation represented democratic governance. A company’s board deciding that its technology would not be used for autonomous weapons is corporate ethics, applied unilaterally, revocable at any time by a change in leadership. An executive branch coercing compliance through procurement threats is executive power exercised without legislative authorization. Congress, the institution designed to make exactly this kind of decision, had not acted. In all of 2025, Congress passed one standalone AI-specific federal statute: the TAKE IT DOWN Act, which criminalized the publication of nonconsensual intimate images, including AI-generated deepfakes. No comprehensive AI governance framework. No regulatory agency with a mandate over frontier systems. No legislation defining the conditions under which AI could be deployed in military operations. The most consequential technology since nuclear weapons was being governed by a contract dispute between a corporation and the executive branch, with the judiciary asked to referee.
The conventional framing of AI and democracy treats AI as a technology to be governed, like nuclear energy or pharmaceuticals. Can democratic institutions regulate it fast enough? The answer, delivered by hundreds of reports and conferences, is that they cannot, and that they must try harder. This framing misses what has already happened. AI is already governing.
AI systems now decide whether loan applications are approved, which job applicants reach a human recruiter, what news a citizen sees, which targets are presented to a military commander for strikes, how employee productivity is monitored, how credit is scored, and which neighborhoods receive police attention.
These are governance functions. They allocate resources, distribute opportunity, impose consequences, and determine the practical conditions of daily life for millions of people. Each decision matters on its own. Taken together, they constitute a system of allocation and control that operates alongside formal democratic governance and, in many domains, has already superseded it in speed, scale, and practical impact. None of it was specifically authorized by democratic process. The legal frameworks under which AI operates were designed for human decision-makers operating at human speeds with human accountability. They authorize a loan officer to approve a loan, not an algorithmic system to approve ten thousand loans per hour using criteria no human can fully articulate. The authorization exists on paper, but it no longer matches the activity it nominally covers.
The gap between what AI systems do and what democratic institutions authorize matters because it determines who is accountable when something goes wrong, and who can change the rules when the rules need changing. When an AI system denies a loan, the applicant has limited recourse: the decision was made by a process no individual at the bank may fully understand, using criteria the bank may consider proprietary, at a speed that makes human review a bottleneck the institution has an economic incentive to eliminate. When an AI system recommends military targets, the human “in the loop” operates under time constraints, information overload, and institutional pressure that make overriding the system’s recommendation the exception rather than the norm. The human remains formally in control. The practical dynamics of the interaction mean that control is increasingly nominal.
The pattern is not a failure of any individual institution. It is the structural consequence of a technology that performs governance functions without being incorporated into the governance structure. Companies deploy AI because it is more efficient than human decision-making, and they are right about the efficiency. But the deployment decisions are made by executives and engineers, not by legislators or voters, and the systems operate under terms of service, not under law.
The regulatory response, such as it is, confirms the structural problem rather than solving it. The EU AI Act is the most comprehensive attempt: a risk-tiered framework with prohibited practices, transparency requirements, and conformity assessments. In the United States, federal agencies have applied existing authorities to AI within their domains. The FTC pursues deceptive practices. The EEOC issues guidance on algorithmic discrimination. The FDA authorizes AI-enabled medical devices. But each agency can govern AI only within its existing mandate, using authorities designed for pre-AI technologies. The FTC can pursue deception but cannot require pre-deployment safety testing. The EEOC can issue guidance but cannot mandate algorithmic audits. No agency has the mandate, technical capacity, or legal authority to govern frontier AI systems as such. The result is a patchwork of domain-specific interventions, each addressing a fragment of the problem within boundaries drawn before the problem existed.
A structural equilibrium has formed, and it is stable precisely because every actor with power benefits from it. The companies building AI systems benefit from continued deployment without legislative constraint. The governments competing for AI advantage benefit from minimizing restrictions on domestic labs. Legislators benefit from avoiding votes on complex technology issues where any position alienates some constituency. Consumers benefit from AI services that are cheaper, faster, and more convenient than the human alternatives. The population that would benefit from democratic governance of AI, the future population that inherits the consequences, has no representation in the decision-making process. The equilibrium sustains itself because the actors who could change it have no incentive to, and the actors who need it changed lack the power to force the change.
The concentration of AI capability in a small number of companies deepens the structural problem. When three or four firms control the most capable AI systems, and when those systems are embedded in government, military, financial, and information infrastructure, the firms acquire a form of structural power that is distinct from market power. Market power means you can set prices. Structural power means you can shape the conditions under which everyone else operates. When a government depends on your technology for military intelligence, financial regulation, and public communication, the normal mechanisms of democratic accountability (legislation, regulation, antitrust) operate under a new constraint: the entity being regulated provides the infrastructure the regulator runs on.
Three paths lead from here. In the first, the current trajectory continues: AI systems perform an expanding share of governance functions, democratic institutions lag further behind, and the gap between what is decided by algorithm and what is authorized by democratic process widens until it becomes permanent. Governance is automated in practice while remaining democratic in name. This does not require conspiracy. It requires only that current incentives continue operating.
In the second path, democratic institutions catch up. The specific reforms this requires include a federal regulatory agency with jurisdiction over frontier AI systems, pre-deployment assessment requirements for systems that perform governance functions, mandatory algorithmic auditing with results accessible to affected populations, international coordination on standards that prevent regulatory arbitrage, and liability regimes that make developers responsible for the harms their systems cause. Each of these reforms faces opposition from the entities that benefit from the current arrangement, which is why none has been implemented at the federal level in the United States and why the EU AI Act, for all its ambition, relies on enforcement mechanisms that have not yet been tested against the concentrated power of the companies it regulates.
The third path acknowledges that AI can improve governance: faster processing of evidence, more consistent application of rules, identification of patterns that human bureaucracies miss. The question is whether AI-assisted governance operates under democratic authority or replaces it. A government that uses AI to process benefit claims faster is extending democratic governance. A government that uses AI to identify and target citizens who criticize its agencies, as DHS administrative subpoenas to social media platforms did in February 2026, is using AI to contract it. The technology is the same. The institutional design is everything.
The previous essay described what happens when a population loses the capacity to agree on facts. This essay describes what happens when the decisions that shape people’s lives are made by systems that no population authorized. Together they form a compound failure: the public cannot evaluate what is being done to it, and the institutions that should be doing the evaluating on its behalf have been outpaced by the technology they are supposed to oversee. The governance deficit is a structural condition, and it will persist until someone builds the institutions that the transition requires and that no powerful actor currently has an incentive to build.