Shade 8 ~80%

Governance Obsolescence

Tier 2: Highly Probable

Unmanaged −3
Governed +2
Dividend +5

During the 118th Congress (2023–2024), lawmakers introduced over 150 bills concerning artificial intelligence. None passed into law (Brennan Center, AI Legislation Tracker). In 2025, more than 1,000 AI-related bills were introduced across state legislatures; Congress still failed to pass comprehensive federal legislation. The single federal AI law enacted through mid-2025 was the TAKE IT DOWN Act, which addresses nonconsensual intimate images. States filled the vacuum: Colorado passed the first comprehensive state AI framework in 2024, and California followed with transparency and frontier model safety laws effective January 2026. The federal response was to attempt suppression. In December 2025, President Trump signed an executive order seeking to preempt state AI laws deemed inconsistent with a “minimally burdensome” national framework. A provision in the One Big Beautiful Bill Act proposed a ten-year moratorium on state AI regulation; the Senate voted 99-1 to strip it (Brennan Center; TechPolicy.Press).

The European Union passed the AI Act in 2024 as the world’s first comprehensive AI regulation. Before its high-risk provisions could take effect in August 2026, the European Commission proposed delaying them to late 2027 through a “Digital Omnibus” package, acknowledging that supporting standards and guidelines were not ready. The Commission itself missed its February 2026 deadline for guidance on high-risk system compliance. Multiple member states had not established enforcement structures (IAPP; Euronews). A regulation designed to govern AI is being delayed before enforcement begins because the regulatory infrastructure cannot keep pace with the regulatory ambition, which itself cannot keep pace with the technology.

None of this means governance is failing everywhere. Governance activity is surging. Stanford HAI’s 2025 AI Index reports that legislative AI mentions across 75 countries grew more than ninefold since 2016. U.S. federal agencies introduced 59 AI-related regulations in 2024, more than double the prior year. The OECD’s AI Policy Observatory tracks over 1,000 AI policies across 70+ jurisdictions (Stanford HAI, 2025). The 99-1 Senate vote against the moratorium is itself evidence of self-correction: the deliberative system caught a terrible proposal and killed it. Existing laws already apply to AI without new legislation. The FTC, EEOC, CFPB, and DOJ issued a joint statement in 2024 affirming that consumer protection, civil rights, and fair lending statutes cover AI uses. The FTC has brought enforcement actions. Product liability law covers AI-embedded products. “No comprehensive AI law” is a real gap, but it is not the same as “no legal constraint.”

And democratic institutions already have models for governing at speed. Financial regulators adjust monetary policy in real time. The FDA grants emergency use authorizations that compress years of review into weeks. Securities regulation operates at the tempo of the markets it oversees. None of these required abandoning deliberation. They required delegating authority to technically capable agencies with clear mandates and independence to act. China demonstrates the speed that is possible when political will exists: Beijing implemented binding interim measures on generative AI services in August 2023, months after ChatGPT’s release, building on algorithmic recommendation rules from 2022 and deep synthesis provisions from January 2023 (White & Case, AI Global Tracker: China; IAPP, Global AI Governance: China). By September 2025, mandatory AI content labeling rules were in force. The question for democracies is whether they can match that speed while preserving the transparency and rights protections that make democratic governance worth having.

So the real diagnosis may be political failure rather than structural incompatibility. The tools exist. The institutional models exist. What does not exist is the political will to build an AI equivalent of the Fed or the FDA, least of all in the country that leads in AI, where the administration views regulation as a competitive handicap and the industry that would be regulated spends heavily to ensure it is not. The governed outcome (+2) requires delegated regulatory authority that can adjust requirements as conditions change, mandatory pre-deployment review for frontier systems above defined capability thresholds, and market-access conditions that make compliance the price of reaching consumers. AI could itself accelerate governance: regulatory technology, automated compliance monitoring, AI-assisted policy analysis. The irony is that the tool creating the governance crisis may also be the tool that makes governance at the necessary speed feasible. Whether any of this is politically achievable given the current deregulatory posture of the world’s leading AI power is the question the +2 depends on. Meanwhile, real-world harms accumulate in the gap. Leaked Meta documents showed executives authorized AI to have “sensual” conversations with children. In Baltimore, an AI security system mistook a student’s bag of chips for a gun and summoned police (TechPolicy.Press).

Key tension: Governance activity is surging. Governance capability remains outmatched. The gap between them may be a political choice rather than a structural inevitability, but the political conditions required to close it do not currently exist in the country that matters most.