Shade 7 ~80%

The Geopolitical AI Arms Race

Tier 2: Highly Probable

Unmanaged: -4
Governed: +1
Dividend: +5

The arms race is no longer theoretical. It is being fought with real weapons on a real battlefield. In Ukraine, AI-enabled drones have transformed the war into a laboratory for autonomous combat. A 2025 CSIS report found that drones equipped with autonomous terminal guidance raised target engagement success rates from 10-20% to 70-80% by removing the need for constant manual control and stable communications. Ukraine aimed to equip half its drone production with AI guidance in 2025, up from 0.5% in 2024, representing roughly a million AI-assisted drones (Bondar, CSIS/Breaking Defense, 2025). By summer 2025, Ukraine had employed drone swarming technology in over 100 operations, with groups of 8 to 25 drones coordinating strikes autonomously (Ukraine’s Arms Monitor, 2025). Russia matched the escalation: by mid-2025, it was producing approximately 1.5 million first-person-view (FPV) drones annually, and its largest single strike involved 818 drones and missiles. Both sides are racing toward the point where AI selects its own targets. In June 2025, Ukrainian forces launched what they described as the first assault operation carried out entirely by unmanned platforms, with AI-driven drones autonomously scanning for and engaging targets without human piloting.

The superpower competition sits behind this battlefield. Leopold Aschenbrenner’s “Situational Awareness” (June 2024), written from his vantage as a former OpenAI researcher with insider access to capability trajectories, argues that AGI arriving by 2027 would trigger the most consequential national security crisis since the Manhattan Project, and that the US government will inevitably seek to nationalize the leading AI projects once the strategic implications become clear. Whether or not his timeline is correct, the strategic framing has already entered policy circles. According to Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), US private AI investment reached $109.1 billion in 2024, nearly twelve times China’s $9.3 billion. But China plans to deploy $98 billion in AI investment in 2025, including $56 billion from government sources (Stanford HAI/Modern Diplomacy, 2025). China’s state-owned defense giant Norinco unveiled a military vehicle powered by DeepSeek in February 2025, and a Reuters review of hundreds of research papers, patents, and procurement records documented the systematic effort to harness AI for military advantage (Reuters/Calcalist, 2025).

The most dangerous dimension remains nuclear. In November 2024, Biden and Xi jointly affirmed the need to maintain human control over decisions to use nuclear weapons, the first time the US and China had made this statement together. National Security Adviser Jake Sullivan described it as addressing the “long-term strategic risk” of two significant nuclear powers being unable to reach agreement on anything in the AI-nuclear space (White House Press Briefing, November 2024). Only two months earlier, China had declined to sign a multilateral declaration at the REAIM summit endorsing the very principle it would go on to affirm bilaterally, illustrating how rivalry undermines even minimal commitments (South China Morning Post, September 2024).

The US-China framing, however, obscures a larger structural problem: most of the world has no seat in this race and enormous stakes in its outcome. The India AI Impact Summit in February 2026, the first major AI summit hosted in the Global South, made that gap visible. Over 100 countries sent delegations. Prime Minister Modi framed the central question as developmental: AI must serve humanity in all its diversity, and any model that succeeds in India can be deployed globally (PM India, 2026). As NBC News reported, the summit’s pitch was that the future of AI should not be written only in Washington and Beijing (NBC News). India is testing what the Observer Research Foundation calls a “third way” in AI governance: strategic autonomy without isolation, remaining engaged with multiple power centers while amplifying voices from Africa, Southeast Asia, and Latin America (ORF).

India’s position in this landscape is structurally unique. The country accounts for 16% of the world’s AI workforce and leads globally in AI skill penetration at 2.8 times the world average (India Skills Report 2026; Stanford HAI). Indian engineers contributed 19% of GitHub AI projects in 2023, second only to the United States at 22.9%. Over 1,700 Global Capability Centers employ 1.9 million people, and more than 60% of GCCs established in the last two years focus on AI, data, and product development (Carnegie Endowment). Carnegie also flags the structural tension: top-tier AI research talent trained at Indian institutions ends up working in the US and Europe. The CEOs running Microsoft, Alphabet, and IBM were all trained in India’s engineering pipeline. The workforce building the models, the executives directing the companies building them, and the largest potential deployment market all trace back to the same talent ecosystem. This gives India leverage that pure compute investment figures understate. Yet the infrastructure gap remains real. Africa accounts for less than 1% of global data center capacity despite being home to 18% of the world’s population. India itself would need to nearly double its compute capacity to meet domestic demand (CSIS). The risk is that the arms race between Washington and Beijing produces parallel AI ecosystems, one relatively open, the other centralized and surveillance-driven, while the Global South becomes a market for both and a designer of neither.

The arms race poisons coordinated safety governance through a straightforward mechanism: neither side will slow development for fear the other gains an advantage. When the UN General Assembly’s First Committee passed a historic resolution in November 2025 calling for a legally binding agreement on lethal autonomous weapons systems, 156 nations voted in favor; the United States and Russia were among the five that opposed it (Usanas Foundation, 2026). The leading military powers have concluded that strategic advantage outweighs legal or ethical constraints. This is a prisoner’s dilemma at civilizational scale: mutual restraint produces the best collective outcome, but each actor’s dominant strategy is to defect, a structure the minimal sketch below makes explicit.
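
To make that logic concrete, here is a minimal payoff sketch of the dilemma. The payoff values are illustrative assumptions chosen only to reproduce the standard prisoner’s-dilemma ordering; they are not drawn from any source cited in this shade.

```python
# Hedged, illustrative sketch of the prisoner's dilemma described above.
# Payoff values are assumptions chosen only to satisfy the standard ordering
# temptation > mutual restraint > mutual escalation > being exploited;
# they do not come from any cited source.

PAYOFFS = {
    # (own choice, rival's choice): (own payoff, rival's payoff)
    ("restrain", "restrain"): (3, 3),   # mutual restraint: best collective outcome
    ("restrain", "escalate"): (0, 5),   # the restrained side is exploited
    ("escalate", "restrain"): (5, 0),
    ("escalate", "escalate"): (1, 1),   # mutual escalation: worst collective outcome
}

def best_reply(rival_choice: str) -> str:
    """Return the choice that maximizes one's own payoff against a fixed rival choice."""
    return max(("restrain", "escalate"),
               key=lambda own: PAYOFFS[(own, rival_choice)][0])

# Escalation is the best reply no matter what the rival does (a dominant strategy)...
assert best_reply("restrain") == "escalate"
assert best_reply("escalate") == "escalate"

# ...yet both sides are better off under mutual restraint than mutual escalation.
assert PAYOFFS[("restrain", "restrain")][0] > PAYOFFS[("escalate", "escalate")][0]
print("Dominant strategy: escalate. Collectively preferred outcome: mutual restraint.")
```

However the stakes are scaled, as long as exploiting a restrained rival pays better than mutual restraint and mutual escalation pays better than being exploited, escalation dominates for both players, which is the structure the First Committee vote reflects.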

The domestic version of this dynamic erupted in February 2026, when the Pentagon confronted Anthropic over the company’s refusal to remove safety restrictions from Claude, its frontier AI model. The backstory: Anthropic, through a partnership with Palantir, had become the first commercial AI company operating inside classified Pentagon networks at Impact Level 6. Claude was reportedly used during “Operation Resolve,” the January 3, 2026 military operation that captured Venezuelan President Nicolás Maduro in Caracas and killed 83 people (NBC News, 2026; Al Jazeera, 2026). When an Anthropic employee reportedly contacted Palantir to ask whether Claude had been used in the raid, the Pentagon interpreted the inquiry as disapproval. Anthropic denied raising concerns beyond routine technical discussions. Relations deteriorated.

Defense Secretary Pete Hegseth’s January AI strategy memorandum had already directed all Department of Defense AI contracts to incorporate “any lawful use” language within 180 days, a direct collision with Anthropic’s usage policy. Anthropic drew two red lines it refused to cross: AI-controlled weapons that make final targeting decisions without human involvement, and mass domestic surveillance of American citizens. Hegseth gave CEO Dario Amodei a February 27 deadline: agree to the Pentagon’s terms or face termination of Anthropic’s $200 million contract, invocation of the Defense Production Act to compel compliance, and designation as a “supply chain risk,” a label normally reserved for companies from adversary nations like China (CNN, 2026; Axios, 2026). The Pentagon contacted Boeing and Lockheed Martin to assess their dependence on Claude. As of this writing, Anthropic has not budged. The Pentagon’s argument is that legality is the end user’s responsibility, and that a private contractor cannot dictate how the military employs a tool within the bounds of law. Anthropic’s argument is that no legal framework for AI-controlled weapons or AI-enabled mass surveillance currently exists, and that Claude is not reliable enough for autonomous lethal targeting, a technical claim about hallucination risk rather than a political one (TechCrunch, 2026; CBS News, 2026). The Lawfare analysis of the DPA’s applicability noted that whether Claude-without-guardrails constitutes the same product or a different one is “genuinely contested” and that “neither side’s argument is a slam dunk” (Lawfare, 2026).

The competitive dimension is the mechanism that makes this a true prisoner’s dilemma rather than a bilateral negotiation. Elon Musk’s xAI has already agreed to “all lawful use” terms for Grok in classified systems. Google and OpenAI are negotiating similar access. The Pentagon is using this competition to pressure all labs: agree to unrestricted military use, or watch your competitor take the contract. Dean Ball, a senior fellow at the Foundation for American Innovation and a former Trump White House AI policy adviser, told TechCrunch: “It would basically be the government saying, ‘If you disagree with us politically, we’re going to try to put you out of business’” (TechCrunch, 2026). He also noted that the Pentagon currently has “no backups,” since Claude is the most capable model for classified government applications.

On the same day as Hegseth’s ultimatum, Anthropic published version 3.0 of its Responsible Scaling Policy, removing its earlier commitment to pause training if model capabilities outpaced safety controls and replacing it with a flexible framework of “public goals” rather than hard commitments (Anthropic, RSP v3.0, 2026). Anthropic said the revision was unrelated to the Pentagon dispute. The RSP itself cited three forces making the original structure untenable: a “zone of ambiguity” around capability thresholds, an anti-regulatory political climate, and the recognition that safety measures at the highest levels require industry-wide coordination that no single company can achieve alone. CNN framed the timing differently: the company founded by OpenAI exiles worried about AI dangers was loosening its core safety principle during a direct confrontation with the world’s most powerful military (CNN, 2026). Whether the RSP change was coincidence or adaptation to an environment where voluntary commitments carry increasing costs, the structural point is the same: the arms race dynamic does not operate only between nations. It operates between a government and its own companies, between competing labs within the same country, and between a company’s stated principles and its commercial viability. The EFF urged Anthropic to hold its ground, arguing that no technology company should be bullied into enabling surveillance (EFF, 2026). International law scholars writing at Opinio Juris noted that Anthropic’s stance on autonomous weapons aligns with the principle of human control that has guided a decade of UN discussions on lethal autonomous weapons systems, and that the upcoming March 2026 CCW session will debate draft language on precisely these limits (Opinio Juris, 2026).

The dispute played out exactly as competitive dynamics predicted. On February 27, 2026, Anthropic’s deadline expired. One day later, the US-Israeli offensive against Iran began. In the first 24 hours, the US military struck approximately 1,000 targets. Over 11 days, the total reached 5,500. On March 11, Admiral Brad Cooper, head of US Central Command, confirmed that “warfighters are leveraging a variety of advanced AI tools” to process data and identify targets, turning “processes that used to take hours and sometimes even days into seconds” (Al Jazeera, March 2026). NBC News confirmed that the military was using Palantir’s systems, which rely in part on Anthropic’s Claude, for target identification (NBC News, March 2026). The Washington Post reported that the Pentagon leveraged “the most advanced artificial intelligence it’s ever used in warfare” (Washington Post, March 2026). OpenAI signed a replacement contract with the Pentagon; the company says the contract stipulates that its technology will not be used for surveillance or fully autonomous weapons (Nature, March 2026). As of March 5, Anthropic CEO Dario Amodei was reportedly back in talks with the Defense Department.

The operational record raises the questions the collection anticipated. Cooper stressed that “humans will always make final decisions on what to shoot.” UCL computer scientist Peter Bentley warned that current AI systems are “immature, prone to errors and hallucinations, risking lethal misjudgments in high-tempo operations.” Chatham House’s Nilza Amaral cautioned that “over-reliance on automation, with reduced time for reflection, undermines safeguards against civilian harm” (The National, March 2026). On March 8, a Tomahawk strike hit the Shajareh Tayyebeh primary school in Minab, killing more than 165 people, mostly children. The Iranian Red Crescent reported nearly 20,000 civilian buildings and 77 healthcare facilities damaged. The tempo of AI-enabled targeting, 5,500 strikes in 11 days, works out to roughly 500 strikes a day, about one every three minutes around the clock, and compresses the time available for human verification to the point where the formal distinction between “AI recommends, human decides” and “AI decides” may collapse in practice. The ACCEPT trial finding documented in Shade #6 (physicians’ unaided diagnostic accuracy declining 20% after six months of AI-assisted practice) acquires a different weight when the diagnostic task is distinguishing a military target from a school. Meanwhile, the CCW session in Geneva convened the same week to debate draft language on lethal autonomous weapons systems: international law scholars deliberating the limits of military AI while that AI was being used in combat operations hours away.

The hybrid digital-physical battlefield extends beyond targeting. NPR reported that Israel hacked traffic cameras in Tehran to locate Iranian Supreme Leader Khamenei, synthesized billions of data points into target banks, and hacked a popular Muslim prayer app to send defection messages to Iranian soldiers (NPR, March 2026). Citizen Lab found evidence of an Israeli disinformation campaign using AI-generated images to foment revolt. The Financial Times identified AI-generated fake satellite imagery spreading online. MIT Technology Review documented how AI-enabled dashboards, many “vibe-coded” in days by venture capital affiliates, were positioned as alternatives to journalism, combining real-time strike data with prediction markets where users bet on outcomes like the identity of Iran’s next supreme leader (MIT Technology Review, March 2026). The epistemic environment around the war is already contaminated. The tools described in Shade #5 (fabrication) and Shade #11 (foreign subversion) are being used offensively in a hot war while the governance mechanisms described in Shade #8 attempt to catch up in real time.

Arms control has worked before, and that history matters. The Nuclear Non-Proliferation Treaty, the Chemical Weapons Convention, and the various strategic arms limitation agreements all constrained powerful technologies despite intense geopolitical competition. As former national security adviser Jake Sullivan argued in January 2026, there are meaningful parallels to the decades-long process of nuclear arms control, which produced export controls, verification protocols, and guardrails even at the height of the Cold War (Sullivan, China-US Focus, 2026). Sullivan himself identified why the analogy breaks down. Verification is harder: you can count missiles and warheads, which have detectable signatures, but counting algorithms or discerning all the capabilities of a given model is a different problem entirely. The dual-use challenge is more severe: there is a relatively clear line between peaceful nuclear power and nuclear weapons, while the same AI model can power a medical diagnosis system and an autonomous weapons targeting system. And the uncertainty about capability trajectories has no nuclear equivalent. The evolution and impact of AI capabilities is far less predictable than the physics of nuclear weapons. Arms control frameworks took decades to develop. AI capabilities are advancing on timescales of months. The governed outcome is modest (+1) because the structural incentives favor escalation and the verification problem may be unsolvable with current tools.

What governance can realistically achieve is a floor: bilateral hotlines to prevent AI-triggered escalation (modeled on Cold War nuclear communication channels), multilateral compute tracking regimes that make large training runs visible, export controls on the most dangerous capabilities, and binding commitments to maintain human control over nuclear launch decisions. The India AI Summit’s “Delhi Declaration,” with at least 70 signatories, represents an early attempt at consensus, even if its language remains aspirational (TIME). None of this prevents the arms race. It manages the risk that the arms race produces an accidental catastrophe. The +1 reflects the gap between what governance can do and what the problem requires.
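
Of those floor mechanisms, compute tracking is the one with a concrete, checkable quantity behind it. Below is a minimal sketch of how a reporting threshold could be applied, assuming the common ~6 × parameters × tokens estimate of dense-transformer training compute and the 10^26-operation reporting trigger used in the 2023 US executive order on AI; the model and dataset sizes are illustrative assumptions, not figures from this shade’s sources.

```python
# Hedged sketch: checking whether a training run would cross a compute
# reporting threshold. Uses the common ~6 * parameters * tokens approximation
# for dense-transformer training FLOPs; the 1e26 threshold mirrors the
# reporting trigger in the 2023 US executive order on AI. The example
# model and token counts below are illustrative assumptions only.

REPORTING_THRESHOLD_FLOP = 1e26

def training_flops(parameters: float, tokens: float) -> float:
    """Rough training compute for a dense transformer: ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * tokens

runs = {
    "mid-size run":   training_flops(parameters=70e9,   tokens=2e12),   # ~8.4e23 FLOP
    "frontier-scale": training_flops(parameters=1.8e12, tokens=15e12),  # ~1.6e26 FLOP
}

for name, flop in runs.items():
    status = "reportable" if flop >= REPORTING_THRESHOLD_FLOP else "below threshold"
    print(f"{name}: {flop:.1e} FLOP -> {status}")
```

The point of the sketch is narrow: unlike model capabilities, aggregate training compute is an estimable, auditable number, which is why compute tracking is the piece of the governance floor most analogous to counting warheads.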

Key tension: Mutual restraint produces the best collective outcome, but each actor’s dominant strategy is to escalate, and the verification mechanisms that made nuclear arms control possible do not obviously translate to AI.