Shade 15 ~55%

Digital Authoritarianism as Global Norm

Tier 3: Plausible

Unmanaged: -4
Governed: +1
Dividend: +5

China’s AI-powered governance is no longer an experiment. It is a mature export product with a growing customer base. A December 2025 report from the Australian Strategic Policy Institute documented how the CCP uses large language models and AI systems to automate censorship, enhance surveillance, and pre-emptively suppress dissent, describing AI as the backbone of a pervasive and predictive form of authoritarian control (ASPI, 2025). The infrastructure is total: a criminal suspect in China may be identified through the world’s largest AI-powered surveillance network, prosecuted in courts that use AI to draft indictments and recommend sentences, and incarcerated in a facility where AI systems monitor emotions, facial expressions, and movements. Shanghai district documents detail plans for AI-powered cameras and drones programmed to “automatically discover and intelligently enforce the law,” including flagging crowd gatherings for police response. China’s Supreme Court has directed all courts to develop AI systems for use in legal proceedings, and a Shanghai system already recommends whether judges should arrest or grant suspended sentences to defendants.

The export mechanism operates through what Beijing calls “Safe City” packages. The CSIS Reconnecting Asia Project identified 73 Safe City agreements across 52 countries, concentrated among non-liberal, middle-income governments in Asia and Africa. The National Endowment for Democracy’s 2025 analysis found that China has exported surveillance technology platforms for policing and public safety to more than 80 countries, deploying strategies including free trials, subsidized pricing, and state financing to ensure even the poorest authoritarian regimes can afford the infrastructure. Cameroon illustrates the pattern: beginning with 70 Huawei-supplied CCTV cameras in 2014, the country’s surveillance system has expanded through several phases, with government borrowing reaching $270 million by December 2025, financed partly through China CITIC Bank. The dynamic goes beyond hardware. As the Lowy Institute documented, these exports are instruments of norm diffusion: they lock governments into Chinese hardware, software, and servicing ecosystems, and they reshape institutional practices to normalize constant monitoring (Lowy Institute/TechPolicy.Press, 2025).

City Brain 3.0, launched by Beijing in 2025 and built on the DeepSeek-R1 model, integrates AI, big data, cloud computing, and IoT into a unified urban governance platform. The ambition extends well beyond policing. China is also ensuring its AI systems understand minority languages, including Uyghur, Tibetan, and Mongolian, so that surveillance and censorship tools operate effectively across all populations within its borders (ORF, 2025). ASPI found that Chinese LLMs display stronger censorship behaviors for politically sensitive imagery than their American counterparts, with censorship mechanisms embedded across multiple layers of the model ecosystem.

The most important finding in the data is that the threat to democracy does not require importing Chinese technology. Freedom House’s Freedom on the Net 2025 report documented the fifteenth consecutive year of global internet freedom decline. Of the 72 countries assessed, conditions deteriorated in 28 while only 17 improved. Citizens in at least 57 of the 72 countries covered were arrested or imprisoned for online expression during the coverage period, a record high. Half of the 18 countries rated “Free” suffered score declines, including the United States, which dropped 3 points. Over the fifteen-year tracking period, Russia’s score fell from 48 to 17, and the U.S. score declined from 87 to 73 (Statbase/Freedom House, 2025). Democratic backsliding requires only domestic incentives for control, and AI provides those incentives to every government regardless of ideology.

In the United States, the drift is driven by commercial technology adoption. No top-down decree is required. Surveillance technology companies including Axon, Motorola, Flock Safety, and Genetec are deploying AI-powered predictive policing and “real-time crime center” tools across municipalities. In Massachusetts alone, more than 80 police departments have adopted AI-powered automatic license plate reader systems with no state law regulating their use (ACLU of Massachusetts, 2026). Video analytics software can automatically identify and track individuals across multiple camera feeds, transforming scattered surveillance cameras into integrated tracking networks. Chicago’s Strategic Decision Support Centers integrate gunshot detection, surveillance feeds, and predictive models. The Brennan Center for Justice found that police data fusion systems have expanded to include social media monitoring, sentiment analysis, risk scoring, and relationship mapping, functions only tangentially related to criminal investigation (Brennan Center, 2025). The January 2025 revocation of the Biden-era AI Executive Order, followed by a new order titled “Removing Barriers to American Leadership in Artificial Intelligence,” shifted federal policy toward capability expansion with fewer governance constraints. In September 2025, a National Security Presidential Memorandum instructed the Department of Justice to investigate civil society organizations and activists, with companies like Palantir providing the surveillance infrastructure to identify, track, and act against these targets.

The frontier AI labs’ relationship with the Pentagon crystallizes the dynamic. In July 2025, the Department of Defense awarded $200 million contracts to four AI companies: Anthropic, OpenAI, Google DeepMind, and xAI. The Pentagon’s requirement was simple: models must be available for “all lawful purposes” with no company-imposed restrictions. Google had already reversed its 2018 internal prohibition on AI for weapons and surveillance, a ban originally forced by employee protests over Project Maven. Amnesty International described the reversal as enabling technologies including mass surveillance, semi-autonomous drone strikes, and AI-generated target lists (Sovereign Magazine, 2026). OpenAI modified its mission statement, removing “safety” as a core value, and agreed to deploy ChatGPT through the Pentagon’s GenAI.mil platform, which already serves 1.1 million unique users across all three military service departments. Elon Musk’s xAI signed a deal for Grok to enter classified military systems without conditions (Axios/Semafor, 2026). Three of four frontier labs accepted. Anthropic refused. The company maintained two positions: no mass surveillance of American citizens, and no fully autonomous weapons without human oversight. Defense Secretary Pete Hegseth issued a February 27, 2026 ultimatum: comply or face cancellation of the contract, designation as a “supply chain risk” (which would force every defense contractor to choose between Anthropic and the Pentagon), or invocation of the 1950 Defense Production Act to appropriate the technology outright (AP/PBS, 2026; Center for American Progress, 2026). Hegseth’s January 2026 AI strategy document requires all military AI contracts to eliminate company-specific guardrails within 180 days. The episode reveals the mechanism by which democratic governments acquire surveillance capabilities. The technology already exists. The open question is whether any institution, including the companies that build it, can maintain limits once the state decides it wants unrestricted access. Three of four said no institution can. One is still deciding.

The surveillance capability is already being turned on domestic speech. In February 2026, the New York Times reported that the Department of Homeland Security had issued hundreds of administrative subpoenas to Google, Meta, Reddit, and Discord, demanding names, email addresses, phone numbers, and other identifying details for social media accounts that criticized ICE or reported the locations of immigration agents (New York Times/TechCrunch, 2026). Administrative subpoenas bypass the judiciary entirely: DHS issues them on its own authority, with no judge involved. The tool was previously reserved for time-sensitive investigations like child abductions. Google, Meta, and Reddit complied with at least some of the requests (Gizmodo, 2026; Military.com, 2026). The ACLU challenged several subpoenas in court; in each case, DHS withdrew before a judge could rule on legality, then issued new subpoenas to different targets. The pattern is by design, as ACLU attorney Steve Loney told the Times: the pressure falls on the individual to hire a lawyer and get to federal court within ten days, or their platform identifies them to the government. Amazon’s Ring division announced it would begin sharing doorbell camera footage with Flock, an AI-powered network feeding information to local and federal law enforcement. The Electronic Frontier Foundation, in an open letter to ten major platforms, urged companies to require court orders before complying, noting that DHS has already demonstrated its own subpoenas cannot survive legal scrutiny (IBTimes, 2026).

The convergence is worth stating plainly. In China, citizens use VPNs to access information their government does not want them to see. In the United States, approximately 75 million Americans already use VPNs, with 42% of users citing privacy as their primary motivation (NordVPN, 2025). Wisconsin and Michigan legislators have proposed banning VPNs to prevent circumvention of age verification laws (EFF, 2025). In France, a court ordered VPN providers to block 203 domain names. Tom’s Guide observed that Western countries have begun exhibiting VPN hostility previously associated with regimes like Russia and China (Tom’s Guide, 2025). The tools are different. The trajectory is recognizable. When a democratic government subpoenas social media companies to identify citizens who criticize a law enforcement agency, and the companies comply, the functional difference between that system and a state that monitors online dissent is one of degree.

The research on democratic resilience complicates the picture in important ways. MIT economist Martin Beraja’s data, analyzed in the Bulletin of the Atomic Scientists, found that mature democracies did not experience democratic erosion when importing surveillance AI, even from China. Weak democracies, however, exhibited backsliding regardless of whether the technology originated from China or the United States (Bulletin of the Atomic Scientists, 2024). This suggests that institutional strength determines the outcome. Technology origin matters less than the resilience of the institutions receiving it. India, the world’s largest democracy, deploys extensive biometric and digital identity infrastructure without becoming authoritarian. The Indian state of Maharashtra’s 2025 expansion of its MARVEL AI policing system illustrates the pattern: democracies adopt these tools through the same efficiency logic that drives authoritarian uptake, and the outcome depends on whether institutional constraints hold (CIGI, 2025). The EU’s AI Act, which took effect in February 2025, prohibits AI systems that predict the probability of someone committing a crime, though with broad exceptions for terrorism, murder, and other major offenses.

The distinction between authoritarian adoption and democratic drift is real, and it matters. Democracies retain institutional mechanisms (courts, legislatures, press freedom, civic organizations) that authoritarian regimes systematically dismantle. The question is whether those mechanisms can keep pace with the technology. Freedom House’s fifteen-year trend says they are losing ground. The ACLU found that constitutional protections against warrantless surveillance are being eroded through commercial technology adoption that is technically legal because existing law never anticipated it. The governed outcome (+1) remains low because even well-designed governance can only slow the spread of surveillance capabilities. The 5-point dividend reflects the difference between a world where democratic institutions adapt quickly enough to constrain algorithmic governance and one where the convenience of AI-powered control gradually normalizes practices that would have been unthinkable a generation ago.

Key tension: The efficiency of algorithmic governance is genuine. Democratic alternatives must match that efficiency while preserving the structural constraints on power that distinguish open societies from closed ones. Tyranny has never been the likeliest mechanism. Convenience is.