Despite all predictions, AI capabilities plateau short of superintelligence. The evidence for this possibility is not speculative. It is emerging. Toby Ord’s “The Scaling Paradox” (2025) demonstrated that the scaling laws celebrated as proof of inexorable progress actually show extreme diminishing returns: on a linear (rather than log-log) scale, the relationship between compute and performance reveals that lowering test loss by a factor of two requires increasing compute by a factor of one million. Accuracy is extraordinarily insensitive to resource input (the first sketch below works through the arithmetic).

A 2025 analysis from HEC Paris noted that inside frontier labs, a consensus is growing that simply adding more data and compute will not produce the breakthroughs once promised, and that the disappointment surrounding GPT-5 made this ceiling visible. Ilya Sutskever, speaking at NeurIPS 2024, declared that “pretraining as we know it will end” and that “the 2010s were the age of scaling, now we’re back in the age of wonder and discovery.” The field may be approaching a transition from scaling existing architectures to searching for fundamentally new ones.

A January 2026 paper from MIT FutureTech, “Meek Models Shall Inherit the Earth” (Gundlach, Lynch, and Thompson), formalized this quantitatively: diminishing returns to compute scaling mean that low-budget models will converge toward frontier performance. In their simulation, a model increasing its compute budget 3.6x annually initially outperforms a $1,000-budget model, but the gap peaks after roughly five years and then steadily narrows (the second sketch below reproduces this dynamic with toy parameters). The policy implication is significant: the “governance window” during which large organizations hold a decisive capability advantage is closing, and future oversight should focus on data, algorithms, and safeguards rather than on compute controls alone.

Test-time compute scaling (reasoning chains, self-prompting, extended inference) represents a new axis that partially circumvents the pretraining plateau, but early evidence suggests it faces its own diminishing returns: a 2025 study found that adding more computational steps at inference no longer delivers proportionate improvements, and that the cost of extended reasoning scales faster than the accuracy gains it produces.
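Ord’s factor-of-a-million figure falls directly out of the published power-law fits. A minimal sketch, assuming a loss-versus-compute exponent of roughly 0.05 (the value Kaplan et al. reported; Ord’s paper may use a slightly different fit):

```python
# Under a power law L ∝ C^(-alpha), halving loss requires multiplying
# compute by 2^(1/alpha). With alpha ≈ 0.05 (the commonly cited fit),
# that multiplier is 2^20, i.e. about one million.
alpha = 0.05
multiplier = 2 ** (1 / alpha)
print(f"halving test loss costs {multiplier:,.0f}x more compute")  # ~1,048,576x
```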
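The meek-models convergence dynamic can be reproduced with a toy model. Only the 3.6x annual budget growth and the $1,000 fixed budget come from the paper; everything else below (the saturating capability curve, the algorithmic-progress rate, the starting frontier budget) is an invented illustration, chosen so the gap peaks near year five as in the paper’s simulation:

```python
FRONTIER_GROWTH = 3.6      # frontier compute budget grows 3.6x per year (from the paper)
ALG_PROGRESS = 2.5         # assumed yearly algorithmic efficiency gain (hypothetical)
MEEK_BUDGET = 1_000        # fixed low-budget model (from the paper)
FRONTIER_BUDGET = 100_000  # assumed starting frontier budget (hypothetical)
C_HALF, BETA = 1e7, 0.5    # capability-curve parameters (hypothetical)

def capability(effective_compute: float) -> float:
    # Saturating curve: capability approaches 1 as compute grows, so any
    # fixed budget riding algorithmic progress eventually closes the gap.
    return 1.0 / (1.0 + (C_HALF / effective_compute) ** BETA)

gaps = []
for t in range(16):  # years
    meek = MEEK_BUDGET * ALG_PROGRESS ** t
    frontier = FRONTIER_BUDGET * (FRONTIER_GROWTH * ALG_PROGRESS) ** t
    gaps.append(capability(frontier) - capability(meek))

peak = max(range(len(gaps)), key=gaps.__getitem__)
print(f"gap peaks in year {peak}, then narrows every year after")
```

With any saturating capability curve the same shape appears: the frontier pulls ahead while it is climbing the steep part of the curve, then saturates while the fixed-budget model keeps climbing.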
Yann LeCun has been the most prominent voice for this position. He argues that current LLM architectures are a dead end for general intelligence: brilliant at pattern matching, capable of impressive fluency, but incapable of genuine world modeling, causal reasoning, or planning in novel situations. His departure from Meta to found AMI Labs (discussed in Shade #21) represents a bet that AGI requires entirely new approaches.

The economic evidence supports a stasis-adjacent reading. Daron Acemoglu’s “The Simple Macroeconomics of AI” (NBER, 2024; published in Economic Policy, 2025), written before he received the Nobel Prize in economics, projected that generative AI will produce no more than a 0.53-0.66 percent increase in total factor productivity over the next decade. Goldman Sachs, by contrast, estimated 9 percent; McKinsey estimated $17-26 trillion in added economic value. Acemoglu’s model is grounded in task-level analysis of which jobs are actually exposed to AI and which exposed tasks can be profitably automated. His central finding: only about 5 percent of tasks economy-wide are candidates for profitable AI automation in the near term (the sketch below reconstructs the arithmetic). The gap between Acemoglu and the bullish forecasts is not a disagreement over AI’s ultimate potential. It is about timing, adoption friction, and the difference between “easy-to-learn tasks” (where AI excels and early evidence is strong) and “hard-to-learn tasks” (context-dependent, lacking objective outcome measures, where AI’s performance may plateau).
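Acemoglu’s headline number is a Hulten-style back-of-the-envelope: the aggregate TFP gain is roughly the share of tasks affected times the average cost savings on those tasks. A sketch with input values as I reconstruct them from the paper; treat the specific figures as approximations rather than quotations:

```python
# Hulten-style back-of-the-envelope behind Acemoglu's 0.53-0.66% figure.
# Input values are my reconstruction of the paper's inputs, not quotations.
exposed = 0.20        # share of tasks exposed to generative AI (per Eloundou et al.)
profitable = 0.23     # share of exposed tasks profitably automatable near term
cost_savings = 0.144  # average savings as a fraction of total task costs

tfp_gain = exposed * profitable * cost_savings
print(f"~{tfp_gain:.2%} TFP gain over the decade")  # ≈ 0.66%, the upper bound
```

Multiplying the first two inputs (0.20 x 0.23 ≈ 4.6 percent) yields the roughly 5 percent of profitably automatable tasks cited above.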
The stasis scenario does not mean AI stops improving. Capabilities continue to advance in specific domains: coding, scientific reasoning, multimodal processing, agentic task completion. February 2026 alone saw seven major model releases with benchmark records broken across multiple categories. What stalls is the pathway to superintelligence: the recursive self-improvement, autonomous goal-pursuit, and general reasoning that the singularity scenarios require. In this world, AI is the next electricity or the next internet: enormously consequential, deployed unevenly, generating both productivity gains and disruption, but not a rupture in the human condition. This is, by many measures, the most plausible single outcome.
It is also the most dangerous framing, because it invites the conclusion that the institutional response can wait. Every Tier 1 harm documented in this collection (labor displacement, power concentration, surveillance infrastructure, epistemic collapse, democratic erosion) is already happening with sub-superintelligent AI. A world where AI capabilities plateau at roughly current levels while deployment accelerates is a world that still displaces hundreds of millions of workers, still concentrates economic power in a handful of firms, still enables mass surveillance, and still corrodes shared epistemic foundations. The argument for institutional design does not depend on the singularity arriving. It depends on the changes already underway.
The adversarial point has genuine force: if capabilities plateau, resources spent on alignment research for superintelligent systems were misallocated. They should have been spent on labor market policy, antitrust enforcement, privacy regulation, and democratic governance of actually existing AI. This criticism reinforces rather than undermines the collection’s central argument. It means the institutional response is even more urgent than the singularity scenarios suggest, because the harms are arriving on the current capability curve, and the excuse that “everything will change when AGI arrives” is the primary obstacle to addressing them.
Key tension: Whether or not the singularity arrives, the institutional response to current AI capabilities is already overdue. The stasis scenario removes the excuse for delay. It does not remove the need for action.