Essay I

On the End of Work as We Know It

What happens when AI breaks the link between production and prosperity



I.

In the third week of February 2026, the Bureau of Labor Statistics revised its estimate of American job growth for the prior year.1 The initial reports had suggested 584,000 new payroll positions, a modest number that nonetheless indicated a labor market still functioning, still absorbing workers, still doing what labor markets are supposed to do. The revision cut that figure to 181,000. The economy had created fewer than a third of the jobs originally reported.

The revision would have been alarming on its own. What made it significant was what accompanied it: the economy had continued to expand. Fourth-quarter GDP came in at 3.7 percent.2 Output was solid. Corporate earnings were strong. The stock market had delivered a third consecutive year of double-digit gains. By every measure that tracks production, the American economy was healthy. By the measure that tracks whether that production requires human beings, something had changed.

Erik Brynjolfsson, the Stanford economist who has studied the relationship between technology and productivity for three decades, called it the moment the fog lifted. His analysis of the revised data showed U.S. productivity growth at roughly 2.7 percent for 2025, nearly double the 1.4 percent annual average of the prior decade.3 Jason Furman, who had been skeptical that AI was showing up in aggregate economic data, examined the same numbers and reversed his position. The current productivity cycle, measured from the fourth quarter of 2019 through the fourth quarter of 2025, was now the second strongest since 1973, trailing only the dot-com era.4

The significance was not the productivity number itself. Productivity growth is generally good. More output per hour of labor is, in the abstract, how living standards improve. The significance was the mechanism: output had risen while labor input had fallen. The economy had learned to produce more with fewer people. Brynjolfsson framed this through what economists call the J-curve hypothesis. General-purpose technologies suppress measured productivity during an initial investment phase, when firms are spending heavily on the technology but have not yet reorganized their operations around it. Eventually the curve turns upward as the investments begin to pay off. His argument was that the United States had entered the harvest phase.

Not everyone agreed. Torsten Slok, the chief economist at Apollo, looked at the same period and saw no clear macro signal at all.5 Employment data, productivity data, inflation data: none of it showed the kind of structural break that would indicate AI was transforming the economy. A survey of thousands of C-suite executives across the United States, Britain, Germany, and Australia, published by the National Bureau of Economic Research, found that nearly 90 percent reported AI had had no impact on workplace employment over the preceding three years.6 Martha Gimbel at the Yale Budget Lab, using Current Population Survey data, found no significant differences in occupational churn or unemployment duration for AI-exposed workers through November 2025.7 The skeptics had a point that went beyond pessimism: the payroll revision could reflect immigration enforcement (the National Foundation for American Policy documented a decline of 881,000 foreign-born workers since January 20258), a correction from pandemic-era overhiring, or measurement artifacts in the establishment survey. There was also the interest rate cycle. The near-zero rates of 2020-2021 had fueled a hiring binge across technology and professional services, adding headcount that was never sustainable at normalized capital costs. When rates rose, firms pruned. Much of the “missing” employment in the 2025 revision may have been redundant headcount from that bubble being shed through capital discipline rather than machine substitution.9 Disentangling these effects from AI-specific displacement is, at this stage, difficult in ways that the data cannot yet resolve. Brynjolfsson himself framed the AI attribution as suggestive rather than definitive. The causal link between AI adoption and the aggregate productivity data remained contested even among economists who agreed on the numbers.

Both assessments were defensible. They were also, in a way that matters for everything that follows, simultaneously true. At the aggregate level, the economy had not yet tipped. The displacement was not showing up in headline unemployment or in the broad occupational statistics that policymakers rely on. At the leading edge, it was already visible. The question was which signal to weight. The answer depended on whether one believed aggregate data would eventually reflect what the micro-level evidence already showed, or whether the micro-level findings would remain localized. The argument of this essay is that the former is more likely, for reasons the subsequent sections develop. But it is an argument, not a settled fact, and the reader should hold that distinction through what follows.

II.

The leading edge was visible in the data on young workers.

In August 2025, Brynjolfsson and his co-authors published what would become the most cited empirical study of AI’s labor market effects. Using high-frequency payroll records from ADP, the largest payroll software firm in the country, covering millions of American workers, they documented a 13 percent relative decline in employment for early-career workers in occupations with high AI exposure.10 The effect was concentrated among workers aged 22 to 25. It was statistically significant beginning in 2024, using firm-time fixed effects that controlled for broader company-level disruptions. Older, more experienced workers in the same occupations saw their employment hold steady or grow.

A Stanford Digital Economy Lab study confirmed the pattern in a specific sector: employment for software developers aged 22 to 25 had declined nearly 20 percent from its late-2022 peak by July 2025.11 Entry-level tech hiring dropped 25 percent year-over-year in 2024,12 and a survey by the Cengage Group found that only 30 percent of 2025 college graduates had secured full-time employment in their field.13 The unemployment rate for 20-to-24-year-olds with bachelor’s degrees reached 9.7 percent by September, converging with the rate for those holding only high school diplomas.14 The college wage premium, one of the most stable relationships in labor economics, was compressing.

The pattern extended beyond the United States. At the Indian Institute of Information Technology, Design and Manufacturing, fewer than 25 percent of the graduating class of 2026 had secured job offers by December 2025.15 A tech recruiter in Dubai reported that off-the-shelf technical hiring, which had accounted for 90 percent of placements a few years earlier, had dropped below 5 percent.16 In the United Kingdom, graduate-level tech roles fell 46 percent in a single year.17 The World Economic Forum documented a 29 percent year-over-year decline in entry-level openings globally.18

What made the entry-level collapse structurally significant, as opposed to merely painful for the individuals experiencing it, was the mechanism driving it. Companies were not eliminating junior positions because the economy was contracting. They were eliminating them because AI had made the work those positions performed available at lower cost and higher speed. The economic logic was clear: when an AI agent can generate an SQL query, summarize a legal brief, or debug a code block at near-zero marginal cost, the business case for paying a junior employee $70,000 to do the same work collapses, even if that employee might eventually develop into a senior contributor. IBM’s chief human resources officer acknowledged as much in February 2026 when she announced that the company would triple its entry-level hiring while conceding that AI was indeed automating junior work.19 The company had recognized that eliminating the bottom rung of the ladder meant there would be no one to fill the middle rungs a decade later. Korn Ferry reported that 37 percent of organizations planned to replace early-career roles with AI.20 Forrester’s 2026 predictions found that 55 percent of employers who had already done so regretted it.21

The regret was instructive. Many of those layoffs had been made for AI capabilities that did not yet exist. The pattern, as the later sections of this essay will show, was rarely AI replacing workers cleanly. It was AI creating the justification for restructuring that served other objectives.

In February 2026, Brynjolfsson published a companion study through the National Bureau of Economic Research that extended the squeeze to the opposite end of the labor market. Analyzing the relationship between minimum wage increases and industrial robot adoption, he and his co-authors found that rising labor costs at the bottom of the wage spectrum accelerated the deployment of machines on factory floors.22 His summary described a labor market being compressed from both directions: AI encroaching from the top, where it automated cognitive work, and robotics advancing from the bottom, where it replaced manual tasks as soon as the cost of automation dropped below the cost of the worker. Two distinct technologies. Two distinct segments of the workforce. Two distinct mechanisms. Operating at the same time.

The common assumption that blue-collar and manual work is safe from AI disruption vastly underestimates how AI and robotics compound each other. AI does not merely coexist with robotics. It accelerates robotics. Machine vision, navigation, dexterity, and adaptive behavior are all AI problems, and they are improving on AI timescales. Each advance in foundation models makes robots cheaper to train, faster to deploy, and more capable in unstructured environments. Boston Dynamics began commercial production of its Atlas humanoid in January 2026.23 Chinese manufacturers were producing functional humanoids at a fraction of Western costs, with manufacturing costs declining 40 percent year-over-year.24 At production scale, amortized over a five-year operating lifespan, the effective hourly cost of a humanoid robot falls to three to five dollars, below the minimum wage in every U.S. state.25 The jobs most likely to survive longest are those where the cost of building a robot to do the work still exceeds the cost of paying a human. Dishwashing. Agricultural picking. Home cleaning. Caregiving in cramped apartments. The jobs that survive are the ones that are too cheap, too irregular, or too physically constrained to justify the capital expenditure of a machine. The worst jobs are the most durable. The comfortable, well-paid, cognitively demanding positions that an entire generation was educated to fill are, by the economics of it, the ones most efficiently replaced.
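The three-to-five-dollar figure is straightforward amortization arithmetic. The sketch below reproduces it with round numbers; the unit cost, utilization, and upkeep values are illustrative assumptions, not figures from the cited reports.

```python
# Effective hourly cost of a humanoid robot, amortized over its lifespan.
# Every input here is an illustrative assumption, not a sourced figure.
unit_cost = 80_000            # purchase price in dollars (assumed)
lifespan_years = 5
hours_per_year = 16 * 365     # two shifts a day, every day (assumed utilization)
annual_upkeep = 8_000         # energy and maintenance per year (assumed)

total_hours = lifespan_years * hours_per_year              # 29,200 hours
total_cost = unit_cost + lifespan_years * annual_upkeep    # $120,000
hourly = total_cost / total_hours
print(f"${hourly:.2f} per hour")                           # $4.11 per hour
```

At these assumptions the robot lands at roughly four dollars an hour, and even doubling the purchase price keeps it under seven. The comparison a firm actually runs is this number against a loaded human wage.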

III.

The aggregate statistics that reassured policymakers concealed a structural shift that was already legible in the national accounts. In the third quarter of 2025, the Bureau of Labor Statistics reported that the labor share of income, the portion of the economy’s output that goes to workers as wages and salaries, fell to 53.8 percent. It was the lowest figure since the BLS began recording the statistic in 1947.26

The decline had been underway for decades. In 1980, the labor share stood at 58 percent. By 2020, it had drifted to approximately 56 percent. The average for the 2020s was 55.6 percent. Then the decline accelerated. Each percentage point represented roughly $300 billion in national income that had shifted from workers’ paychecks to corporate balance sheets. Fortune 500 companies posted a record $1.87 trillion in profits in 2024. The corporate profit share of national income rose from 7 percent in 1980 to 11.7 percent.27 The numbers told a simple story: the economy was producing more, and workers were receiving a smaller share of what it produced.
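The dollar magnitudes here are simple share arithmetic. A minimal sketch, assuming a round $30 trillion aggregate income base (an illustrative figure, not a BLS number):

```python
# Dollars implied by the labor-share decline, against an assumed income base.
income_base = 30e12    # aggregate income in dollars (assumed round number)
share_1980 = 0.58      # labor share in 1980
share_2025 = 0.538     # labor share, Q3 2025

per_point = 0.01 * income_base                         # one percentage point
total_shift = (share_1980 - share_2025) * income_base  # cumulative shift
print(f"${per_point / 1e9:.0f} billion per point")
print(f"${total_shift / 1e12:.2f} trillion shifted since 1980")
```

The roughly $300 billion per percentage point matches only if the base is about $30 trillion; the sketch makes that dependency explicit.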

The European data pointed in the same direction. A study published in the European Economic Review, examining AI patenting across European regions, found that for every doubling of regional AI innovation, the labor share declined by 0.5 to 1.6 percent.28 The effect was driven primarily by wage compression among medium- and high-skilled workers. The researchers were precise about the mechanism: AI was not merely raising efficiency. It was reallocating the gains from innovation, amplifying the share of income that flowed to capital.

A paper by David Autor, published in early 2026, identified the paradox at the center of this shift. Using data from twelve industrialized countries, Autor and his co-authors demonstrated that automation which reduces the labor share can, under specific conditions, increase average wages.29 The mechanism is technical but the implication is not: when the labor share sits above the level that maximizes wages (because there is too much labor relative to capital), shifting production toward capital raises the marginal product of the remaining workers, which raises their pay. Average wages go up. The labor share goes down. Both things happen at once.
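The mechanism can be made concrete with a toy economy. The Cobb-Douglas functional form and all parameter values below are my own illustrative assumptions, not Autor's model; the point is only that the two movements can coexist.

```python
# Toy illustration of the Autor result: a capital-biased shift can raise
# the average wage while the labor share falls. Cobb-Douglas is an assumed
# functional form; the parameters are made up for the sketch.
def economy(K, L, alpha):
    Y = K**alpha * L**(1 - alpha)   # output
    wage = (1 - alpha) * Y / L      # labor paid its marginal product
    return Y, wage, 1 - alpha       # labor share is 1 - alpha here

_, w_before, s_before = economy(K=100, L=100, alpha=0.40)  # before automation
_, w_after,  s_after  = economy(K=150, L=100, alpha=0.50)  # more capital, more automated

print(w_after > w_before, s_after < s_before)   # True True
```

With more capital and a more capital-intensive technology, the average wage rises from 0.60 to roughly 0.61 while the labor share falls from 60 to 50 percent: both things at once.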

Autor was careful to note what the paradox did not resolve. “A fall in the labor share that increases average wages may nevertheless raise inequality and reduce wages in some sectors,” the paper stated. “Moreover, a falling labor share necessarily means increased income and wealth inequality due to the highly uneven distribution of capital ownership.”

This was the analytical key to the economy of early 2026. GDP was growing. Productivity was surging. Average wages were not collapsing. A policymaker looking at the headline numbers would see an economy performing well. The structure beneath those headlines was fracturing along a fault line that averages could not detect. The top of the distribution was pulling away. The bottom was sinking. The middle was thinning. And the mechanism driving all three movements was the same: the gains from AI-driven productivity were accruing to the owners of capital, because capital was doing an increasing share of the work.

The Federal Reserve’s distributional financial accounts made the fracture visible. By the third quarter of 2025, the wealthiest 1 percent of American households held a record 32 percent of total net worth. The bottom 50 percent held 2.5 percent.30 Mark Zandi, the chief economist at Moody’s Analytics, calculated that the top 10 percent of households by income were responsible for approximately 49 percent of all consumer spending, up from 44.6 percent in 2019.31 The top 40 percent controlled 85 percent of the nation’s wealth and accounted for 60 percent of consumer spending.32 Their consumption patterns were increasingly driven by stock market performance rather than wage income, which meant that the health of the economy depended on the continued appreciation of assets that most Americans did not own.

Zandi described the resulting structure in terms that would recur throughout the year: “It doesn’t feel like the economy’s perched on a strong foundation. It’s perched on a few poles that are sticking up. If one of those poles gets knocked out, then the whole economy gets knocked down.”

The poles were narrow. Consumer spending, which accounts for more than two-thirds of GDP, was driven mostly by the highest earners. The stock market, which had delivered three years of double-digit gains, was dominated by a handful of technology companies: the Magnificent Seven accounted for 32.6 percent of the S&P 500 by February 2026, up from 12.5 percent a decade earlier.33 The top ten companies represented roughly 39 percent of total market capitalization, exceeding the concentration peak of the dot-com bubble.34 AI firms had captured 61 percent of all global venture capital in 2025, according to the OECD,35 and the hundred firms that dominated global AI research and development accounted for 40 percent of worldwide corporate R&D spending, according to UNCTAD.36

For the bottom of the distribution, the picture was different. Lower-income households were relying on debt to maintain consumption. Delinquency rates on auto loans were rising. Food pantry visits were increasing. Buy-now-pay-later services were surging in popularity among those who could not afford purchases outright. Companies that served lower-income consumers were cutting prices to stimulate demand. Deflation was not the goal. Their customers could no longer afford their products at existing prices.37 At the same time, luxury segments were booming. Airlines were racing to expand first-class offerings. Ralph Lauren became a cultural touchstone, trending at record levels during the 2025 holiday season while social media filled with tutorials on how to achieve the look on a budget, a small cultural signal of a large economic reality: aspiration was alive, purchasing power was not.38

The economists had a term for this. They called it the K-shaped economy, after the letter whose two arms diverge from a single point: one rising, one falling. The term had been circulating since the COVID-era recovery, when asset owners rebounded quickly while wage workers struggled. By 2026, the K had sharpened. The divergence was no longer a pandemic artifact. It was structural.

The structural shift has a gendered dimension that aggregate statistics, organized by industry rather than occupation, tend to obscure. Anthropic’s own labor market research found that workers in the most AI-exposed occupations are 16 percentage points more likely to be female than those in the zero-exposure group.39 The roles with the highest theoretical exposure to large language models (administrative support, HR coordination, educational administration, healthcare scheduling, paralegal work, and middle management) are disproportionately held by women. These are organizational roles that require judgment, communication, and contextual awareness, precisely the capabilities that language models have recently become competent at performing.

And the shift extends well beyond wealthy nations. India’s IT services industry employs over five million people directly and supports roughly twelve million more.40 The Philippines’ business process outsourcing sector, which accounts for approximately 7 percent of GDP and employs 1.3 million workers, is structurally exposed to the same automation. Africa, home to 18 percent of the world’s population, accounts for less than 1 percent of global data center capacity and an even smaller fraction of AI research output.41 The United Nations Development Programme warned in December 2025 that AI could reverse decades of narrowing development inequality between nations. A hundred and eighteen nations, mostly in the Global South, were absent from global AI governance discussions entirely.42 The mechanism by which billions of people climbed out of poverty over the previous three decades, offering cognitive labor to foreign firms at lower cost, is the mechanism that AI most directly undermines.

IV.

The individual dimensions of this transformation (labor displacement, capital concentration, financial fragility, creative extraction, the neo-feudal trajectory) are typically discussed as separate risks, each with its own literature, its own policy prescriptions, its own constituency of concerned experts. Seen in isolation, each looks manageable. Entry-level hiring is down, but retraining programs can be expanded. Wealth inequality is growing, but progressive taxation can redistribute. The labor share is falling, but average wages are still rising. The reflex is to treat each symptom and move on.

The reflex is wrong. These are not separate problems. They are stages of a single process, and the process has a direction.

It begins with the task. AI automates a cognitive task, or performs it faster and cheaper than the human who previously did it. The firm that adopts the AI sees a productivity gain. Output rises per hour of labor. This is the micro-level finding that dozens of studies have now confirmed: a 15 percent improvement in customer service resolution,43 an 80 percent reduction in task completion time in real-world AI assistant usage,44 a quadrupling of productivity growth in AI-exposed industries according to PwC’s barometer.45 The task-level gains are large and real. They are not in dispute.

The question is where the gain goes.

In a competitive labor market with strong worker bargaining power, the gain would be shared: some captured by workers as higher wages or shorter hours, some retained by firms, some passed to consumers as lower prices. This is what happened, eventually, in previous technological transitions. The word “eventually” carries weight. The British Industrial Revolution produced approximately 60 to 80 years of what economic historians call Engels’ Pause, in which productivity gains accrued almost entirely to capital owners while workers’ living conditions stagnated or declined.46 Three generations lived and died in immiseration before institutional pressure redirected the gains broadly.

The current labor market does not resemble one with strong worker bargaining power, and the competitive dynamic has a feature that previous transitions lacked: AI can, for an expanding set of tasks, substitute for the worker entirely. The firm becomes less dependent on the worker with each iteration of the technology. The productivity gain that is not shared flows to capital. Corporate profits rise. Stock prices rise. The owners of capital see their net worth increase. They spend more, which props up GDP. The economy continues to grow. The K splits wider. This is not speculation. It is a description of what the data already shows: the labor share at a 77-year low, corporate profits at record highs, the top 10 percent driving half of consumer spending, the stock market concentrated in seven companies, the entry-level pipeline drying up.

The strongest counter-argument deserves engagement on its own terms. A substantial body of economic research holds that task-level substitution and occupation-level displacement are different phenomena, and that the historical record consistently shows the former without the latter. A February 2025 study published through the National Bureau of Economic Research, using a framework that measured AI adoption directly at the firm level, found that despite strong task-level substitution, overall employment effects were modest, because reduced demand in exposed occupations was offset by productivity-driven increases in labor demand at AI-adopting firms.47 Brookings published a review in March 2026 titled “Research on AI and the Labor Market Is Still in the First Inning,” cautioning that results are sensitive to which AI exposure measure is chosen, that AI exposure could be correlated with other factors (pandemic-era overhiring, remote work suitability, tariff exposure, immigrant labor reliance), and that even strong findings should be read skeptically given how early the evidence is.48 The Federal Reserve’s Governor Barr, in a May 2025 speech, invoked the standard economic rejoinder to automation fears: the “lump of labor fallacy,” the presumption that there is a fixed amount of work to be done, so if machines do it, humans will not.49 New technologies have eliminated occupations for two centuries without rendering humans obsolete. This is the position of mainstream economics, and three years of rapid AI adoption with no aggregate labor market disruption is itself evidence in its favor.

The essay takes a different position, for two reasons. First, the task-level evidence is building in a specific direction. Brynjolfsson’s entry-level findings, the convergence of college and high-school unemployment, the global contraction in junior hiring: these are concentrated in the cohorts and occupations that serve as the economy’s intake mechanism. The aggregate may be stable, but the pipeline is thinning. Second, the labor share data represents something the “lump of labor” rebuttal does not address. Even if employment holds, the distribution of income is shifting from labor to capital in ways that the standard adjustment story, in which displaced workers move to new occupations at comparable wages, cannot explain. The question is whether the economy is experiencing normal technological adjustment or the early stages of a structural transformation in which the relationship between production and human labor is changing in kind. The data does not yet settle the question. The precautionary logic is that by the time the data does settle it, the window for institutional response may have narrowed.

V.

Now add the second-order dynamics. Economists will cite the Jevons Paradox, the historical pattern in which making a resource cheaper through efficiency gains increases total demand enough to offset the savings. When ATMs made cash dispensing cheap, banks opened more branches. When spreadsheets automated arithmetic, demand for financial analysis exploded. The logic is sound and the historical record is clear.

The logic was sound because of a structural feature of every previous automation wave that is easy to overlook in retrospect: prior technologies were narrow. They automated specific tasks within specific contexts while leaving the underlying human skills intact and portable. The power loom conquered weaving, but not the manual dexterity, coordination, and visual attention that weaving required, and those abilities found new applications in factories, workshops, and trades. When computers eliminated routine clerical calculation, they did not eliminate the judgment, communication, and contextual reasoning that defined professional work; they merely freed workers to concentrate on those higher-order functions, and demand for them grew. This is precisely why the Jevons dynamic held: cheaper production of a service made human judgment within that service more valuable, and the workers displaced from the automated portion had somewhere to go. Each wave of automation was also, by the same logic, skill-biased in a predictable direction: routine and middle-skill tasks bore the heaviest burden, while non-routine cognitive work remained largely protected. The escape route was reliable enough that economists stopped questioning whether it would remain open.

Generative AI has closed it. Unlike every prior automation wave, large language models show their highest exposure among high-wage, high-skill, non-routine cognitive occupations, the precise category that absorbed workers displaced from everything that came before. Eloundou and colleagues, writing in Science in 2024, found that roughly 80 percent of the U.S. workforce has at least 10 percent of their tasks affected by LLMs, and that higher-income jobs show greater exposure than lower-income ones, the inverse of what prior automation research consistently found.50 The IMF’s 2024 analysis of generative AI and global labor markets reached the same conclusion, noting explicitly that unlike previous automation waves, which concentrated displacement among middle-skilled workers, generative AI extends its reach into non-routine cognitive work across writing, analysis, legal reasoning, coordination, and communication simultaneously.51 The Brookings Institution, analyzing task exposure across more than a thousand occupations, observed that the industries now most exposed to generative AI were ranked at the bottom of automation risk just a few years ago.52

The escape route was never guaranteed by some natural law; it was a structural artifact of how narrow previous technologies happened to be. Generative AI is not narrow. When the same capability (language, reasoning, synthesis, communication) is under pressure across legal work, financial analysis, software development, administrative coordination, and content creation at the same time, the workers displaced from any one of those fields cannot seek shelter in the others. The Jevons Paradox applies to the service. It no longer applies to the worker.

It does not apply in the way it previously did. The paradox worked in prior transitions because cheaper production of a service made the human judgment at its center more valuable. ATMs freed tellers for relationship banking. Spreadsheets freed accountants for strategic analysis. Demand for the service increased, and demand for the human at the center of the service increased with it. AI breaks that coupling. When legal discovery becomes 90 percent cheaper through AI, law firms will do more discovery. They will do it with fewer lawyers, because the AI performs the cognitive work (reading documents, identifying relevance, flagging patterns) that previously required a human professional. Demand for the intelligence-intensive service increases. Demand for human labor to provide it does not follow.

A fair objection: this has been said about every general-purpose technology, and the new jobs that eventually emerged were in categories that contemporaries could not foresee. “Social media manager” was unimaginable in 1995. “Prompt engineer” did not exist in 2020 and was already becoming obsolete by 2025. Over a 10-to-20-year horizon, it is entirely possible that AI creates demand for human roles that do not yet have names, and that the Jevons dynamic reasserts itself at a higher level of abstraction. The World Economic Forum’s projection of a net 78 million new roles by 2030 reflects this possibility.53 The argument here is about the transition period, which may last a decade or more, during which the old jobs disappear faster than the new ones materialize, and during which the people losing the old jobs are not the same people gaining the new ones. That transition period is where the structural damage occurs, and it is the period for which policy must be designed.

The strongest version of this objection has a name: recursive demand. The theory holds that AI creates such a dense and complex information environment that it generates its own demand for human arbitration. More AI-generated legal analysis means more need for human “verification architects” to assess it. More AI-generated code means more need for human “bias arbitrators” to audit it. More AI-generated content means more need for human editors to determine what is trustworthy. The demand for human judgment, in this framing, does not decline. It shifts upward, to the meta-level of evaluating AI output rather than producing the output directly.54 The theory is elegant and may prove partially correct. Its limitation is that the verification work it describes is itself the next target of automation. The BCG study discussed later in this essay documents what happens when humans are asked to verify AI output at machine speed: cognitive overload, decision fatigue, rising error rates. The economic pressure is to automate the verification, which means using AI to check AI. Recursive demand generates recursive automation. The meta-level jobs that the theory predicts are real, but they carry the same structural vulnerability as the jobs they replace: they are cognitive tasks performed through digital interfaces, which is exactly the category that AI automates most efficiently.

There is a related fallacy that deserves naming. Much of the public discourse around AI and employment is framed around the question of when AI will reach human-level general intelligence, as if that were the threshold at which displacement begins. It is not. Displacement does not require artificial general intelligence. It requires only that AI be cheaper than the human it replaces, for the task at hand, at a quality level the market will accept. The quality level the market will accept turns out to be lower than most people assume. Customer support is the proof case. Over the past several years, companies across industries have replaced human support agents with chatbots and AI systems that perform measurably worse by most service quality metrics. Resolution rates are lower. Satisfaction scores decline. The companies are aware of the decline. They maintain the systems because the cost savings are large enough to absorb the quality loss. The service has been degraded, and the degradation has become the new baseline. The same logic applies across a wide range of cognitive work. Imperfect humans will be replaced by imperfect AI, because imperfect AI is cheaper. The relevant comparison is never “AI versus an expert at their best.” It is “AI at five dollars an hour versus a human at fifty dollars an hour, both making mistakes, in an organization that has decided which mistakes it can live with.” By the time AI reaches the quality level of the average human worker in a given role, the economic argument for keeping the human is already gone. The threshold is not perfection. It is price-adjusted adequacy.
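The price-adjusted-adequacy threshold can be stated as one line of arithmetic: cost per acceptably resolved task, not raw quality. The support-desk numbers below are hypothetical, chosen only to show the shape of the comparison.

```python
# Cost per acceptably resolved ticket: the metric a firm actually optimizes.
# All wages, throughput rates, and resolution rates are hypothetical.
def cost_per_resolution(hourly_cost, tickets_per_hour, resolution_rate):
    return hourly_cost / (tickets_per_hour * resolution_rate)

human = cost_per_resolution(hourly_cost=50.0, tickets_per_hour=10, resolution_rate=0.90)
ai    = cost_per_resolution(hourly_cost=5.0,  tickets_per_hour=30, resolution_rate=0.70)

print(f"human ${human:.2f}, AI ${ai:.2f}")   # human $5.56, AI $0.24
```

The AI is measurably worse on quality yet more than an order of magnitude cheaper per resolution, which is why the degraded service becomes the new baseline.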

The quality-degradation dynamic compounds with a pattern already visible in the data. Klarna replaced 700 customer service employees with AI, watched quality decline, and rehired humans.55 The reversal is often cited as evidence that AI replacement has limits. What is less often noted is that Klarna did not rehire the same workers at the same wages in the same country. It rehired offshore, at significantly lower cost. Forrester’s 2026 predictions found this pattern generalizing: half of the roles eliminated in AI-attributed layoffs would be quietly refilled, often in lower-wage countries.56 The AI served as the mechanism of the transition, the cover story for a restructuring that moved work across borders. This is not a coincidence. The fleet commander model described later in this essay, in which a worker directs AI agents rather than collaborating with human colleagues, makes the worker’s physical location irrelevant. If the job is issuing prompts, reviewing outputs, and making judgment calls through a screen, it can be done from Bangalore or Manila as easily as from Boston. AI and offshoring are not separate forces. They compound each other. AI reduces the number of humans needed, and the humans still needed can be sourced from wherever labor is cheapest. The remaining roles that require a domestic presence, the ones too physical or too regulatory-bound to offshore, are the low-wage roles described earlier in this essay: the dishwashers, the caregivers, the agricultural workers. The portrait that emerges is troubling. Western economies may be evolving toward a structure in which the domestic population serves primarily as consumers, with production handled by a combination of AI systems, robots, and low-wage offshore workers. The demand side of the economy depends on purchasing power that the supply side no longer generates domestically. This is the demand paradox in its most concrete form, and it connects directly to the fork that the next section describes.

There is a credible school of thought that views all of this as familiar, manageable, and likely to resolve through the same market adjustments that resolved previous technological disruptions. Its strongest version runs as follows. Three years of rapid AI adoption have produced no macro-level displacement. The unemployment rate has not spiked. Wages have not collapsed. The entry-level effects, while real, are concentrated in a small number of occupations and may reflect cyclical overhiring corrections rather than structural displacement. AI’s task-level productivity gains are large, but economy-wide effects are modest because most firms use the technology for narrow applications, and the complementary investments required to reorganize production around AI (management restructuring, workflow redesign, cultural change) take years to complete. The historical base rate for technological unemployment is zero: every previous wave of automation, including waves that seemed far more disruptive to contemporaries than this one, produced short-term dislocation followed by long-term absorption. The burden of proof is on those claiming this time is different. This argument deserves respect precisely because the empirical record supports it so far. What it does not address is what happens if the base rate breaks: by the time the data confirms the break, the institutional response will already be years behind.

VI.

The preceding sections describe the economy from the outside: who gets hired, who gets displaced, where the money flows. There is a transformation underway inside the workplace itself, among the people who still have jobs, that the macro data does not capture and that changes the nature of work in ways that may prove as consequential as the displacement.

The productivity narrative assumes that AI makes workers more productive and that the gains scale smoothly. A BCG and University of California Riverside study published in Harvard Business Review in March 2026, surveying nearly 1,500 full-time U.S. employees, identified a biological constraint on that assumption. The researchers called it “brain fry.”57 AI generates dense, sophisticated output in seconds. Verifying that output requires the same deep cognitive engagement that producing it would have required, at a pace set by the machine rather than the human. The study found 14 percent more mental effort, 12 percent more mental fatigue, and 19 percent more information overload among workers under high AI oversight conditions. Productivity peaked at two to three AI tools and declined beyond four. Decision fatigue rose 33 percent. Major errors rose 39 percent. The bottleneck is not the AI. It is the human being asked to verify what the AI produces at the speed at which the AI produces it.

This bottleneck has a logical consequence that the productivity optimists rarely follow to its conclusion. If human oversight is the constraint on AI productivity, and if that constraint is biological rather than organizational, then the pressure is to remove the human from the oversight loop. AI overseeing AI is the plausible next step. It has not yet been documented at scale, but the economic incentive is overwhelming: the entire value proposition of AI-driven productivity depends on operating at machine speed, and the human in the loop is the component that cannot keep up. Current regulatory frameworks, including the EU AI Act, explicitly require human oversight for high-risk AI systems. The economic pressure to minimize that oversight will collide with these legal requirements, and the collision will be a defining policy battleground of the next decade. The most likely resolution is not the elimination of the human overseer but the formalization of a role that is human in name only: a single person sitting atop multiple layers of AI systems, checking boxes on a compliance form, providing the legal fiction of human oversight without the cognitive capacity to evaluate what the systems beneath them are actually doing. The regulatory requirement is satisfied. The biological constraint is circumvented. The human’s role in the productive process becomes ceremonial.

Inside the firms that are adopting AI aggressively, the experience of work is already changing in a specific and under-discussed way. The worker’s primary relationship shifts from human colleagues to AI systems. A marketing manager who once collaborated with a team of copywriters, designers, and analysts now directs a fleet of AI agents that produce drafts, generate images, analyze data, and assemble reports. The manager’s role becomes that of a fleet commander: setting objectives, reviewing outputs, making judgment calls about quality. The work is more productive by any efficiency measure. It is also more isolated. The daily interactions that once constituted the social fabric of work, the hallway conversations, the collaborative problem-solving, the mentorship that happened informally over shared tasks, are replaced by a human-to-machine interface. The fleet commander is efficient and alone.

The isolation compounds. Fewer workers means fewer managers. Fewer managers means fewer directors. The organizational pyramid that once required a broad base of junior employees to support each ascending layer of management thins from the bottom up. The elimination of entry-level roles does not simply remove those workers. It removes the management roles that supervised them, the coordination roles that connected them, and the administrative roles that supported the whole structure. The effect trickles upward through the hierarchy. Each layer of the organization becomes harder to justify when the layer below it has been automated or removed.

There is a gap in enthusiasm about this transformation that tracks almost perfectly with organizational altitude. Leadership, particularly C-suite executives and senior management, tends to be excited about AI. They see productivity gains, cost reduction, competitive advantage, the ability to do more with fewer people. Surveys consistently find that executives view AI as a growth tool. The workers being asked to use AI view it differently. Most are comfortable with tools that help them summarize documents, automate tedious workflows, and reduce busywork. They do not want to be replaced, and they do not want their jobs restructured around machine-speed verification of AI output. The gap between what leadership wants from AI (transformation) and what workers want from AI (assistance) is large, and it is being resolved in favor of leadership, because leadership makes the decisions. The parallel to the return-to-office mandates that proliferated in 2024 and 2025 is instructive. Remote work was popular with employees, measurably effective by most productivity studies, and strongly opposed by senior management for reasons that were largely about control and culture rather than output. Management won. Employees returned. The pattern repeats with AI: the transformation is imposed top-down, over the objections of the people most affected, by the people least affected.

VII.

The dynamics described above converge on a question that economics, as a discipline, is poorly equipped to answer. It concerns what happens when the primary mechanism for allocating resources to people stops working.

Wages are not just a price for labor. They are the means by which a market economy distributes purchasing power to its population. People work. They receive wages. They spend those wages on goods and services. The spending creates demand. The demand sustains production. The production creates jobs. The cycle is so fundamental to modern economic life that it is easy to mistake it for a natural law rather than a contingent institutional arrangement. It is contingent. It depends on a specific condition: that production requires enough human labor to distribute enough income to sustain enough demand to keep the system running.

That condition is weakening.

It is not gone. The economy still employs 160 million Americans. Most adults still earn most of their income from wages. The aggregate unemployment rate remains within what economists consider full employment. The condition is weakening at the margin, which is where structural shifts begin. The margin, right now, is the entry-level worker who cannot get hired. The mid-career professional whose role is being “augmented” in ways that reduce headcount on the next project. The team of ten that was twelve last year and will be eight next year, each member doing more with AI assistance, until the question becomes whether the team is necessary at all. The hiring freeze that never quite ends. The restructuring that eliminates a layer. The contractor who replaces the employee. The offshore team that replaces the contractor. The AI agent that replaces the offshore team.

Oxford Economics projected that job gains in 2026 would average fewer than 40,000 per month, a figure so low that it represents effective stagnation in the labor market.58 The firm’s chief U.S. economist drew a parallel with the early 2000s, when the economy emerged from a period of overhiring while technological advances drove a productivity surge. “This leaves the economy vulnerable to shocks,” he wrote, “because the labor market is the main firewall against a recession.” The firewall was thinning.

The question that the thinning labor market forces is not whether AI will create new jobs. It will. The World Economic Forum projects a net gain of 78 million roles by 2030. Goldman Sachs anticipates that AI-driven productivity will eventually add 7 percent to global GDP.59 These projections may well prove correct. The question is whether the people displaced from the old economy can access the jobs created by the new one, and whether those jobs will be distributed broadly enough, geographically and demographically, to sustain the wage-based distribution mechanism on which the entire system depends.

The disagreement between Acemoglu and the bullish forecasters is not primarily about AI’s potential. It is about methodology. Acemoglu’s task-level approach distinguishes between “easy-to-learn tasks,” where AI excels because outcomes are measurable, feedback is fast, and training data is abundant, and “hard-to-learn tasks,” where the work is context-dependent, outcomes lack objective metrics, and the data needed to train an AI system does not exist in structured form. His central finding, that roughly 5 percent of tasks are candidates for cost-effective full automation in the near term, rests on this distinction. The bullish forecasts (Goldman Sachs’s 7 percent GDP uplift, McKinsey’s $17 to $26 trillion in annual value) assume that task-level automation translates smoothly into economy-wide productivity gains, which requires assumptions about adoption speed, integration costs, and complementary organizational investment that historical evidence from previous technology transitions does not support. Acemoglu’s estimate is conservative and may prove too low if capabilities improve faster than his model assumes. The bullish estimates are optimistic and may prove too high if the integration costs are as stubborn as they have been for every previous general-purpose technology. The collection’s argument does not depend on which timeline is correct. Even Acemoglu’s conservative estimate implies significant displacement concentrated in specific sectors and populations, producing the K-shaped dynamics the essay describes. If the bullish forecasts are right, the displacement is faster and the institutional response is more urgent. If Acemoglu is right, the window for building institutions is wider, but the institutions are still needed, because the structural dynamics of concentration, epistemic degradation, and governance capture operate at any scale of displacement. The difference between the forecasts is a question of urgency, not direction.

The evidence on this point is not encouraging. The new jobs that AI creates cluster in high-skill, high-education roles concentrated in a small number of metropolitan areas. Brookings found that 30 metro areas accounted for two-thirds of all AI-related job postings in the United States.60 The retraining system that is supposed to bridge the gap between displaced workers and new opportunities has a dismal empirical track record. The National JTPA Study, one of the few true randomized controlled trials of federally funded job training, found no statistically significant improvement in employment or earnings for participants.61 A national evaluation of the Workforce Investment Act found no positive impact within 30 months. Roughly 40 percent of current participants in the Workforce Innovation and Opportunity Act are trained into roles paying less than $25,000 per year. The United States spends approximately 0.1 percent of GDP on active labor market policies, second-to-last among OECD nations; comparable countries spend up to five times that amount.62

This is the fork in the road. The economy is producing more. Workers are capturing less of what it produces. The primary distribution mechanism, wages earned through labor, is weakening. Something has to replace it, or the system loses the demand that sustains it. What replaces wages is the question on which the entire economic trajectory pivots.

Three paths lead from this point. They are not predictions. They are structural possibilities, each internally consistent, each supported by a portion of the current evidence. What determines which path the economy follows is not technology. It is politics.

VIII.

The first path requires no decisions, no legislation, no political will. It is what happens if current trends continue under current policies with current institutional arrangements. It is the path of inertia.

Under this trajectory, AI-driven productivity gains continue to flow predominantly to the owners of capital. Corporate profits rise. Stock valuations climb. The top 10 percent of households, whose consumption is increasingly tied to asset prices rather than wage income, continue to spend. GDP growth remains positive. Headline economic indicators look acceptable. The labor share continues its decline. The entry-level pipeline continues to thin.

The K widens gradually enough that no single quarter produces a crisis. This is essential to understanding why the default path is the default: it does not announce itself. There is no moment when the economy breaks. There is a slow, persistent divergence that is always explicable in the near term. This quarter, it was tariff uncertainty. Last quarter, it was interest rates. Next quarter, it will be something else. The structural cause, the redistribution of economic output from labor to capital, is always present and never quite the headline.

Lower-income households, whose consumption depends on wages rather than asset returns, compensate with debt. Auto loan delinquencies rise. Credit card balances grow. Buy-now-pay-later usage expands. These are the early-warning signals, visible in the data by late 2025, that the bottom half of the economy is consuming beyond its income. The consumption is real, which is why GDP holds up. The financing is unsustainable, which is why it cannot hold up indefinitely.

Companies that serve lower-income consumers begin cutting prices, a reaction to weakening purchasing power rather than a competitive strategy. Homebuilders discount. Consumer packaged goods companies reduce prices after two years of post-pandemic increases. Fast-food chains lean on value meals. These are rational firm-level responses to the customers in front of them. In aggregate, they describe an economy in which production capacity is expanding while the ability of most households to consume what the economy produces is contracting. The technical term is a demand shortfall. The colloquial description is an economy that works for the people who own it and slowly stops working for everyone else.

A fair objection to the K-shaped framing: if AI also drives down the cost of goods and services, falling wages matter less because living costs fall too. The objection has force for digital services and mass-manufactured goods, where AI-driven cost curves are steepest. It has much less force for the categories that dominate household budgets and determine quality of life: housing (driven by land costs, zoning, and construction labor), healthcare (driven by regulation, liability, and physical care that AI has not yet displaced), childcare, and education. These costs are anchored in physical scarcity, regulatory structures, and human labor that deflates on a slower curve than software. And it has no force at all for positional goods, the desirable neighborhoods, elite schools, and status markers whose value derives from scarcity and cannot be produced in greater quantity without destroying what makes them valuable.63 AI-driven deflation reaches the categories that matter least to quality of life first (entertainment, information, consumer electronics) and the categories that matter most (housing, healthcare, education, social standing) last, if at all. The K-shape is felt most acutely in the gap between what deflates and what doesn’t.

The entry-level collapse compounds over time. The 2025 graduates who could not find professional employment do not simply wait and try again next year. They take lower-skill work, accept underemployment, or leave the labor force. By 2027 and 2028, the graduates from those years face the same compressed market, now with an additional cohort of underemployed predecessors competing for the same diminishing pool of positions. The pipeline problem identified by IBM’s HR chief, that eliminating junior roles today creates a middle-management shortage in a decade, spreads across industries. The labor market develops a missing generation: workers who were never given the chance to develop the skills that only come from doing the work that AI can now do faster and cheaper.

Geographically, the concentration deepens. The 30 metropolitan areas that already account for two-thirds of AI job postings attract more talent, more investment, more infrastructure. Secondary cities and rural areas, which had built economic development strategies around the kinds of cognitive work that AI automates (call centers, back-office processing, data management), lose their economic rationale. The geographic pattern mirrors the income pattern: a few nodes of intense activity surrounded by a widening periphery of stagnation.

Internationally, the dynamic is harsher. The countries that built development strategies around providing cognitive labor at lower cost, India’s IT sector, the Philippines’ BPO industry, Kenya’s emerging tech services, find that cost arbitrage is no longer sufficient when AI can perform the same tasks at a fraction of the price. The mechanism by which these nations integrated into the global economy and lifted hundreds of millions out of poverty is precisely the mechanism that AI undermines. The UNDP’s warning about a “next great divergence” between nations begins to materialize as a slow withdrawal of the foreign investment and outsourcing contracts on which these economies depend.

There is a concentration dynamic that operates at the level of firms themselves, and an older parallel illuminates its structure. In the 1990s and 2000s, American and European manufacturers sent production to China with enthusiasm, capturing immediate cost savings, expanding margins, and reporting strong quarterly earnings. The long-term consequences were different. The companies that offshored most aggressively often found that they had transferred their manufacturing knowledge to suppliers who became competitors. The cost savings attracted every competitor to the same low-cost production base, compressing margins back toward where they started. The short-term gain in profitability came at the cost of long-term competitive differentiation.

AI adoption by companies that rely on a handful of external AI labs carries a structurally similar risk. As companies replace human workers with fleets of AI agents provided by a small number of frontier labs, the question of competitive moat becomes urgent. If a consulting firm’s advantage once rested on the quality of its people, and those people are replaced by AI workflows built on the same foundation models available to every competitor, what differentiates one firm from another? Custom prompts and proprietary workflows are the current answer, but they are thin advantages. The labs that provide the underlying models have full visibility into how their systems are being used and every incentive to productize the most successful patterns. The workflow a company spends months building, the lab can deploy as a feature in the next release. The supplier gains the knowledge and eventually competes with the customer. The firms that move most aggressively to replace human workers with AI may discover that they have transferred their core competency to the AI provider, and the AI provider has no obligation to keep that competency exclusive. The negotiating power shifts accordingly. When every worker in a company is an AI agent provided by one of three labs, those labs hold extraordinary power over pricing, terms, and access. The company’s dependence on the lab becomes structural, and the lab’s ability to raise prices, restrict features, or favor competitors becomes an existential risk to the company. A few AI labs become worryingly dominant in the productive economy, occupying a position that no individual company, and very few governments, has ever held: the provider of the cognitive infrastructure on which the entire commercial system depends.

Hadfield and Koh formalize the underlying economic logic. When a firm delegates decisions to an AI agent, it faces a principal-agent problem that is structurally worse than the human version: the agent’s decision-making process is opaque, its preferences are inferred rather than stated, and the firm cannot write a complete contract specifying behavior in every contingency.64 The incomplete contracting literature in economics has long recognized that such gaps create power for whichever party controls the residual decisions. In an economy where AI agents handle an increasing share of commercial transactions, the labs that build and operate those agents hold the residual control rights. The firms that deploy the agents become, in economic terms, the weaker party in an incomplete contract with the provider of the intelligence their operations depend on.

The political economy of the default path is self-reinforcing. As wealth concentrates, the political influence of wealth concentrates with it. The policy changes that could redirect the trajectory (progressive taxation of capital gains, expanded social insurance, public ownership of AI infrastructure, antitrust enforcement against platform monopolies) require legislative action. Legislative action is influenced by the same concentrated wealth that benefits from the status quo. Each year that the K widens without intervention makes future intervention less likely, because the constituency that would block intervention grows more powerful with each increment of concentration.

This is the neo-feudal trajectory described in the research literature, though it arrives through the accumulated weight of decisions not made, without any dramatic seizure of power. Daron Acemoglu, the 2024 Nobel laureate, has noted that the benefits of the British Industrial Revolution took more than a century to reach workers, and that diffusion was never automatic.65 It was won through political struggle. On the default path, the struggle does not happen, or happens too late, or happens and loses.

The fragility of the structure is its most dangerous feature. Mark Zandi’s metaphor of the economy perched on a few poles is not a rhetorical flourish. It is a structural description. Consumer spending depends on asset prices. Asset prices depend on expectations of AI-driven earnings growth. Those expectations are priced into equity valuations that assume the massive capital expenditures of the hyperscalers (Google alone plans $175 to $185 billion in 2026)66 will generate proportional returns. If the returns disappoint, or if a geopolitical shock disrupts the supply chains on which AI infrastructure depends, the asset prices that sustain the top arm of the K correct downward. The bottom arm, already debt-financed and fragile, has no buffer. The few poles holding up the economy are knocked out simultaneously.

The default path does not end in collapse. It ends in something more durable and in some ways more troubling: a stable equilibrium in which a productive, growing economy coexists with a large population that has no meaningful claim on what the economy produces. GDP is healthy. Markets are strong. And a growing share of the population is economically superfluous, sustained by whatever combination of transfer payments, informal work, and family support they can assemble. The medieval parallel is imprecise but structurally instructive: the lord’s estate was productive. The serf’s claim on that productivity was determined by the lord’s discretion, not by the serf’s bargaining power.

IX.

The second path requires political action on a scale and timeline that democratic systems have occasionally achieved, though not often and not easily. It is the social democratic response: using the tools of the state to redirect AI-driven productivity gains toward broad-based prosperity before the default trajectory locks in.

The policy toolkit is not mysterious. Economists across the political spectrum have identified its components. Progressive taxation of AI-generated wealth, levied directly on the value that AI creates, since income taxes miss capital gains and corporate retained earnings. Expanded social insurance that is portable across employers and decoupled from specific jobs, recognizing that the employment relationship itself is becoming less stable. Active labor market policies funded at levels comparable to peer nations: the Nordic countries spend five times what the United States spends as a share of GDP on helping displaced workers find new footing. Public ownership stakes in frontier AI systems, so that the productivity gains accrue in part to the public that provided the research funding, the infrastructure, and the regulatory environment that made the technology possible. Antitrust enforcement to prevent the platform monopolies documented elsewhere in this collection from capturing permanent rents.

The strongest historical analogy is the post-World War II social contract in Western democracies: a period in which governments, responding to the twin crises of depression and war, constructed institutional frameworks that distributed the gains of a productivity surge broadly enough to sustain mass consumption and political stability for three decades. The New Deal, the GI Bill, the expansion of public universities, the construction of the interstate highway system, progressive taxation with top marginal rates above 90 percent, and the implicit bargain between capital and organized labor all served the same structural function: they ensured that the people doing the consuming had enough income to consume. The mechanisms were various. The principle was singular: a productive economy that does not distribute its gains broadly enough will eventually destroy its own demand base.

The managed transition faces three challenges that previous social contracts did not.

The first is speed. The Industrial Revolution unfolded over generations. The digital revolution compressed the timeline to decades. The AI transition is compressing it further. Capability is compounding on timescales of months. New model releases arrive quarterly. Agentic systems that can perform multi-step workflows autonomously are being deployed in 2026. The policy cycle operates on timescales of years. Legislation takes months to draft, more months to debate, and years to implement. The mismatch between the speed of technological change and the speed of institutional response is not a temporary friction. It is a structural feature of the current political system.

The second challenge is the retraining problem. The managed transition assumes that displaced workers can be redirected into new roles through education and training. The evidence, as noted above, is weak. The few rigorous evaluations of federally funded training programs have produced discouraging results. The deeper problem is structural: Georgetown’s Center for Security and Emerging Technology found that technical skills now become outdated in fewer than five years on average,67 which means that a worker retrained in 2026 for an AI-adjacent role may find that role automated by 2031. The retraining system is designed for an economy in which skills have long shelf lives and career ladders are stable. Neither condition holds.

The third challenge is political. The managed transition requires redistribution from those who are benefiting from the current trajectory to those who are not. The beneficiaries of the current trajectory are accumulating both wealth and political influence at an accelerating rate. Each year that passes without redistributive policy makes the constituency for redistribution weaker, because the concentrated class gains more capacity to block it. This is not a conspiracy theory. It is the observable operation of political influence in a system where campaign spending, lobbying, and regulatory capture are legal and well-documented. The feudal parallel again: it is difficult to legislate the redistribution of wealth when the wealthy write the legislation.

There is a fourth challenge that the policy literature tends to acknowledge without resolving: the fiscal base problem. The managed transition requires government revenue. Government revenue in every advanced economy depends heavily on taxing labor income, through payroll taxes, income taxes, and the consumption taxes that labor income funds. If AI erodes the labor share of income, it erodes the fiscal base that funds the response. The tools of redistribution become harder to pay for precisely as the need for redistribution grows. This is the sovereign debt paradox of the AI transition: the governments that need to act most urgently are the ones whose fiscal capacity is being most directly undermined by the transformation they need to address.68 The candidate replacements for labor-based taxation are identifiable: taxes on compute usage, levies on AI-generated revenue, broadened value-added taxes that capture machine-produced output, financial transaction taxes on the high-frequency trading that AI enables, and direct taxation of corporate profits at rates that reflect the substitution of capital for labor. Each has design problems. Compute taxes risk driving AI development offshore. Revenue-based levies are difficult to attribute when AI operates as an embedded input rather than a discrete product. VAT broadening is regressive unless offset by transfers. None of these has been implemented at the scale required, and the political economy of introducing new tax categories in systems that struggle to reform existing ones is discouraging. The fiscal mechanism for the managed transition is not a detail to be worked out later. It is a structural prerequisite, and it does not yet exist.

There is a fifth constraint that compounds all four. No country can pursue managed transition in isolation. A government that raises taxes on AI capital, mandates profit-sharing with displaced workers, or imposes transparency requirements on frontier labs creates an immediate arbitrage opportunity. Labs and capital relocate to jurisdictions that impose fewer costs. Lancieri, Edelson, and Bechtold formalize this dynamic in a 2025 analysis of AI regulatory competition: governments and companies play a multilevel game in which regulatory arbitrage, corporate capture of low-regulation regimes, and competitive fragmentation interact to produce outcomes that no single jurisdiction intended.69 The four equilibria they identify (multiple local regimes, international harmonization, unilateral imposition, and outright fragmentation) each carry different welfare implications, and the current trajectory favors fragmentation. The pattern is familiar. Corporate tax rates have fallen globally for four decades as countries compete for investment. Environmental regulation faces persistent pressure from jurisdictions willing to absorb pollution for economic growth. Labor standards erode when production can move to where protections are weakest. AI adds a new dimension: the product being regulated is also the tool that makes regulatory arbitrage faster, cheaper, and harder to detect. The EU AI Act is already buckling under this pressure, with implementation delays driven partly by fear that compliance costs will drive AI development to the deregulated American market. A managed transition that works in one country while the rest of the world races to the bottom is not a transition. It is a competitive disadvantage with a conscience.

There is a sixth development worth noting, because it changes the shape of the conversation even if it does not resolve the constraints above. In April 2026, OpenAI published a thirteen-page document titled Industrial Policy for the Intelligence Age: Ideas to Keep People First, proposing a Public Wealth Fund that would give every citizen a stake in AI-driven growth, tax base modernization to shift government revenue from payroll toward capital-based sources, 32-hour workweek pilots, portable benefits decoupled from employment, and adaptive safety nets with automatic triggers tied to displacement indicators.70 The document’s existence is itself evidence for this essay’s thesis: the economic displacement is real enough that the company most aggressively building the displacing technology now feels compelled to propose mitigation at civilizational scale. The conversation has moved beyond “we’ll need UBI” as a throwaway line. The proposals are substantive, and several are compatible with the institutional responses the managed transition would require. But the document redistributes money without redistributing control over the systems that produce it, and its proposed governance mechanism places nongovernmental institutions (including the labs themselves) in the position of piloting approaches that governments should then “reinforce,” which places the regulated entity in the design seat and the government in the ratification seat. This is the governance dynamic Essay 3 documents, appearing inside a proposal for its own remedy.

There are reasons for qualified optimism. In February 2026, a British investment minister told the Financial Times that the government was weighing the introduction of a universal basic income as a direct response to AI-driven displacement, funded in part by taxes on technology companies.71 The political framing was notable: a growth-oriented industrial policy argument, presented by a minister appointed by a Labour government, with none of the defensive posture typically associated with welfare proposals. Over 150 mayors in the United States had joined the Mayors for a Guaranteed Income coalition by late 2025, normalizing the concept of unconditional cash transfers as a standard tool in the municipal policy toolkit.72 Stanford’s Basic Income Lab documented more than 160 pilot programs conducted worldwide over four decades, with generally positive results on poverty reduction, health outcomes, and educational attainment.73

The managed transition is possible. It has precedent. It requires, however, a speed of political response, a quality of institutional design, and a tolerance for redistribution that the evidence of the last four decades does not strongly support. It is the path for which the case is strongest in theory and weakest in demonstrated political will.

X.

The third path is the most distant from current reality and the one that the architects of the technology most frequently invoke. It imagines a world in which AI-driven productivity becomes so abundant and so cheap that the fundamental goods required for a high standard of living (food, shelter, energy, healthcare, education, and information) are available to all at minimal cost. The marginal cost of production for these essentials approaches zero. The need for wages as a distribution mechanism disappears, because the things that wages were meant to purchase are effectively free.

The production-side evidence for this trajectory is not trivial. Solar energy costs have fallen 99 percent since 1976.74 Battery storage costs have declined over 97 percent since 1991. AI inference costs are dropping roughly tenfold per year for equivalent performance.75 Boston Dynamics began commercial production of its Atlas humanoid in January 2026, with manufacturing costs declining 40 percent year-over-year. Chinese manufacturers were producing functional humanoids at a fraction of Western costs. At scale-production costs, amortized over a five-year operating lifespan, the effective hourly cost of a humanoid robot falls to three to five dollars, below the minimum wage in every U.S. state. If these cost curves continue, the physical capacity to provide material abundance exists within a generation. That “if” carries weight. Cost curves for energy technologies have historically followed learning curves that flatten as the technology matures and the easy gains are captured. Robotics faces materials constraints (rare earths, precision manufacturing) that may impose floors well above the projections. AI inference costs depend on continued scaling of compute, which faces its own physical and economic limits.76 The post-scarcity production case is plausible on the evidence of the last two decades. It is not guaranteed by extrapolation.
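The three-to-five-dollar figure is simple amortization arithmetic. A minimal sketch, assuming the 20-hour-per-day duty cycle from the cited projections; the $110,000 to $180,000 total-cost-of-ownership range below (purchase plus maintenance and energy) is an illustrative assumption, not a figure from the reports:

```python
def effective_hourly_cost(total_cost_usd, years=5, hours_per_day=20):
    """Amortize total cost of ownership over the robot's operating life."""
    operating_hours = years * 365 * hours_per_day  # 36,500 hours at 20 h/day
    return total_cost_usd / operating_hours

# Illustrative lifetime cost range: purchase price plus maintenance and energy.
low = effective_hourly_cost(110_000)   # ~$3.01/hour
high = effective_hourly_cost(180_000)  # ~$4.93/hour
```

Note that at a $5,900 or $16,000 purchase price, hardware alone amortizes to well under a dollar an hour; the three-to-five-dollar range holds only if lifetime operating costs run far above the cheapest hardware.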

The mechanism that makes a post-work society conceivable, rather than merely a thought experiment, is the economic logic of comprehensive deflation. If AI and robotics drive the cost of essentials (food production, energy, housing construction, healthcare delivery, education, transportation) low enough, the amount of income a person needs to live well drops with it. At some point, the gap between what people need and what modest public provision or shared infrastructure can cover becomes small enough to close. Wages stop being the primary mechanism by which prosperity is distributed, because the things wages were meant to purchase have become cheap enough to provide through other channels. Work stops being necessary for survival, which means its absence stops being a catastrophe. This is the bridge between “AI displaces labor” and “that displacement could be a liberation rather than a disaster,” and it depends entirely on whether the deflation reaches the categories that matter: not just digital services and consumer goods, but housing, food, energy, and healthcare. The cost curve evidence above suggests the production capacity is arriving. Whether it translates into actual deflation of living costs, or whether institutional bottlenecks (land use regulation, healthcare administration, intellectual property regimes, energy grid constraints) prevent the cost compression from reaching consumers, is the question on which the entire third path depends.77

The institutional barriers are at least as significant as the technical ones.

An economist named Agisilaos Papadogiannis, writing on SSRN in December 2025, developed the most rigorous version of the counter-argument.78 Even when AI delivers what he called “technological abundance,” a cost compression and capability expansion that makes production nearly free, it does not eliminate scarcity. It reorganizes scarcity by shifting which constraints bind. His five-layer framework identified persistent bottlenecks in physical resources and space, infrastructure (energy grids, data centers, fabrication capacity), organizational capabilities that require high-trust human knowledge, institutions that govern access, and jurisdictions that supply enforcement capacity. AI may drive the cost of producing a kilowatt-hour toward zero while the land, permits, and grid connections required to build the solar farm remain scarce. AI may make medical knowledge free while the regulatory approvals, clinical trials, and manufacturing capacity required to deliver treatments remain bottlenecked. The production layer is the one where cost curves are most dramatic. The institutional layers, which determine whether production translates into access, are the ones where progress is slowest.

Fred Hirsch, the economist who coined the concept of “positional goods” in 1976, identified a deeper constraint.79 As material needs are satisfied, economic competition shifts to goods whose value derives from their scarcity: elite education, desirable locations, prestigious occupations, social status. These goods cannot be produced in greater quantity without destroying what makes them valuable. Not everyone can live on the beachfront. Not everyone can attend the most selective university. Not everyone can hold a leadership position. Material abundance, even if achieved, does not resolve the competition for relative standing, which is by definition zero-sum. Post-scarcity for material goods may coexist with intensified scarcity for everything else that humans find meaningful.

There is also the question of what work means in a society that no longer requires it. The relationship between employment and identity runs deeper than economics. Work provides structure, social connection, a sense of contribution, a public identity. The loss of these things, documented extensively in the literature on unemployment, long-term disability, and forced retirement, produces measurable declines in mental health, physical health, and life expectancy. A society that solves the material problem while leaving the meaning problem unaddressed has not achieved post-scarcity in any sense that matters to the humans living in it. It has created a comfortable purposelessness, which the psychological evidence suggests is not comfortable at all.

The strongest version of the post-work argument bypasses the wage distribution problem entirely: rather than giving people money to buy services, AI provides the services directly. Free AI-generated healthcare, education, legal assistance, and information, with the costs borne by whoever owns and operates the systems. This is the Universal Basic Services case, and it has real force for digital services where the marginal cost is already near zero. It has much less force for physical services. A world where AI provides free medical advice is not the same as a world where everyone receives free surgery, because surgery requires operating rooms, trained hands, sterilized instruments, and recovery beds, none of which have near-zero marginal cost.

The deepest objection to the post-work vision comes from the political economy of who proposes it. A paper published in Frontiers in Artificial Intelligence in 2025 examined the advocacy of UBI by technology executives through the lens of Bourdieu’s concept of symbolic violence.80 The author argued that when the people who are building the systems that displace workers propose a stipend for the displaced, the gesture serves a legitimizing function: it secures public acceptance for the continued expansion of AI by framing the expansion as ultimately benign. The UBI becomes what Bourdieu would call a mechanism of symbolic violence, a structure that perpetuates inequality while presenting itself as a remedy for it. The division it reinforces is tripartite: those who own AI systems, those who are skilled enough to use them, and those who are merely recipients of whatever benefits the owners choose to distribute. The third group’s economic existence depends entirely on the continued generosity of the first.

Richard Freeman, the Harvard economist, captured the structural point in a phrase: “Who owns the robots rules the world.”81 If citizens do not own AI capital, then universal basic income is not a floor beneath the free market. It is feudal tribute, paid at the discretion of the lord, revocable when the lord’s interests change. The post-work society, in this framing, arrives only if the means of intelligence become broadly public, whether through broad ownership of AI-producing firms, public utility models for AI infrastructure, or some mechanism yet to be designed that distributes the ownership of productive AI capital as broadly as the twentieth century distributed access to education, home ownership, and retirement savings. Without such a mechanism, the post-work society is a post-work aristocracy with a pacified, dependent populace.

The demand problem makes the political problem concrete. If workers are displaced or downgraded at sufficient scale, the purchasing power that sustains consumer demand contracts. Corporate revenues decline. Tax bases shrink. The fiscal capacity to fund any redistributive program, UBI included, weakens at precisely the moment it is most needed. This is the structural paradox of the post-work transition: the technology that creates the need for redistribution simultaneously undermines the economic base that funds redistribution. Universal basic income is the most commonly proposed solution. The political leadership required to implement it at scale, in any country, does not currently exist. The fiscal math is daunting: Andrew Yang’s original $1,000-per-month proposal would cost $2.8 to $3.0 trillion per year, more than the entire discretionary federal budget.82 The political coalition required to pass it would need to bridge constituencies that agree on almost nothing else. And in a global economy, national UBI creates competitive distortions: a country that funds universal income through taxes on AI-producing firms creates an incentive for those firms to relocate to jurisdictions that do not.
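The $2.8 to $3.0 trillion figure follows directly from the transfer size and the adult population. A quick check; the population range is a round assumption, since the eligible count depends on the age cutoff and residency rules:

```python
def annual_ubi_cost(monthly_usd, recipients):
    """Gross annual cost of an unconditional monthly cash transfer."""
    return monthly_usd * 12 * recipients

# Roughly 235 to 250 million U.S. adults, depending on eligibility rules.
low = annual_ubi_cost(1_000, 235_000_000)   # $2.82 trillion
high = annual_ubi_cost(1_000, 250_000_000)  # $3.00 trillion
```

The gross figure overstates the net cost to the extent that a UBI replaces existing transfers or is clawed back through taxes, which is why published estimates vary far more than the arithmetic does.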

There is a version of the post-work society that is appealing on its own terms, but it requires honesty about how distant it is from current political reality. The closest historical analogue is not a modern welfare state. It is Athens in the fifth century BCE. Athenian democracy, with its rich culture of philosophy, art, theater, political participation, and civic life, was made possible by a productive base that freed citizens from manual and routine labor. That productive base was slavery. Approximately 80,000 to 100,000 enslaved people performed the agricultural, domestic, and commercial work that allowed the free citizenry to devote themselves to the pursuits that we now consider the pinnacle of human civilization.83 The uncomfortable parallel is this: AI and robotics could, in principle, serve the structural function that enslaved labor served in Athens, providing the productive foundation that frees humans for higher pursuits, without the moral catastrophe. The technology as substitute for the slave. If the productive base is automated, the material conditions for a society organized around learning, creativity, civic engagement, and shared living become achievable. The elements exist in fragmentary form: co-housing communities, maker spaces, the open-source movement, artists’ residencies, communal agricultural projects. They share a principle: that human flourishing can be organized around contribution and community rather than around wage labor. Scaling this from a lifestyle choice made by thousands to a societal model for hundreds of millions would require a radical shift in how societies value human life, one that decouples worth from productivity and identity from employment. No society has attempted this at scale. The Athenian model collapsed under its own contradictions (imperialism, exclusion, internal conflict). The kibbutzim that approximated it in the twentieth century eroded under market pressures within two generations.

The post-work society has no successful precedent at the scale required. It would require political leadership of a kind that no major democracy has produced in the current generation. The technology may be ready for it within decades. The politics, the culture, and the institutions are not. Acknowledging that gap is the beginning of honest engagement with the question.

XI.

The three paths diverge from a single variable: ownership.

The productivity gains are coming. This is the one point on which the optimists and pessimists, the techno-utopians and the labor economists, the venture capitalists and the union organizers all agree. AI will make the economy more productive. It will enable more output with less human labor. The question that separates the K-shaped dystopia from the broadly shared prosperity is not whether the gains will materialize. It is who will own the systems that produce them.

Under current ownership structures, the answer is clear. Five companies dominate frontier AI development. Their combined market capitalization exceeds the GDP of most nations. Their capital expenditure budgets, $142 billion in a single quarter at the end of 2025,84 dwarf the fiscal capacity of all but a handful of governments. The top 20 percent of American households own approximately 70 percent of financial assets.85 Capital ownership is more concentrated than income, and income is more concentrated than at any point in the past six decades. The default path, Branch A, follows directly from this ownership structure. The gains flow to the owners. The owners are few. The distribution fractures.

Branches B and C require a redistribution of ownership. Income redistribution alone is insufficient. This is the distinction that most policy discussions elide. Progressive taxation, expanded safety nets, and active labor market policies can moderate the K-shape. They are necessary. They are not sufficient. As long as AI capital remains privately held and narrowly owned, the productivity gains will accumulate to the same small set of balance sheets, and the redistributive apparatus will always be playing catch-up with a system that generates concentration faster than policy can correct it.

The historical precedent for broad ownership of foundational capital exists, though it is imperfect and incomplete. The GI Bill and the expansion of public universities distributed access to human capital. The Federal Housing Administration and the 30-year mortgage distributed access to real estate, the principal asset class for middle-class wealth (with deep racial inequities in who was included). Social Security and employer pension systems distributed retirement savings. The interstate highway system and the rural electrification program distributed infrastructure. Each of these was a mechanism for ensuring that a foundational economic input, whether it was education, housing, energy, or transportation, was broadly accessible rather than privately captured. None emerged naturally from the market. Each required political action, often contentious, often opposed by the same concentrated interests that benefited from the prior distribution.

The equivalent for AI is not yet legible in policy, though its outline is discernible. Public ownership stakes in frontier AI systems, justified by the taxpayer investment in basic research (DARPA, the National Science Foundation, university laboratories) that made the technology possible. Public AI infrastructure operated as a utility, with access priced at cost rather than at the market rate that maximizes shareholder returns. Data dividends that recognize the public’s contribution to the training data on which AI systems depend. Sovereign wealth funds, modeled on Norway’s Government Pension Fund, that invest public capital in the AI-producing sector and distribute returns to citizens. Employee ownership structures that give workers a claim on the AI capital that is displacing their labor. These are not utopian proposals. Variants of each exist in functioning economies. They require political choices about the ownership of a technology that is, at present, owned by a very small number of very wealthy institutions.

The analogy to twentieth-century ownership mechanisms is imperfect in ways that matter. A house is a durable asset. A trained AI model depreciates within months as newer models supersede it. The value in AI is less in any particular model than in the organizational capacity to build and deploy the next one: the talent, the compute infrastructure, the data pipelines, the distribution channels. “Owning a share of OpenAI” is structurally different from owning a home or a pension, because the asset’s value depends on continued operational decisions by a small number of people rather than on a stable underlying good. This means that ownership-based solutions for AI cannot simply replicate the FHA or Social Security model. They must account for the speed at which the asset changes form. A sovereign wealth fund investing in the AI sector, or a public utility model for inference infrastructure, may prove more durable than direct equity stakes in individual companies. The design problem is real. It does not, however, invalidate the principle. The question of who captures the returns from AI-driven productivity is a question about the structure of ownership, and it will be answered one way or another. The default answer is: the people who already own the most.

The timeline is the binding constraint. Every year that passes under the default trajectory deepens the K, strengthens the political position of the concentrated ownership class, weakens the institutional capacity of the state to intervene, and extends the lead of the few over the many. The window for redistributive action is not infinite and its closing will not be announced. Previous general-purpose technologies gave societies decades to organize responses. The loom, the steam engine, the assembly line, the personal computer: each unfolded over a generation or more, allowing political institutions to observe, debate, experiment, and eventually act. AI capability is compounding on timescales of months. The distance between “this technology is interesting” and “this technology has restructured the economy” may be shorter than the distance between the introduction of a bill and its passage into law.

Daron Acemoglu, reflecting on the century-long lag between the Industrial Revolution’s productivity gains and their broad distribution, has noted that the lag was not an inevitability. It was a political failure.86 The technology to improve working-class lives existed decades before it was deployed for that purpose. What was missing was the institutional will to redirect the gains. The technology served those who controlled it until those who did not organized enough power to demand otherwise.

The same choice is in front of us. And there are signs, early and scattered, that some people are beginning to make it. A British government minister proposes taxing AI companies to fund universal income. Over 150 American mayors join a coalition to pilot guaranteed income programs. IBM reverses its entry-level hiring cuts after recognizing the pipeline it was destroying. The EU AI Act mandates human oversight of high-risk systems. Labor organizers at technology companies begin negotiating over AI deployment, not just wages. These are real movements. They exist. They matter.

They are also, measured against the scale and speed of the transformation this essay describes, radically insufficient. The UK proposal is a minister floating an idea in a newspaper interview. The mayoral coalition distributes small payments to a few thousand people. IBM’s reversal affects one company. The EU AI Act regulates deployment without addressing ownership. Labor organizing covers a fraction of the workforce at a handful of firms. Each of these is a correct impulse. None of them, individually or collectively, operates at the scale of the problem. The productivity-labor decoupling documented in this essay is structural, global, and accelerating. The policy responses are local, tentative, and slow. The gap between the speed of the transformation and the speed of the response is widening, and every month it widens, the default path becomes more entrenched and the alternatives become harder to reach.

The economy does not need most of its workers. That sentence is becoming true on a timeline measured in years. There is still time to decide what kind of society emerges on the other side. The tools exist: public ownership, progressive taxation of AI capital, active labor market investment at scale, antitrust enforcement, international coordination on AI governance, and institutional imagination that treats the current moment with the seriousness it demands. The question is whether any of this will happen fast enough. The window is open. It will not stay open. And the people who need to walk through it, the legislators, the regulators, the labor movements, the voters, the citizens who have not yet understood what is coming, are mostly still standing outside, watching the data accumulate, waiting for someone else to go first.

Notes

  1. Bureau of Labor Statistics, “Employment Situation Summary,” revised February 2026. Initial 2025 payroll estimate of 584,000 revised to 181,000.

  2. Bureau of Economic Analysis, GDP estimate, Q4 2025. Real GDP growth rate of 3.7 percent.

  3. Erik Brynjolfsson, “The AI Productivity Take-Off Is Finally Visible,” Fortune, February 15, 2026.

  4. Jason Furman, analysis of BLS revised data, February 2026. Nonfarm business sector labor productivity 2.2 percent above CBO’s pre-pandemic (January 2020) forecast. Annual growth: 2.8 percent over one year, 2.5 percent over two years, 2.2 percent over six years. Summarized in Alex Imas, “What Is the Impact of AI on Productivity?,” Substack, January 29, 2026 (updated March 2026).

  5. Torsten Slok, Apollo Chief Economist, commentary reported in Fortune, February 2026. “After three years with ChatGPT and still no signs of AI in the incoming data, it looks like AI will likely be labor enhancing in some sectors rather than labor replacing in all sectors.”

  6. Survey of C-suite executives across the U.S., U.K., Germany, and Australia, reported via Fortune, February 17, 2026. Nearly 90 percent reported AI had no impact on workplace employment over the preceding three years.

  7. Martha Gimbel et al., Yale Budget Lab, 2025. Current Population Survey analysis finding no significant differences in occupational churn or unemployment duration for AI-exposed workers through November 2025. See also Brookings, “Research on AI and the Labor Market Is Still in the First Inning,” March 2026.

  8. National Foundation for American Policy (NFAP), data reported via Fortune, January 2026. The number of foreign-born workers declined by 881,000 since January 2025.

  9. The post-pandemic interest rate cycle and its effects on tech/professional services hiring are documented across multiple sources. See Federal Reserve rate decisions 2022-2024; Challenger, Gray & Christmas layoff data showing 1.1 million announced job cuts January-October 2025 (reported in CNBC, December 8, 2025); Gusto senior economist Nich Tremper’s analysis of firms that “over-hired in the post-pandemic economy in 2021 and 2022” (CNBC, December 2025). The confounding of rate-driven headcount correction with AI-driven displacement is discussed in Brookings, “Research on AI and the Labor Market Is Still in the First Inning,” March 2026.

  10. Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence,” Stanford Digital Economy Lab, August 2025 (with 2026 companion note). Using ADP payroll records covering millions of workers.

  11. Stanford Digital Economy Study, reported in Stack Overflow blog, December 26, 2025. Employment for software developers aged 22-25 declined nearly 20 percent from late-2022 peak by July 2025.

  12. Stack Overflow, “AI vs Gen Z: How AI Has Changed the Career Pathway for Junior Developers,” December 26, 2025. Entry-level tech hiring decreased 25 percent year-over-year in 2024.

  13. Cengage Group survey, 2025, reported in CNBC, December 8, 2025. Only 30 percent of 2025 college graduates found full-time employment in their field. 76 percent of employers hired same or fewer entry-level employees in 2025 vs. 2024.

  14. Federal Reserve data, reported in CNBC, December 8, 2025. Unemployment rate for 20-to-24-year-olds with bachelor’s degrees reached 9.7 percent by September 2025.

  15. Rest of World, “AI Is Wiping Out Entry-Level Tech Jobs, Leaving Graduates Stranded,” December 19, 2025. Indian Institute of Information Technology, Design and Manufacturing: fewer than 25 percent of graduating class of 2026 had secured job offers.

  16. Vahid Haghzare, Silicon Valley Associates Recruitment, Dubai, quoted in Rest of World, December 19, 2025. Off-the-shelf technical hiring dropped from 90 percent of placements to below 5 percent.

  17. Rezi.ai, “The Crisis of Entry-Level Labor in the Age of AI (2024-2026),” January 15, 2026. UK tech graduate roles fell 46 percent in 2024, with projections for a further 53 percent drop by 2026.

  18. World Economic Forum, “The Rising Pressures for Gen Z in the Global Job Market,” November 2025. Entry-level openings down 29 percent year-over-year.

  19. Nickle LaMoreaux, IBM Chief Human Resources Officer, reported in Fortune, February 13, 2026. IBM tripling Gen Z hires after recognizing that reducing junior headcount risks creating a middle-management shortage.

  20. Korn Ferry, reported in Fortune, February 13, 2026. 37 percent of organizations plan to replace early career roles with AI.

  21. Forrester Research, Predictions 2026: The Future of Work, December 2025. 55 percent of employers report regretting AI-attributed layoffs. Predicts half of AI-attributed layoffs will be rehired offshore at lower salaries.

  22. Erik Brynjolfsson, J. Frank Li, Javier Miranda, Robert Seamans, et al., “Minimum Wage and Robot Adoption,” NBER Working Paper, February 2026. Reported in Fortune, March 4, 2026.

  23. Boston Dynamics, commercial production announcement, January 2026. Hyundai planning 30,000 units per year from a single factory by 2028.

  24. Goldman Sachs, humanoid robotics report. Manufacturing costs declining 40 percent year-over-year. Chinese manufacturer Unitree launched R1 humanoid at $5,900 (July 2025), G1 at $16,000, H1 at $90,000.

  25. Goldman Sachs and Morgan Stanley projections. At scale production costs over a five-year lifespan operating 20 hours per day, effective hourly cost of $3-5. Morgan Stanley projects over 1 billion humanoids in service by 2050.

  26. Bureau of Labor Statistics, Productivity and Costs report, Q3 2025. Labor share fell to 53.8 percent. Reported in Fortune, January 13, 2026.

  27. Fortune, January 13, 2026. Labor share down from 58 percent in 1980. Corporate profit share of national income rose from 7 percent to 11.7 percent. Fortune 500 profits: record $1.87 trillion in 2024.

  28. Antonio Minniti, Klaus Prettner, and Filippo Venturini, “AI Innovation and the Labor Share in European Regions,” European Economic Review 177 (2025). Also discussed in CEPR VoxEU column, 2025.

  29. David Autor et al., “Resolving the Automation Paradox: Falling Labor Share, Rising Wages,” arXiv:2601.06343, January 2026.

  30. Federal Reserve, Distributional Financial Accounts, Q3 2025. Reported in CNBC, January 30, 2026. Top 1 percent: record ~32 percent of net worth. Bottom 50 percent: 2.5 percent.

  31. Mark Zandi, Moody’s Analytics. Top 10 percent responsible for approximately 49 percent of consumer spending in Q2 2025, up from 44.6 percent in 2019. Reported in Fortune, November 7, 2025, and multiple subsequent sources.

  32. Morgan Stanley Wealth Management, “K-Shaped Economy: An Investor’s Guide,” 2025. Top 40 percent: ~85 percent of wealth, 60 percent of consumer spending.

  33. Magnificent Seven: 32.6 percent of S&P 500 as of February 2026, up from 12.5 percent a decade ago. The Motley Fool, February 2026.

  34. Columbia Threadneedle Investments, “The Rise of the Magnificent 7: Concentration Risk Versus Earnings Power,” September 2025. Top 10 companies: ~39 percent of market cap, above the dot-com bubble peak of 27 percent.

  35. OECD, February 2026 announcement. AI firms captured 61 percent of all global venture capital in 2025, with mega-deals above $1 billion representing roughly half of total AI investment value.

  36. UNCTAD, Technology and Innovation Report 2025. 100 firms account for 40 percent of global corporate R&D spending.

  37. CNN, “The K-Shaped Economy Reigned in 2025. It’s Not Going Away in 2026,” January 1, 2026. Companies cutting prices because customers could no longer afford products. Also Fortune, December 24, 2025, on corporate price cuts.

  38. Fortune, November 7, 2025. Ralph Lauren trending at record levels during 2025 holiday season. Social media tutorials on achieving the look on a budget.

  39. Anthropic, “Labor Market Impacts of AI,” March 2026. Workers in most AI-exposed occupations 16 percentage points more likely to be female than those in zero-exposure group.

  40. UNCTAD, Technology and Innovation Report 2025; NASSCOM industry data. India IT services: 5 million+ direct employees, ~12 million supported.

  41. UNDP and Center for Global Development data. Africa: less than 1 percent of global data center capacity. Fixed broadband costs: 1 percent of monthly income in wealthy nations, 31 percent in the poorest.

  42. UNDP, “The Next Great Divergence,” December 2025. 118 nations absent from global AI governance discussions. AI could reverse decades of narrowing development inequality.

  43. Erik Brynjolfsson, Danielle Li, and Lindsey Raymond, “Generative AI at Work,” Quarterly Journal of Economics 140 (May 2025): 889-942. 15 percent improvement in customer service resolution (issues resolved per hour).

  44. Anthropic, “Estimating AI Productivity Gains from Claude Conversations,” November 2025. 100,000 conversations sampled; AI reduces task completion time by approximately 80 percent.

  45. PwC, Global AI Jobs Barometer, June 2025. Productivity growth nearly quadrupled in AI-exposed industries (from 7 percent over 2018-2022 to 27 percent over 2018-2024). Least-exposed industries: 10 percent to 9 percent.

  46. Robert C. Allen, “Engels’ Pause: Technical Change, Capital Accumulation, and Inequality in the British Industrial Revolution,” Explorations in Economic History 46, no. 4 (2009): 418-435. Also discussed in Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (New York: PublicAffairs, 2023).

  47. Menaka Hampole, Dimitris Papanikolaou, Lawrence D.W. Schmidt, and Bryan Seegmiller, “Artificial Intelligence and the Labor Market,” NBER Working Paper 33509, February 2025.

  48. Brookings Institution, “Research on AI and the Labor Market Is Still in the First Inning,” March 10, 2026. Authors include Martha Gimbel and others from Yale Budget Lab and Brookings.

  49. Governor Michael S. Barr, “Artificial Intelligence and the Labor Market,” speech at the Federal Reserve, May 9, 2025. Published via Bank for International Settlements.

  50. Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock, “GPTs are GPTs: Labor Market Impact Potential of LLMs,” Science 384, no. 6702 (June 2024): 1306-1308. Also available open-access: arXiv:2303.10130. Higher-income jobs show greater LLM exposure than lower-income ones, the inverse of what prior automation research consistently found. Around 80 percent of the U.S. workforce has at least 10 percent of their tasks affected.

  51. Mauro Cazzaniga et al., “Gen-AI: Artificial Intelligence and the Future of Work,” IMF Staff Discussion Note SDN/2024/001, International Monetary Fund, January 2024. Unlike previous automation waves, which concentrated displacement among middle-skilled workers, generative AI extends into non-routine cognitive work across advanced economies, with approximately 60 percent of jobs in advanced economies exposed.

  52. Molly Kinder, Xavier de Souza Briggs, Mark Muro, and Sifan Liu, “Generative AI, the American Worker, and the Future of Work,” Brookings Institution, October 10, 2024. Industries now most exposed to generative AI were ranked at the bottom of automation risk just a few years ago. Generative AI excels at non-routine skills (programming, writing, creativity, communication, analysis) that prior automation could not reach.

  53. World Economic Forum, Future of Jobs Report 2025, January 2025. Projects net gain of 78 million roles by 2030. 40 percent of employers expect to reduce staff where AI can automate tasks.

  54. The recursive demand theory, the idea that AI-generated complexity creates demand for human verification and arbitration at the meta-level, is a variant of the broader “complementarity” argument in the automation literature. See Acemoglu and Restrepo, “Automation and New Tasks: How Technology Displaces and Reinstates Labor,” Journal of Economic Perspectives 33, no. 2 (Spring 2019): 3-30. The specific application to AI verification roles is discussed in the ICLE review, “AI, Productivity, and Labor Markets: A Review of the Empirical Evidence,” February 2026, which notes the emergence of “skill compression” as AI raises the floor of performance while potentially lowering the ceiling.

  55. Forrester Research, Predictions 2026, December 2025. Klarna replaced 700 customer service employees with AI; quality declined, customers revolted, and the company rehired humans (offshore at lower wages).

  56. Forrester Research, Predictions 2026. Predicts half of AI-attributed layoffs will be quietly rehired, often offshore at significantly lower salaries.

  57. BCG and University of California Riverside, “The Hidden Toll of AI on Workers,” Harvard Business Review, March 2026. Survey of 1,488 full-time U.S. employees. 14 percent more mental effort, 12 percent more mental fatigue, 19 percent more information overload under high AI oversight conditions. Productivity peaks at 2-3 tools, drops beyond 4. 33 percent more decision fatigue, 39 percent more major errors.

  58. Michael Pearce, Oxford Economics, reported in Fortune, February 13, 2026. GDP to expand 2.8 percent in 2026; job gains to average fewer than 40,000 per month.

  59. Goldman Sachs (Joseph Briggs and Sarah Dong). Full AI adoption could lead to 15 percent increase in U.S. labor productivity and 7 percent increase in global GDP. Reported in Fortune, March 14, 2025.

  60. Brookings Institution, “AI Seems Everywhere, but Regional Readiness Is Uneven,” 2025. 30 metro areas account for two-thirds of all AI-related job postings.

  61. Julian Jacobs, “AI, Labor Displacement, and the Limits of Worker Retraining,” Brookings Institution, May 2025. Reviews National JTPA Study (randomized controlled trial: no significant improvement), Workforce Investment Act national evaluation (no positive impact within 30 months), and WIOA (40 percent of participants trained into roles paying less than $25,000).

  62. Brookings Institution, “Should the Federal Government Spend More on Workforce Development?,” 2025. U.S. spends approximately 0.1 percent of GDP on active labor market policies, second-to-last among OECD nations.

  63. See Fred Hirsch, Social Limits to Growth (1976), discussed in Section VIII.

  64. Gillian K. Hadfield and Andrew Koh, “An Economy of AI Agents,” prepared for the NBER Handbook on the Economics of Transformative AI, arXiv:2509.01063, September 2025. The incomplete contracting framework applied to AI delegation draws on Hadfield-Menell and Hadfield, “Incomplete Contracting and AI Alignment,” Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 417-422. Both involve a principal who cannot fully specify desired behavior in advance, creating residual control rights that accrue to whichever party fills the gaps.

  65. Daron Acemoglu, lecture at the University of Zurich, January 2025. Also Daron Acemoglu and Simon Johnson, Power and Progress (2023). Benefits of Industrial Revolution took over 100 years to diffuse to workers.

  66. Google, Q4 2025 earnings report. Capital expenditure forecast for 2026: $175 billion to $185 billion. Hyperscaler capex hit $142 billion in Q4 2025.

  67. Georgetown University, Center for Security and Emerging Technology (CSET), “AI and the Future of Workforce Training.” Technical skills now become outdated in fewer than five years on average.

  68. The fiscal base problem for AI-era redistribution is discussed in general terms across several sources. On the declining labor share as a fiscal constraint: BLS data showing labor share at 53.8 percent (Q3 2025), down from 58 percent in 1980, reported in Fortune, January 13, 2026. On candidate replacement tax mechanisms: the UK’s Lord Stockwood proposed taxing technology companies to fund UBI (Financial Times, reported February 2, 2026); the Newsweek analysis of January 5, 2026 examined the fiscal math of UBI against current revenue structures. On compute taxes and data levies as potential mechanisms: European Parliament discussions on “robot taxes” and AI-specific levies are ongoing but no major economy has implemented them at scale.

  69. Filippo Lancieri, Laura Edelson, and Stefan Bechtold, “AI Regulation: Competition, Arbitrage & Regulatory Capture,” Theoretical Inquiries in Law 26, no. 1 (2025): 239-262. The paper models how governments competing to attract AI investment trade off regulatory arbitrage against regulatory fragmentation, with companies strategically sharing arbitrage rents with low-regulation jurisdictions to induce favorable regulatory environments.

  70. OpenAI, “Industrial Policy for the Intelligence Age: Ideas to Keep People First,” April 2026. The document proposes a Public Wealth Fund giving every citizen a stake in AI-driven growth, tax base modernization from payroll to capital-based revenues, 32-hour workweek pilots, portable benefits, adaptive safety nets with automatic triggers, auditing regimes, incident reporting, model-containment playbooks, and international information-sharing networks. The document acknowledges concentration risk (“There is also a risk that the economic gains concentrate within a small number of firms like OpenAI”) and frames its proposals as “intentionally early and exploratory.”

  71. Lord Jason Stockwood, UK Minister for Investment, interview with Financial Times, reported by Allwork.Space, February 2, 2026. Government weighing UBI as AI response, funded in part by taxes on tech companies.

  72. Mayors for a Guaranteed Income coalition. Over 150 mayors by late 2025. Reported in GovFacts, December 2025.

  73. Stanford Basic Income Lab. Over 160 UBI pilot programs conducted worldwide over four decades. Generally positive results on poverty, health, education; employment effects less clear. Reported in LSE Business Review, April 29, 2025.

  74. Our World in Data, “Solar PV Prices.” Solar energy costs down 99 percent since 1976. International Energy Agency, World Energy Outlook 2020: solar is the cheapest source of electricity in history.

  75. LLM-Stats.com, AI inference cost trends. Inference costs dropping approximately 10x per year for equivalent performance.

  76. On scaling limits: Toby Ord, “Scaling Paradox” (2025); Ilya Sutskever, NeurIPS 2024 (“pretraining as we know it will end”); MIT FutureTech, “Meek Models Shall Inherit the Earth,” arXiv preprint, July 2025, and MIT IDE, January 2026; Barclays Private Bank, “AI in 2026: Smarter, Not Bigger,” November 2025. Training costs have grown 2.4x per year since 2016; human-generated high-quality text may be exhausted between 2026 and 2032.

  77. See Papadogiannis, “Scarcity in an Age of AI Abundance” (2025), discussed below.

  78. Agisilaos Papadogiannis, “Scarcity in an Age of AI Abundance,” SSRN, December 2025.

  79. Fred Hirsch, Social Limits to Growth (Cambridge, MA: Harvard University Press, 1976). Recovered and distinguished from later dilutions in Economics & Philosophy (2025).

  80. Jean-Christophe Bélisle-Pipon, “AI, Universal Basic Income, and Power: Symbolic Violence in the Tech Elite’s Narrative,” Frontiers in Artificial Intelligence, 2025. Published in PMC, March 2025.

  81. Richard B. Freeman, “Who Owns the Robots Rules the World,” IZA World of Labor, 2015.

  82. Tax Foundation, costing of Andrew Yang’s $1,000/month UBI proposal: approximately $2.8 to $3.0 trillion per year (in 2019 dollars). Also Newsweek, January 5, 2026.

  83. Estimates of the enslaved population in classical Athens vary. The figure of 80,000 to 100,000 is drawn from standard histories including M.I. Finley, Ancient Slavery and Modern Ideology (1980), and Peter Hunt, Ancient Greek and Roman Slavery (2018).

  84. Hyperscaler capital expenditure, Q4 2025: $142 billion. Reported in Newsweek, January 5, 2026, and multiple financial sources.

  85. Federal Reserve distributional data; Cresset Capital, “The Twin Pillars of Fragility,” February 2026. Top 20 percent of U.S. households own approximately 70 percent of financial assets.

  86. Daron Acemoglu, lecture at the University of Zurich, January 2025. Also Acemoglu and Johnson, Power and Progress (2023). Diffusion of technological gains was never automatic; it was won through political struggle.