For most of modern history, there was a deal. You contributed your labor, and in return you received a share of what the economy produced. The share was never fair, but the link between work and prosperity was real enough that economic growth reliably meant more jobs, higher wages, and more people pulled into the middle class. Productivity went up, and living standards followed.
AI is breaking that link, though not in the way most people expect. The transformation isn’t mass layoffs and empty offices. It’s quieter. A company discovers that AI can handle 40 percent of what its customer service team does, so it stops hiring replacements when people leave. It absorbs the next expansion without adding headcount. A legal team that once needed twelve associates to prepare for discovery now needs four. A marketing department that employed a team of copywriters contracts with two editors who manage AI output. A consulting firm delivers the same analysis with half the junior analysts.
In each case, the work still gets done. The humans doing it are fewer, and the share of economic value flowing to labor shrinks. AI also makes some remaining workers dramatically more productive and more valuable, which is real, but the net effect on labor’s share of income points in one direction: down. The economy grows while employment doesn’t keep pace, and the productivity gains accrue to the owners of the systems rather than the workers those systems displaced.
None of this requires artificial general intelligence, or superintelligence, or whatever the latest frontier lab promises in its fundraising deck. Massive displacement doesn’t require a country of geniuses in a datacenter. It requires a system that performs at 70 percent of a human’s quality for 5 percent of the cost. At that ratio, every CFO in the world makes the same calculation, and they make it fast.

And the capabilities are moving faster than most people’s mental models of them. Two years ago, an AI system that could autonomously traverse a codebase, write and test its own code, and ship production software was science fiction. Today it’s a product you can subscribe to for twenty dollars a month. The institutions we build, the labor policies we design, the safety nets we fund all need to be calibrated to where capabilities are heading, not where they were when someone last checked. Anyone planning for the economy of 2030 based on the AI of 2025 is making the same error as someone in 2005 planning for a world without smartphones. The gap between “AI can do this in a demo” and “AI is doing this in production” used to be measured in years. Now it’s months, sometimes weeks, because software scales at the speed of distribution rather than the speed of manufacturing. Previous technology transitions gave workers decades to adapt. This one is giving them years.
The displacement isn’t limited to knowledge work, either. Robotics and AI are improving in a compounding loop: better AI writes better control software for robots, and robots operating in the physical world generate the training data that makes AI better at interacting with reality. The cost of capable humanoid robots has dropped from hundreds of thousands of dollars to tens of thousands in a few years, and the performance curve is steepening. Warehouse workers, delivery drivers, agricultural laborers, manufacturing workers: the physical economy is on the same trajectory as the knowledge economy, just a few years behind. The eventual shape of the labor market, if current trends continue, is a thin layer of high-level supervisors directing fleets of AI agents and robots, a floor of minimum-wage workers doing tasks it is simply cheaper to pay a human to do than to automate, and a hollowed-out middle where most people used to earn a living. Even the supervisor layer may be transient: legislation like the EU AI Act will try to keep humans in the oversight loop, but the economic pressure to automate oversight (using AI to verify AI) is relentless, and the “human in the loop” may become a compliance checkbox rather than a genuine exercise of judgment long before the legislation catches up. A society of CEOs and house cleaners isn’t stable, and everyone living in it knows it.
Nor does the displacement respect borders. The offshoring pipeline is the first chapter of the AI displacement story, not a separate one. Companies that learned to move work to lower-cost countries during the remote work era are now moving it from those remote workers to AI agents. The progression runs from in-house to offshore to automated, and each step makes the next one easier. A call center that moved from Ohio to the Philippines in 2019 is now replacing those Filipino workers with AI in 2026. Once work shifts from human-to-human to human-to-agent interaction, physical location stops mattering entirely, and the countries that built service economies around cheaper labor find themselves in the same position as the workers they originally replaced.
The standard reassurance is that technology always creates more jobs than it destroys. The loom, the assembly line, the computer: every previous wave of automation was narrow. It automated one category of task and left the rest for humans to fill. The loom took weaving, but judgment, communication, and creativity remained. The computer took calculation, but persuasion, strategy, and care remained. Displaced workers moved into those untouched categories, which is why the job market always recovered. AI is different because it automates across cognitive categories simultaneously, and robotics is closing the physical escape route at the same time. There is no obvious safe harbor, no category of work that is protected by nature rather than temporarily protected by the current limits of the technology.
This progress cannot be stopped at the national level. Calling for an AI slowdown only works if every country agrees to it and no bad actor defects, which is a fantasy. The advantage of American AI companies over Chinese competitors is real, but not large enough that unilateral restraint would do anything but hand the lead to someone else. Capable open-source models are freely available worldwide. The technology will advance regardless of what any single government decides, which means the only viable response is to channel it rather than resist it.
Three paths lead from here, and the choice among them has nothing to do with the technology, which is the same in all three. The choice is about ownership and institutional design: who captures the economic value AI produces, and what structures exist to distribute it.
The first path is the default, and it requires no decisions at all. AI makes companies enormously productive while needing fewer workers to do it. GDP rises while median incomes stagnate or fall, producing a K-shaped economy where the top of the income distribution accelerates and the bottom drifts. A fair objection: if AI also makes goods and services cheaper, falling wages matter less because living costs fall too. The objection has force for digital services and manufactured goods, where the cost curves are steepest. It has much less force for housing, healthcare, childcare, and education, where costs are driven by land, regulation, and human labor that AI has not yet displaced, and none at all for positional goods (desirable neighborhoods, elite schools, social status) whose value comes from scarcity. The K-shape is felt most acutely in the categories that deflation reaches last. None of this requires bad intentions, just the continuation of current incentives: companies adopt AI because it’s cheaper and faster, pass the savings to shareholders, and the people most affected have the least political power to change the trajectory. The concentration reinforces itself: more AI profit buys more AI capability, which produces more profit, which buys more political influence, which shapes the rules. We are already on this path, and how far down it we go depends entirely on whether anything intervenes.
The second path is managed transition, where societies decide through legislation that AI’s productivity gains belong partly to the public. Concretely, this means taxing AI-generated profits to fund income support and public services, updating labor law for a world where “employment” increasingly means supervising AI systems, and creating public ownership stakes in AI infrastructure so returns flow to citizens rather than exclusively to shareholders.
The policy tools for this path exist: progressive taxation, sovereign wealth funds, universal basic income, public investment. But listing them as though they’re a plan is where most of the conversation stops, and where it shouldn’t. In April 2026, OpenAI published a detailed industrial policy document proposing a Public Wealth Fund, tax base modernization, 32-hour workweek pilots, and adaptive safety nets. The conversation has moved beyond bumper stickers. The question is no longer whether the companies building the technology acknowledge the displacement. They do. The question is whether the institutional engineering they propose will be implemented by governments that have the authority to enforce it, or whether it will remain a document: ambitious, well-intentioned, and structurally incapable of constraining the entity that wrote it. How do you fund universal income when the tax base depends on payroll taxes and the payroll is shrinking? How does one country implement it without its industries relocating to jurisdictions that don’t? How do you sustain it politically when the people funding it (through taxes on AI profits) are the same people with the lobbying power to prevent it? The managed transition is achievable, but only if someone does the institutional engineering that turns a proposal into a functioning system. Political will is the bottleneck, and political will requires a population that understands what is happening and agrees on the need to act, which is harder than it sounds for reasons the next essay in this collection explains.
The third path is the most ambitious, but it deserves to be treated as a goal, not a thought experiment. If AI and robotics automate most labor, cognitive and physical, the result could be radical abundance: goods and services produced at a fraction of current cost, distributed broadly, with human activity redirected toward whatever people find meaningful. The mechanism that makes this conceivable is deflation. If AI drives the cost of essentials (food production, energy, healthcare, education, housing construction) low enough, the amount of income a person needs to live well drops with it. At some point, the gap between what people need and what modest public provision can cover becomes small enough to close. Work stops being necessary for survival, which means its absence stops being a catastrophe. Something like the Athenian model becomes possible: a society where a class of citizens is freed from labor to pursue philosophy, politics, art, and science, except this time the labor is performed by machines rather than enslaved humans, and the citizen class can include everyone.
The productive capacity to make this real is arriving. What stands in the way is institutional, not technological: entrenched elites who benefit from the current distribution, global competition that punishes any country that redistributes too aggressively, and the absence of leadership willing to articulate the vision and build the structures that could make it work. A post-work society doesn’t arrive by accident. It arrives because someone builds the institutions that channel abundance toward people instead of letting it concentrate among the owners of the machines. That requires a clarity of vision and a willingness to fight for it that no major political leader in any country has yet demonstrated.
We built our entire civilization on the link between work and prosperity: our tax systems, our social contracts, our sense of identity, our definition of a good life. AI is severing that link, and the economy will keep growing regardless of what, if anything, we build in its place.