Access to frontier-equivalent AI is democratizing fast. Open-weight models from DeepSeek, Meta’s Llama, and others now match proprietary systems at roughly 90% lower cost. DeepSeek claimed its V3 base model cost $5.6 million to train, and distillation can compress frontier reasoning into models small enough to run on a laptop. The cost of achieving comparable benchmark performance dropped from $4,500 to $11.64 per task over 2025 alone.

But access is not control. Anyone can use frontier-equivalent intelligence; the question is who decides what that intelligence becomes. The cost of training frontier models has grown at 2.4x per year since 2016, and Epoch AI projects the largest runs will exceed a billion dollars by 2027. Every distilled model, every fine-tune, every downstream application is a derivative of someone else’s original training. In 2026, five hyperscalers (Amazon, Alphabet, Microsoft, Meta, and Oracle) plan a combined $700 billion in capital expenditures, roughly 75% of it directly tied to AI infrastructure. That figure represents 2.1% of U.S. GDP flowing through five boardrooms. The entities that define the capabilities ceiling number perhaps a dozen: OpenAI, Anthropic, Google DeepMind, xAI, Meta, and a handful of Chinese labs including DeepSeek and Alibaba’s Qwen team. They set the frontier. Everyone else builds on what they release.
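A 2.4x annual rate compounds quickly, which is why the billion-dollar projection is unsurprising. A minimal sketch of the arithmetic, assuming a hypothetical $100 million frontier run in 2024 as the baseline (the growth rate is from the text via Epoch AI; the baseline figure is an illustrative assumption, not a number from this essay):

```python
# Compound growth of frontier training costs.
# GROWTH is the 2.4x/year rate cited from Epoch AI; the 2024
# baseline of ~$100M per frontier run is a made-up illustration.
GROWTH = 2.4            # cost multiplier per year
baseline_2024 = 100e6   # assumed 2024 frontier run cost (USD)

for year in range(2024, 2028):
    cost = baseline_2024 * GROWTH ** (year - 2024)
    print(f"{year}: ${cost / 1e9:.2f}B")
# final line: 2027: $1.38B, past the billion-dollar mark
```

Under this assumed baseline, three doublings-and-change put the largest 2027 runs at roughly $1.4 billion, consistent with the projection above.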
The self-reinforcing loop is the core danger: wealth buys compute, compute generates intelligence, intelligence generates wealth. The gap does not close over time. It widens, because intelligence advantages compound in ways that material advantages do not. A company with the best AI does not merely produce more goods; it outthinks competitors in strategy, R&D, market positioning, and political influence simultaneously. The wealth data already reflects this dynamic. The top 0.1% of U.S. households hold 14.4% of national net worth as of Q3 2025, a share that has grown 59.6% since 1989. The bottom 50% holds 2.5%. The combined wealth of America’s top ten tech leaders reached $2.5 trillion by end of 2025, a figure rivaling the GDP of France. Seven of the ten richest people on Earth are technology executives.
The concentration of economic power translates directly into political power. The technology industry spent over $250 million on federal lobbying in 2024, deploying nearly 500 lobbyists, close to one for every member of Congress. In the first nine months of 2025, seven major tech companies spent a combined $50 million on lobbying, a stretch that included the highest quarterly total ever recorded. That spending pursued concrete outcomes: companies pushed a provision in the 2025 spending bill that would have stripped states of the power to regulate AI for ten years. AI policy is now largely dictated from the White House down, favoring the companies doing the dictating.
The most recent phase of concentration moves beyond infrastructure into the application layer itself. For years, software companies sold tools that humans used. Then AI startups built “GPT wrappers” that added intelligence to those tools. Now the frontier labs are absorbing both layers. In early February 2026, Anthropic launched Claude Cowork with autonomous plugins targeting legal, financial, and engineering workflows; OpenAI launched Frontier, an enterprise platform positioning itself as the “operating system” of business.

The market understood the implications immediately: $285 billion in SaaS market value evaporated in a single day. Salesforce and Workday are each down over 40% in twelve months. The dynamic is called “seat compression”: if ten AI agents can do the work of a hundred employees, you need ten software licenses instead of a hundred. The frontier labs are not just building the models; they are becoming the platform through which all business workflows execute. Gartner projects 40% of enterprise applications will embed AI agents by the end of 2026, up from less than 5% in 2024.

Consider what this means in practice. When a company’s agents run on OpenAI or Anthropic infrastructure, those agents are trained by the lab, updated by the lab, and governed by the lab’s policies. The company’s “workforce” is loyal to its provider in a way that human employees never were to a staffing agency. The provider sees every workflow, understands the business logic, and could replicate it. Right now, as labs compete for enterprise adoption, the bargaining power favors the buyer. That window will close. Once agents are embedded across an organization’s operations, the switching costs become enormous: retraining workflows, rebuilding integrations, relearning institutional context that lives inside the provider’s system. Enterprise software has seen this pattern before.
CIOs are already being warned that LLM pricing increases are coming and that vendors without their own models will pass those costs through. Constellation Research tracks emerging fights over “data tolls” and connector fees as the new leverage point. The step from “platform that runs your agents” to “competitor that deploys its own agents to do what your company does” is terrifyingly short. The logical endpoint is an economy where a handful of AI providers serve as a universal workforce layer, and every company that relies on them operates at their discretion.
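The seat-compression arithmetic is simple but worth making explicit. A toy sketch using the essay’s hundred-to-ten example (the per-seat price is an illustrative assumption, not a figure from the text):

```python
# "Seat compression": SaaS revenue scales with licensed seats,
# and seats track headcount. Agents doing the work of many
# employees shrink the seat base proportionally.
# The 100-to-10 ratio is from the text; the per-seat price
# is a hypothetical illustration.
employees = 100
agents = 10                  # agents doing the same work
price_per_seat = 1_000       # assumed annual license price (USD)

revenue_before = employees * price_per_seat  # 100 seats
revenue_after = agents * price_per_seat      # 10 seats
compression = 1 - revenue_after / revenue_before
print(f"seat revenue falls {compression:.0%}")  # prints "seat revenue falls 90%"
```

A 90% contraction in the addressable seat base is the mechanism behind the SaaS repricing described above; the vendor’s unit economics collapse even if the underlying work still gets done.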
The analogy to hereditary aristocracy is structural. When economic power translates directly into the ability to shape the rules governing that power, concentration becomes self-perpetuating. Averting a “machine aristocracy” requires treating frontier AI compute as public infrastructure too consequential to be purely private, mandating transparency in lobbying and regulatory influence by AI developers, and establishing public ownership stakes in systems whose value derives from collectively generated data.
Key tension: AI is simultaneously the most democratizing tool ever invented (universal access to knowledge) and the most concentrating (ownership of the means of intelligence).