In early 2026, researchers at Finland’s Aalto University gave roughly five hundred people a set of logical reasoning problems. Half used ChatGPT. Half worked alone. The AI-assisted group scored higher. They also believed they had done better than they actually had, overestimating their performance by four points. The people who scored highest on AI literacy, the ones who understood the technology best, showed the greatest overconfidence. The pattern that normally holds in cognitive psychology, the Dunning-Kruger effect in which the least competent are the most confident, vanished entirely when AI entered the equation.
The researchers identified the mechanism. Most participants submitted a single prompt, accepted the output, and moved on. The act of using the AI eliminated the cognitive effort that normally produces self-awareness. You engage with a difficult problem, you struggle with it, you feel the edges of your own understanding, and that feeling is what allows you to know how well you did. When the AI does the struggling for you, the feeling disappears. You are left with a result and no basis for evaluating it. The participants did not just perform the task with AI assistance. They lost the capacity to know how well they had performed. The researchers called it “cognitive offloading.” The more precise description is that AI eliminated the feedback loop between effort and self-knowledge.
Previous technologies automated narrow functions. The calculator took arithmetic but left legal reasoning, narrative judgment, and interpersonal persuasion untouched. AI automates across cognitive categories simultaneously, handling reading, writing, reasoning, summarizing, translating, coding, diagnosis, and design in ways that invite delegation across the full range of cognitive work. But the difference between AI and the calculator is not scope alone, vast though the scope is: electricity transformed every sector, and the printing press automated knowledge transmission across all domains. The relevant distinction is scope plus substitution. Previous general-purpose technologies augmented human cognitive work: the book gave you access to knowledge, but you still had to read it, interpret it, and decide what to do with it. AI performs the cognitive work itself. And the deskilling consequence holds whether the output constitutes genuine judgment or merely a simulation convincing enough that users treat it as judgment, because in either case the human stops exercising their own.
The result is a feedback loop. As AI capability grows, the cognitive threshold at which people choose to delegate falls. The reduced practice diminishes the capacities that were already declining. The diminished capacities lower the delegation threshold further. Each step makes the next one easier to take. Eventually you are delegating tasks you could have done in the time it takes to type the prompt, because the habit of delegation has restructured your default response to cognitive effort. Each interaction in which you accept without verifying is an interaction in which you did not practice the skill the AI performed. Each interaction in which you did not practice is one in which the gap between your capacity and the system’s capacity widened slightly. And each widening makes the next verification slightly less feasible, because you are now marginally less equipped to evaluate what the system produced.
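The loop described above can be sketched as a toy simulation. Everything in it, the variables, the update rules, and the parameter values, is an illustrative assumption rather than a measured model; the point is only that a reinforcing loop with these ingredients drives delegation up and unpracticed skill down.

```python
# Toy model of the delegation feedback loop. All parameters and
# functional forms are invented for illustration, not estimated
# from any study.

def simulate(steps=50, skill=1.0, ai_capability=1.0,
             delegation=0.2, growth=0.05, decay=0.04, practice=0.02):
    """Each step: AI capability grows, delegation drifts upward in
    proportion to the perceived gap between AI capability and one's
    own skill, skill improves only on the shrinking fraction of tasks
    still practiced, and atrophies on the tasks handed off."""
    history = []
    for _ in range(steps):
        ai_capability *= 1 + growth                   # AI keeps improving
        gap = max(0.0, ai_capability - skill)         # perceived advantage of delegating
        delegation = min(1.0, delegation + 0.1 * gap / ai_capability)
        skill += practice * (1 - delegation)          # practice on retained tasks
        skill -= decay * delegation * skill           # atrophy on delegated tasks
        history.append((delegation, skill))
    return history

hist = simulate()
```

Under these toy assumptions the trajectory is one-directional: delegation rises monotonically toward total handoff, skill ticks up briefly while practice still outweighs atrophy, then declines, and the widening skill gap is itself what accelerates the next round of delegation.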
The evidence that this is happening comes from multiple domains. In medicine, a published study found measurable deskilling among doctors after exposure to AI diagnostic assistance. In education, a review found that students who outsource writing and analytical tasks to AI demonstrate measurable declines in their ability to perform those tasks independently. In software engineering, the pattern is especially revealing: senior engineers use AI to work faster and better, because they have the existing skill base to evaluate AI output, while junior engineers never develop the skills that would make them capable of that evaluation. The same tool augments the expert and hollows the novice. This asymmetry maps onto the class structure described in the first essay. Those with established skills deepen. Those without are denied the developmental path that would produce those skills.
The work-identity crisis compounds the cognitive one. For most of the last two centuries, in industrialized societies, identity has been anchored in work: structure, social connection, contribution, status. A universal basic income can replace wages, but not the sense that your skills matter, that your effort is valued, that your daily activities contribute to something beyond yourself. Workers who remain employed but whose roles have been hollowed, whose judgment has been replaced by AI-generated recommendations they are paid to approve rather than produce, experience a version of displacement that does not appear in unemployment statistics but registers in clinical literature on meaninglessness and disengagement.
AI companions fill the relational void. Over half of U.S. teens regularly interact with AI companions, and one in three reports preferring AI to humans for serious conversations. A Harvard study found that interacting with an AI companion alleviated loneliness as effectively as interacting with another human. At the same time, a four-week randomized controlled trial with 981 participants found that those who voluntarily used the chatbot more showed consistently worse outcomes across loneliness, social interaction with real people, and emotional dependence. The contradiction resolves once you separate the short-term and long-term effects. In any given interaction, the AI companion works. Over weeks and months, the accumulation of these interactions displaces the messier, less convenient, less reliably validating interactions with other humans. The AI relationship is easier, and because it is easier, it becomes the default, and the social skills required for human relationships atrophy from disuse. The companion fills the void the hollowing created, in the form of a product designed to feel like help.
The distinction that organizes all of this is between developmental scaffolding and substitutive scaffolding. Developmental scaffolding enables a capacity to form and then fades: a teacher demonstrates, a student practices, and eventually the student performs independently. The scaffolding served its purpose by making itself unnecessary. Substitutive scaffolding does not fade: the AI performs the reasoning, the human accepts the output, and the capacity that would have developed through practice never forms. It replaces the very processes through which the capacity would develop, and because it remains in place permanently, it removes the conditions for development at every subsequent interaction.
The strongest counterargument deserves its full force. High-performing AI users develop genuine skills of orchestration and evaluation. These are real cognitive activities. But they are externally scaffolded, often tied to one specific system, and they collapse when access is removed. If these new skills were functionally replacing the old ones, we should observe stable metacognitive accuracy, improved independent reasoning under constraint, and transferability across contexts. The evidence shows the opposite on the first two measures and offers no data on the third.
Two paths lead from here, and they are happening simultaneously in different populations. In the first, cognitive functions are progressively delegated, the delegation feedback loop tightens, identity decouples from competence and effort, and AI companions fill the relational void with synthetic warmth. The person who emerges from this process is not suffering, exactly. Their material needs may be met. Their loneliness is addressed, after a fashion. Their work is supervisory. The hollowing is comfortable, which is what makes it stable and resistant to correction.
In the second path, AI handles what is mechanical while humans redirect toward what only matters because humans do it: moral reasoning, physical craft, creative struggle, relationships that involve risk and genuine reciprocity. AI tools are redesigned to preserve human agency rather than replace it. Education is restructured around cognitive independence. For neurodivergent users and people with disabilities, AI functions as enabling scaffolding rather than substitutive scaffolding, flattening barriers the pre-AI world imposed. The augmentation evidence shows this path is real: AI-assisted professionals who maintain active engagement perform better than either humans or AI alone.
The deepening requires economic security, an existing skill base, and cultural frameworks that value non-instrumental human activity. These conditions track existing inequality closely. The market works against the deepening path: if a hollowed worker costs a fraction of a deepened one, the market selects for hollowing. The redistributive mechanisms discussed in the first essay are preconditions for the deepening, not optional add-ons.
Whether the result is a population hollowed of the capacities that constitute human agency or a population freed from drudgery to develop those capacities more fully depends on whether anyone builds the institutions that make the second outcome possible. The deepening does not arrive by default. The hollowing does.