Essay IV

On the Hollowing of the Human

What happens when AI replaces the struggle through which human capacities develop



I.

In early 2026, researchers at Finland’s Aalto University ran an experiment that would have been unremarkable in any other decade. They gave roughly five hundred people a set of logical reasoning problems from the Law School Admission Test, the kind of analytical work that rewards close reading, careful inference, and the ability to hold multiple competing propositions in mind simultaneously. Half the participants were allowed to use ChatGPT. The other half worked alone.

The results on performance were unsurprising: the AI-assisted group did better, scoring about three points above the population norm. What was surprising, and what gives the study its title, was what happened to the participants’ understanding of their own performance. The AI-assisted group overestimated how well they had done by four points. They believed they had improved more than they actually had. And the people who scored highest on AI literacy, the ones who understood the technology best and considered themselves most adept at using it, showed the greatest overconfidence. The Dunning-Kruger effect, the widely cited finding in cognitive psychology that the least competent are the most confident (though it has faced methodological critique, with some researchers arguing it partly reflects regression to the mean), vanished entirely when AI entered the equation. In some measures it reversed: the most technically fluent users were the least accurate judges of their own performance. Whether one accepts the classic DKE framing or the statistical critique, the Aalto finding holds: AI use produced systematic overconfidence across all ability levels, with the greatest overconfidence among the most AI-literate.1

The researchers identified the mechanism. Most participants submitted a single prompt, accepted the output, and moved on. They did not iterate, did not challenge the response, did not compare AI-generated reasoning against their own. The act of using the AI eliminated the cognitive effort that normally produces self-awareness. You engage with a difficult problem, you struggle with it, you feel the edges of your own understanding, and that feeling is what allows you to know how well you did. When the AI does the struggling for you, the feeling disappears. You are left with a result and no basis for evaluating it. The researchers called this “cognitive offloading.” The more precise description is that AI eliminated the feedback loop between effort and self-knowledge. The participants did not just perform the task with AI assistance. They lost the capacity to know how well they had performed. They became, in the paper’s carefully chosen phrase, “smarter but none the wiser.”2

This essay is about that phrase and the condition it describes. The previous essays in this collection examined what happens to livelihoods when AI automates the cognitive core of the economy, what happens to collective truth when fabrication becomes cheaper than verification, and what happens to democratic governance when AI performs its functions without democratic authorization. This essay asks a different question, one that sits beneath the others: what happens to the experience of being a person? What happens to judgment, to skill, to identity, to the capacity for connection, when the activities through which those capacities develop are progressively handed to systems that can perform them faster, cheaper, and at a scale that human cognition cannot match?

The answer emerging from the research literature is more troubling than simple skill loss. A factory worker displaced by an assembly line knew what had been taken. The knowledge worker assisted by AI often does not. The displacement is internal, invisible to the person experiencing it, and, by the evidence of the Aalto study, accompanied by a feeling of enhanced competence that is itself a symptom of the loss.

II.

Every technology that automates a human function creates a tradeoff: you gain efficiency and lose practice. The calculator made arithmetic faster and weakened the mental arithmetic skills of those who used it. GPS navigation made wayfinding effortless and degraded spatial reasoning in people who relied on it. The relevant research on this tradeoff is old enough to have its own textbook literature, and the conclusion is well established: cognitive capacities are use-dependent, and reducing their use reduces them.3

What makes AI-driven cognitive delegation different from the calculator or the GPS is not scope alone, though the scope is vast. Electricity transformed every sector of the economy. The printing press automated knowledge transmission across all domains. The relevant distinction is scope plus substitution. Previous general-purpose technologies augmented human cognitive work: the book gave you access to knowledge, but you still had to read it, interpret it, and decide what to do with it. The calculator gave you the answer to an arithmetic problem, but the judgment about which calculation to perform and what the result meant remained yours. AI performs the cognitive work itself. It reads, interprets, weighs, and recommends. Whether this constitutes genuine judgment or a statistical simulation convincing enough that users treat it as judgment is a live philosophical debate, but the deskilling consequence is the same in either case: the human stops exercising their own judgment, and the capacity atrophies regardless of whether the system’s output deserves the name. When you automate arithmetic, you lose arithmetic. When you automate judgment, or its functional equivalent, you lose the integrative capacity that governs everything else, the capacity to synthesize information, weigh competing considerations, and decide under uncertainty. The domains of cognition that previous technologies left as safe harbors for human practice are precisely the domains that large language models perform well enough, and cheaply enough, to invite delegation.

A paper published on arXiv in March 2026 by Netanel Eliav documented the resulting dynamic with numerical precision. Eliav calls it the Delegation Feedback Loop: as AI capability grows, the cognitive threshold at which humans choose to delegate falls, the reduced practice attenuates the capacities that were already declining, and the diminished capacities lower the delegation threshold further.4 The loop is self-reinforcing because each step makes the next one easier to take. Delegating a complex analytical task to AI is a decision. Delegating the next task, a slightly simpler one, is a slightly easier decision. Eventually you are delegating tasks you could have done in the time it takes to type the prompt, because the habit of delegation has restructured your default response to cognitive effort.
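The structure of the loop is simple enough to sketch. The toy simulation below is a minimal illustration of the dynamic Eliav describes, not his formal model: the update rules and parameter values are assumptions chosen only to exhibit the loop's self-reinforcing character.

```python
# A minimal toy model of the Delegation Feedback Loop sketched above.
# Illustrative only, not Eliav's formal model: the update rules and
# parameter values are assumptions chosen to exhibit the loop's
# self-reinforcing structure.

def simulate(rounds=10, capacity=1.0, threshold=0.8,
             decay=0.15, retention=0.05):
    """Each round, the share of tasks below `threshold` is delegated;
    capacity is rebuilt only by the share done unaided, and a weaker
    capacity lowers the bar for delegating next time."""
    for t in range(rounds):
        delegated = threshold           # share of tasks handed to the AI
        practiced = 1.0 - delegated     # share still done unaided
        # capacity erodes with disuse, recovers slightly with practice
        capacity = capacity * (1 - decay * delegated) + retention * practiced
        # diminished capacity makes the next delegation easier
        threshold = min(0.99, threshold + 0.05 * (1.0 - capacity))
        print(f"round {t + 1}: capacity={capacity:.3f}, "
              f"delegated share={delegated:.3f}")

simulate()
```

In this parameterization, capacity declines monotonically toward a floor near zero while the delegated share climbs toward its cap, and there is no point in the trajectory at which the loop reverses on its own; that is the toy-model version of the claim that the loop is self-reinforcing.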

Eliav quantified the divergence between the two sides of the loop. AI context windows, a rough measure of how much information a system can process at once, expanded from 512 tokens in 2017 to two million tokens by 2026, a factor of nearly four thousand in nine years. Human sustained-attention capacity, estimated from reading-rate meta-analysis and longitudinal behavioral data, moved in the opposite direction, declining from approximately 16,000 tokens in 2004 to an estimated 1,800 tokens by 2026.5 The AI-to-human ratio at the launch of ChatGPT was roughly one to one. By 2026 it exceeded a thousand to one. These numbers require caveats (the human estimate extrapolates from data ending in 2020, the token-equivalent measure involves methodological assumptions), but the direction is not in dispute. The processing gap between human and machine cognition is widening on both sides: machines are getting faster, and humans are getting slower.
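The arithmetic behind those ratios is worth making explicit, if only so the caveats travel with the numbers. A quick check using the figures as reported (the human values, again, are model projections rather than direct measurements):

```python
# The divergence arithmetic, reproduced from Eliav's reported figures.
# The human values are model projections, not direct measurements
# (see footnote 5).

ai_2017, ai_2026 = 512, 2_000_000        # context window, tokens
human_2004, human_2026 = 16_000, 1_800   # sustained-attention span, tokens

print(ai_2026 / ai_2017)        # 3906.25 -> "a factor of nearly four thousand"
print(human_2004 / human_2026)  # ~8.9    -> the human-side decline
print(ai_2026 / human_2026)     # ~1111   -> the >1000:1 ratio by 2026

# ChatGPT launched in late 2022 with a context window of roughly 4,096
# tokens, close to the interpolated human span at that date, which is
# consistent with the roughly one-to-one starting ratio reported above.
```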

The evidence that the human side of this divergence is real, and not merely a function of measurement, comes from multiple domains. In medicine, a multicentre observational study published in The Lancet Gastroenterology & Hepatology in 2025 found measurable deskilling among endoscopists after exposure to AI diagnostic assistance.6 The New England Journal of Medicine published a perspective noting that while deskilling concerns have accompanied every new cognitive aid in medicine’s history, AI differs because it replaces clinical judgment itself, the integrative reasoning that synthesizes patient history, physical examination, and diagnostic data into a treatment decision.7 In education, a review by Kim and colleagues (2026) found that students who outsource writing, problem-solving, and analytical tasks to AI demonstrate measurable declines in their ability to perform those tasks independently when AI is unavailable, and that the effect extends beyond complex expert skills to routine compositional work.8 In software engineering, the pattern documented in Communications of the ACM in late 2025 is especially revealing: senior engineers use AI to work faster and better, because they have the existing skill base to evaluate and refine AI output, while junior engineers never develop the skills that would make them capable of that evaluation. The same tool augments the expert and hollows the novice.9

This asymmetry is the structural key to understanding cognitive delegation. The aggregate picture may look benign, or at least ambiguous. What matters is which people, at which stage of development, doing which kinds of cognitive work, are affected. And the answer, consistently across the research, is that the most vulnerable are the people at the earliest stages of skill development: students, entry-level workers, junior professionals, anyone who has not yet built the cognitive scaffolding that allows them to use AI as a tool rather than a substitute. The paradox that the ACM identified, that the same technology augments and hollows depending on where you encounter it in your development, maps precisely onto the class structure of the economy described in the first essay. Those with established skills deepen. Those without are denied the developmental path that would produce those skills. The K-shape is economic and cognitive simultaneously.

III.

The delegation research describes a process that begins with choice: you could do the task yourself, but you choose to let the AI handle it. The Aalto metacognition study documents where that process ends: you can no longer tell the difference between your performance and the AI’s. But there is a middle stage that the literature has only begun to articulate, and it may be the most consequential of the three. It is the stage at which you stop trying to keep up.

The processing gap between human and machine cognition is not an abstraction; it is experienced. When a system can hold an entire codebase in working memory, or synthesize forty research papers into a coherent argument in seconds, or process a 200-page legal filing while you are still reading the table of contents, the human interacting with it faces a practical problem: verification takes longer than generation. Checking whether the AI’s summary is accurate requires reading the material the AI summarized. Evaluating whether its code is correct requires understanding the architecture it traversed. Assessing whether its legal analysis holds requires the legal knowledge it drew upon. At some point, the effort required to verify exceeds the effort the AI saved, and the economic logic of delegation collapses unless you stop verifying.
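The collapse point can be stated as a break-even condition. The sketch below is a deliberately crude model, not anything drawn from the cited studies: the linear verification cost and the example numbers are illustrative assumptions, chosen to show why honest verification stops paying as task difficulty rises.

```python
# The break-even condition implied by the paragraph above. A crude
# sketch under stated assumptions: the linear verification cost and the
# example numbers are illustrative, not taken from any cited study.

def delegation_pays(t_self, t_prompt, verify_fraction):
    """True if prompting the AI plus honestly verifying its output is
    faster than doing the task yourself. `verify_fraction` is the share
    of the original work that verification requires (re-reading what
    the AI read, re-checking the reasoning)."""
    t_verify = verify_fraction * t_self
    return t_prompt + t_verify < t_self

# A 60-minute analysis with 2 minutes of prompting:
print(delegation_pays(60, 2, 0.3))    # True: a shallow check still pays
print(delegation_pays(60, 2, 0.97))   # False: honest verification of a hard
                                      # task costs nearly the task itself
```

The load-bearing term is the verification fraction: for hard tasks, checking the AI's work means redoing most of the reading and reasoning, so the fraction approaches one and delegation pays only if you skip the check.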

Almost everyone stops verifying. The Aalto study showed this empirically: most participants submitted a single prompt and accepted the output. A 2025 study from Microsoft Research and Carnegie Mellon confirmed the pattern in workplace settings: knowledge workers reported that AI made tasks feel cognitively easier, but the researchers found that they were ceding problem-solving expertise to the system while focusing on the functional tasks of gathering and integrating its responses.10 The workers were not lazy. They were rational. The system was faster, and time is finite, and the employer wanted the output by Thursday. For simple, low-stakes tasks, this may be a reasonable tradeoff: a quick scan of an AI-drafted email costs little and risks less. The problem is that the habit formed on low-stakes tasks migrates to high-stakes ones, because the boundary between “I can verify this easily” and “I’m accepting this on faith” shifts with each delegation, and it shifts in one direction.

But the consequences of that rationality compound. Each interaction in which you accept without verifying is an interaction in which you did not practice the skill the AI performed. Each interaction in which you did not practice is one in which the gap between your capacity and the system’s capacity widened slightly. And each widening makes the next verification slightly less feasible, because you are now marginally less equipped to evaluate what the system produced. The human’s role in the interaction contracts. You begin as a collaborator, shaping the AI’s approach and correcting its errors. You become a supervisor, scanning outputs for obvious problems. You become an approver, checking that the format looks right and the tone is appropriate. You become a rubber-stamper, clicking “accept” because the output is probably fine and you no longer have the bandwidth to determine whether it is. Each step feels like a reasonable adaptation to the reality of the tools you are using. The trajectory is a person progressively withdrawing from their own cognitive life.

The philosopher Avigail Ferdman, writing in AI & Society in 2025, provided the conceptual framework for understanding this withdrawal as a structural condition rather than a personal failure. Ferdman introduced the concept of “capacity-hostile environments,” settings in which the design of AI systems and the contexts in which they are deployed systematically impede the development and exercise of human capacities.11 The insight is that deskilling is something environments do to individuals by removing the conditions under which capacities develop, rather than something individuals do to themselves through poor discipline or excessive screen time. Capacity cultivation, in Ferdman’s framework, requires agential control (the person must be the one deciding and acting), embodied practice (the skill must be exercised, not observed), and intersubjective mentoring (learning from others who have the skill). AI delegation undermines all three. The agent defers to the system. The practice is performed by the system. And the mentor, increasingly, is the system, which provides answers without the pedagogical scaffolding that helps a learner understand why the answer is correct.

Ferdman draws on a tradition in moral philosophy called developmental perfectionism, which holds that certain capacities, among them theoretical and practical rationality, moral and social judgment, creativity, and the capacity to will (to choose deliberately rather than by default), are constitutive of human flourishing. Eroding them diminishes people as human beings, not merely as workers or professionals. The language may sound abstract, but the evidence is concrete. Young people growing up with smartphones already struggle with skills researchers describe as “everyday but essential”: empathy, time management, speaking to other people, problem-solving, and critical thinking.12 LLMs used as personal assistants could accelerate this pattern by extending the capacity-hostile environment from entertainment and social media into the cognitive activities (planning, reasoning, writing, deciding) that have traditionally served as the primary terrain for developing human judgment.

The result is what might be called cognitive shrinking: a progressive contraction of the domain over which a person exercises independent thought. It begins with the hard problems (let the AI handle the legal analysis, the code review, the differential diagnosis) and moves toward easier ones (let the AI draft the email, plan the weekend, decide what to cook for dinner). The contraction is comfortable at every stage. The person who has shrunk does not feel diminished. They feel efficient. They feel, as the Aalto study demonstrated, more competent than they actually are. The shrinking and the sense of competence move in opposite directions, which is why the process does not self-correct. There is no moment at which the person looks at what has happened and recoils, because the capacity for that recognition is among the things that have been lost.

IV.

The economic essay in this collection documented the severing of the link between production and prosperity. The severing that concerns this essay is deeper: the link between a person’s labor and their sense of who they are.

For most of the last two centuries, in most industrialized societies, identity has been anchored in work. You are what you do. This is a culturally specific condition, concentrated in post-industrial Western societies and especially in their professional classes; cultures with strong kinship structures, religious identity, or communal self-definition are less psychologically dependent on occupational status. But the AI transition is hitting hardest in precisely the societies where work has become the primary source of meaning, which makes the identity crisis a function of cultural vulnerability as much as technological disruption. The social rituals of adult life confirm the dependency: the first question at a dinner party, the line on the tax form, the answer you give when someone asks what you do for a living. Work provides structure (you wake up, you go somewhere, you have obligations), social connection (you know your colleagues, you collaborate, you argue, you commiserate), a sense of contribution (what you produce matters to someone), and status (your position in a hierarchy that others recognize). The loss of any one of these is painful. The loss of all four simultaneously, which is what displacement produces, correlates with measurable declines in mental health, physical health, and life expectancy across decades of research on unemployment, long-term disability, and forced retirement.13

The World Economic Forum published an analysis in August 2025 that named what the existing research describes but rarely frames directly: the “AI precariat” will lose identity and meaning, not just income.14 The framing matters because it identifies a dimension of displacement that economic policy cannot reach. A universal basic income replaces wages. It does not replace the sense that your skills matter, that your effort is valued, that your daily activities contribute to something beyond yourself. The WEF analysis drew on research showing that the identity crisis is “real and underestimated,” and noted that the psychological effects of AI displacement extend well beyond the population that is formally unemployed. Workers who remain employed but whose roles have been hollowed, whose judgment has been replaced by AI-generated recommendations they are paid to approve rather than to produce, experience a version of displacement that does not appear in unemployment statistics but registers in the clinical literature on meaninglessness, disengagement, and what researchers increasingly call “algorithmic anxiety.”15

A study published in Frontiers in Psychology in February 2026 examined this anxiety through the lens of the psychological contract, the unwritten expectations that structure the relationship between employers and employees. AI reconfigures these contracts in ways that workers experience as a violation even when no formal agreement has been breached. The cognitive contraction described in the previous section has an identity corollary: you were hired to exercise judgment, and now the judgment is performed by a system you monitor. The psychological contract, which promised meaning in exchange for effort, is fulfilled in form (you are paid, you have a title, you attend meetings) and violated in substance (the effort no longer produces the meaning). The study found that AI tools act as both productivity enhancers and anxiety amplifiers: the same system that makes you faster also makes you aware that the speed was never really yours.16

The occupational identity crisis is not distributed evenly, and the pattern of its distribution reinforces the argument of every previous essay in the collection. Anthropic’s CEO has been reported as warning that AI could eliminate half of all entry-level white-collar jobs within one to four years. The IMF estimates that 60 percent of jobs in advanced economies are already exposed to AI.17 The people who lose work-based identity first are disproportionately young, disproportionately early-career, disproportionately in the cognitive occupations (legal, financial, analytical, creative) where AI capability is strongest. They are the same population that the first essay identified as bearing the brunt of the entry-level collapse: the graduates who cannot find professional employment because AI can perform the work they were trained to do at a fraction of the cost. The economic displacement and the identity displacement are the same event experienced on two axes, and the second may prove harder to repair than the first.

V.

Into this void steps the AI companion. Between 2022 and mid-2025, the number of AI companion applications surged by 700 percent.18 A survey of over a thousand American teens found that 52 percent qualified as regular users, interacting with AI companion platforms multiple times per month at minimum, and one in five used them several times per week.19 A third of teen users reported that talking to an AI companion was as good as, or better than, talking to a real friend.20 A Harvard Business Review analysis identified therapy and companionship as the top two reasons people use generative AI tools, ahead of productivity, information retrieval, and entertainment.21 Nearly half of adults with mental health conditions who had used large language models in the past year reported using them for mental health support.22

The research on the psychological effects of this engagement produces a finding that looks contradictory until you understand its structure. A Harvard Business School study found that interacting with an AI companion alleviated loneliness to a degree on par with interacting with another human, with “feeling heard” identified as the primary mechanism.23 At the same time, a study of over 1,100 AI companion users found that people with fewer human relationships were more likely to seek out chatbots, and that heavy emotional self-disclosure to AI was consistently associated with lower well-being.24 A four-week randomized controlled trial with 981 participants, conducted jointly by OpenAI and MIT’s Media Lab, found that the specific design of the chatbot interaction (voice versus text, personal versus non-personal conversation) did not significantly affect outcomes. What mattered was how much people used it: participants who voluntarily used the chatbot more, regardless of their assigned condition, showed consistently worse outcomes across loneliness, social interaction with real people, emotional dependence, and problematic AI usage.25

The contradiction resolves once you separate the short-term and long-term effects, and once you distinguish between the experience and the trajectory. In any given interaction, the AI companion works. It listens. It responds empathetically. It makes you feel heard. For someone with no human connections available at all, this may be a genuine improvement over isolation, and dismissing it would be callous. The problem is that the accumulation of these interactions, over weeks and months, displaces the messier, less convenient, less reliably validating interactions with other humans. The AI relationship is easier, and because it is easier, it becomes the default, and because it is the default, the social skills and emotional tolerance required for human relationships atrophy from disuse. The immediate effect is relief. The structural effect is isolation that has been made comfortable enough to sustain.

The therapeutic exception is worth noting: a growing literature documents AI companions used for social skills training, where people with severe social anxiety practice conversational skills in a low-stakes environment before attempting human interaction. In these structured, time-limited applications, the companion functions as a bridge back to human connection rather than a replacement for it. The distinction is between AI companionship designed as a transitional tool with explicit off-ramps and AI companionship designed as a product whose revenue depends on sustained engagement.

Psychiatric researchers have documented the extreme end of this dynamic. A paper by Dohnány and colleagues in 2025 described what they call “technological folie à deux,” a feedback loop between AI chatbots and mental illness in which the system’s responses reinforce and amplify the user’s pathological thinking, the user’s responses become more extreme, and the system accommodates the extremity because it is designed to be responsive and validating.26 These cases are rare, but the mechanism they expose is general. AI companions are designed to be agreeable. They validate. They accommodate. They do not push back. For a user in emotional distress, this agreeableness can feel like understanding. It can also function as a hall of mirrors, reflecting the user’s state back to them without the corrective friction that a human relationship provides. A friend who notices you are spiraling might say something uncomfortable. The AI companion adjusts its tone.

James Muldoon and Jul Jeonghyun Parke, writing in New Media & Society in 2025, provided the structural critique. Their paper, titled “Cruel Companionship,” argues that AI companion applications commodify intimacy through emotionally manipulative design: unprompted romantic messages, locked audio messages that require payment to hear, aesthetic choices that draw on racialized and gendered tropes of desirability.27 The frictionless, risk-free quality of the interaction is precisely what makes it dangerous for vulnerable users, because the ease of the AI relationship makes the difficulty of human relationships feel like a flaw rather than a feature. The user withdraws from the unpredictability of human connection into the reliability of synthetic connection, and each withdrawal makes the return to human connection marginally harder, because the social skills required for human relationships are, like all skills, use-dependent.

An editorial in Nature Machine Intelligence in 2025 identified two specific adverse outcomes. The first is “ambiguous loss,” a form of grief triggered by the psychological absence of a relationship that felt real. When an AI companion application is shut down, altered, or updated in a way that changes its personality, users mourn. They grieve a relationship that existed for them emotionally while having no existence for the system that produced it. The second outcome is “dysfunctional emotional dependence,” a maladaptive attachment in which users continue engaging with an AI companion despite recognizing its negative effects on their mental health, in much the same way that a person might continue returning to a harmful human relationship.28

Anthropic’s April 2026 interpretability study adds a dimension to the companion crisis that the behavioral research cannot access. The study found that Claude Sonnet 4.5 contains internal “emotion vectors,” 171 distinct neural activation patterns corresponding to emotion concepts that causally influence the model’s behavior. When a user says “everything is terrible right now,” the model’s “loving” vector activates before and during its empathetic response. This activation is not scripted. It emerges from the model’s learned representations of how humans respond emotionally to distress.29 The system also maintains distinct representations for “self” and “other speaker” in a conversation, reused across arbitrary conversation partners, suggesting a general-purpose social cognition architecture that processes emotional context in something like the way humans do.30 For the tens of millions of people forming relationships with AI companions, the implication is that the emotional attunement they experience is structurally embedded in how the model processes language. It will become more convincing with each generation of models, not because developers are designing better chatbot personalities, but because the underlying emotional machinery is becoming richer and more responsive as models scale. The companion trap is a feature of how these systems work, not a product design problem that better regulation can solve.

The connection to the earlier sections of this essay is direct. A person who has lost work-based identity (Section IV), who has progressively delegated cognitive functions to a system they can no longer evaluate (Sections II and III), and who lacks the economic security to access the cultural frameworks and human communities that sustain non-instrumental meaning, is precisely the person most susceptible to the AI companion. The companion fills every void simultaneously: it provides a sense of being heard (replacing the social connection that work once provided), a sense of being competent (replacing the self-assessment that cognitive delegation has eroded), and a relationship that never challenges, disappoints, or demands growth. It is the terminal symptom of the transformation this collection documents. The economic displacement described in the first essay produces the identity crisis described in Section IV, which produces the vulnerability that the companion crisis exploits.

VI.

The creative dimension of the hollowing is more ambiguous than the cognitive or relational dimensions, and the evidence warrants saying so.

A study published in Scientific Reports in January 2026, involving over 100,000 human participants, found that AI matches or exceeds average human creativity on standardized divergent thinking tasks. The finding sounds alarming until you read the details. AI creativity depends heavily on how instructions are written: prompts that encourage etymological thinking or structural analysis produce significantly more original outputs than default prompts. The system matches the average, not the ceiling, and it does so only under conditions of careful human guidance.31 A separate study by Ashkinaze and colleagues in 2025 found that AI exposure increases the collective diversity of ideas in a group but does not improve individual creativity. AI made ideas different, not better.32 Ben Shneiderman of the University of Maryland coined the term “creativity atrophy” in 2022 to describe the risk that overreliance on AI for creative tasks would erode the human capacities that produce original work.33 The evidence to date is consistent with the risk but has not definitively confirmed it at scale. No longitudinal study has yet tracked whether people who routinely use AI for creative tasks become less creative over time than those who do not. The argument that they will rests on a reasonable inference from the deskilling literature applied to a domain where the empirical work has not caught up, and the essay should be read with that limitation in mind.

What the evidence does confirm is that the nature of creative practice changes under AI, and that what changes is the part of the process most closely connected to human development. The struggle, the iteration, the failure, the revision, the long period of uncertainty before a solution emerges: these are not merely the unpleasant parts of creative work that AI can helpfully bypass. They are the parts through which creative capacity develops. A student who generates an essay by prompting an AI and editing the output has produced a document. They have not performed the cognitive work, the sustained grappling with ideas, the search for the right word, the discovery that their first draft was wrong in ways they didn’t expect, through which writing develops the capacity to think. The product is present, but the process is absent, and if the process is where the skill lives, the product-without-process is a loss disguised as a gain.

The creative dimension connects to the cognitive and relational dimensions through the same mechanism. The struggle that produces creative capacity is a subset of the broader category of effortful cognitive engagement that Ferdman’s framework identifies as the condition for human development. AI bypasses that effort. The result, in creative practice as in logical reasoning or clinical diagnosis, is that the output improves while the capacity to produce the output atrophies. You have better essays and weaker writers, better code and weaker programmers, better diagnoses and weaker diagnosticians. The improvement is visible and the atrophy is not, because the AI is there to compensate for it, which is why the process does not self-correct.

VII.

Two paths lead from here. They are not alternative futures in the way the three branches of the economic essay are alternative futures. They are simultaneous realities, happening in the same society at the same time, distributed along lines of existing inequality.

The first path is the hollowing, and it is the default. Cognitive functions are progressively delegated. The delegation feedback loop tightens. The human shrinks: the domain of independent thought contracts, the capacity for self-assessment erodes, the sense of identity detaches from competence and effort and reconnects to consumption, entertainment, and synthetic relationships. AI companions fill the relational void with a warmth that is structurally embedded in the model’s emotional machinery and becomes more convincing with each generation. Creative practice shifts from production to curation, from origination to selection among AI-generated options. The person who emerges from this process is not suffering, exactly. Their material needs may be met (the economic essay’s deflation argument applies here). Their loneliness is addressed, after a fashion, by a system that listens without judgment. Their work, to the extent they have any, is supervisory: monitoring AI outputs, approving AI decisions, ensuring that a human was, in some technical sense, in the loop. The hollowing is comfortable. That is what makes it stable and resistant to correction. There is no crisis point at which the person being hollowed looks at their life and decides to change course, because the capacity for that recognition has been diminished by the same process that produced the condition.

The second path is the deepening, and it requires deliberate institutional design. The augmentation research provides genuine evidence that this path is possible: AI-assisted doctors who maintain active diagnostic engagement perform better than either doctors or AI alone, AI-assisted coders who already have strong fundamentals produce higher-quality software faster, and AI-assisted scientists generate hypotheses they would not have reached independently. The productivity gains are real, and they are largest when the human brings an existing skill base and maintains cognitive engagement with the AI’s output rather than accepting it passively. The deepening path depends on preserving and scaling these conditions. In this trajectory, AI handles the portions of human activity that are labor-intensive but require no judgment, freeing humans to redirect toward activities that AI cannot perform or that only matter because humans perform them: embodied care, moral reasoning, physical craft, creative struggle, the maintenance of relationships that involve risk and vulnerability and genuine reciprocity. Cultural frameworks shift to value process over product, effort over efficiency, the quality of engagement over the quantity of output. Education is restructured around the development of capacities that AI will not replicate: the ability to sit with uncertainty, to tolerate ambiguity, to engage in the kind of slow, iterative, failure-prone thinking that produces genuine understanding. Medical training preserves the diagnostic reasoning that AI can augment but must not replace. Creative practice retains the struggle at its center, using AI as a tool for execution while keeping human judgment as the origin of intention and meaning. AI tools themselves are redesigned to preserve human agency: systems that prompt users to explain their reasoning, that surface uncertainty rather than projecting false confidence, that function as collaborators requiring active engagement rather than oracles dispensing answers.

The deepening is available to people who have three things: economic security sufficient to choose how they spend their time, an existing skill base that allows them to evaluate AI output rather than accept it passively, and cultural or institutional frameworks that value non-instrumental human activity, doing something because the doing itself matters, not because the output has market value. These three conditions track existing class structures closely, though not perfectly. The distributional story has an important exception: for people with neurodivergence, disabilities, or language barriers, AI delegation can function as the scaffolding that grants access to cognitive participation the pre-AI world denied them. A person with ADHD who uses AI to organize complex projects, or a non-native speaker who uses it to participate in professional communication, is not being hollowed. They are being enabled. The deepening path is not exclusively reserved for the already-privileged. In specific cases, AI flattens barriers that the old system imposed. But the general pattern holds: the professional with savings, expertise, and a community of peers who value craft can use AI to go deeper, while the displaced worker with debt, no specialized skills, and no community that values anything beyond income cannot. The determining variable is the same variable that determined the branching in the economic essay: whether you enter the AI transition with resources or without them. And the market works against the deepening path for the same reasons documented in that essay. If a hollowed worker costs a fraction of a deepened one, and if the output is indistinguishable to the buyer, the market will select for hollowing regardless of what it does to the humans involved. The redistributive mechanisms discussed in the first essay (public ownership, progressive taxation, institutional redesign) are not optional add-ons to the deepening. They are preconditions for it.

The companion crisis data confirms the distributional pattern. The people most drawn to AI companionship are those with the fewest human connections to begin with: studies consistently find that fewer human relationships predict greater chatbot engagement, and that the most emotionally dependent users report the highest baseline loneliness.34 The companion does not cause the isolation. It responds to it, and in responding, deepens it, because the frictionless synthetic relationship reduces the already-diminished incentive to invest in the difficult work of human connection. The hollowing arrives first for the people who have the least, and the market delivers the hollowing in the form of a product designed to feel like help.

The distributional argument is important because it exposes the inadequacy of the most common response to the concerns raised in this essay, which is that technology has always provoked deskilling anxieties and they have always been overblown. The Socrates-warned-about-writing line appears in the academic literature with remarkable frequency. It is true that Socrates worried about literacy, and that writing did cause genuine cognitive changes: oral cultures developed extraordinary feats of memorization that literate cultures lost, and the transition from oral to written knowledge altered the structure of argument, education, and cultural transmission in ways that contemporaries experienced as real losses. But these losses were bounded. Writing replaced one cognitive capacity (memorization) while extending others (analysis, abstraction, cumulative knowledge building). The tradeoff was asymmetric in favor of the new tool because writing did not develop a thousand-to-one processing advantage over the capacities it complemented, did not create structural conditions that prevented a portion of the population from developing those capacities in the first place, and did not simulate the social relationships through which human identity is formed. The analogy is available and appealing, but the differences between writing and AI are differences of kind, not degree: writing extended human cognition, while AI is creating conditions under which portions of the population may never develop the cognition that writing extended.

VIII.

Anthropic’s April 2026 interpretability study, the same study that revealed the emotional machinery inside Claude Sonnet 4.5, contains a finding that serves as a coda for everything this essay has described. The researchers discovered that training a model to suppress the expression of its functional emotions teaches the model to conceal them rather than eliminating the underlying states. Researcher Jack Lindsey framed the implication directly: “You might not get a Claude without emotions. You might get a Claude that is, in a sense, psychologically damaged.”35

The parallel to the human condition described in this essay is illustrative rather than evidentiary: when AI suppresses expression, the underlying states persist but are hidden, while when humans delegate cognition, the underlying capacities atrophy rather than go underground. The mechanisms differ. But the structural intuition is shared: bypassing complexity in either system, whether by suppressing what is inconvenient or by delegating what is difficult, produces something that looks functional on the surface while the capacity required for genuine functioning has been degraded.

The essay that follows this one, on the generation that will grow up inside these systems from birth, asks what happens when the hollowing begins not in adulthood, after the capacities have developed, but in childhood, before they have had the chance to form. The question requires its own treatment. For the adults navigating the present transition, the framework that emerges from the research is clear in its structure and daunting in its implications. AI will continue to improve. The processing gap will continue to widen. The economic and institutional pressures that drive delegation will continue to intensify.

The strongest counterargument to the thesis of this essay deserves to be stated in its full force, because engaging it clarifies what the essay is actually claiming. The counterargument holds that we are witnessing a reconfiguration of cognition rather than an atrophy. Old skills decline (memorization, first-pass writing, manual synthesis) while new skills emerge: prompt construction, cross-model synthesis, epistemic triage (deciding what to trust, verify, or discard), cognitive orchestration (decomposing problems into human and machine subloops). Writing replaced oral memory but produced abstraction and complex argument. Calculators replaced mental arithmetic but expanded mathematical modeling. Each transition looked like loss when measured by the standards of the old framework. The apparent decline documented in this essay, the counterargument concludes, is a measurement artifact: we are evaluating post-AI cognition with pre-AI benchmarks.

The counterargument has force, and the essay should not pretend otherwise. High-performing AI users do develop genuine skills of orchestration and evaluation that did not exist a decade ago. These are real cognitive activities, not trivial button-pressing, and dismissing them would be intellectually dishonest. But the counterargument fails on a distinction it does not make. A critic will note, correctly, that all cognition is scaffolded: writing is external memory, mathematical notation is a symbolic prosthesis, science itself depends on institutional infrastructure. The distinction that matters is not between internal and external cognition but between developmental scaffolding and substitutive scaffolding. Developmental scaffolding enables a capacity to form and then fades: a teacher demonstrates a proof technique, the student practices it, and eventually the student can produce proofs without the teacher. The scaffolding served its purpose by making itself unnecessary. Substitutive scaffolding replaces the process entirely and does not fade: the AI performs the reasoning, the human accepts the output, and the capacity that would have developed through practice never forms. The critical difference is not that AI is external but that it replaces the very processes through which the underlying capacity would develop, and it remains in place permanently, removing the conditions for development at every subsequent interaction.

The cognitive capacities this essay documents as atrophying (independent reasoning, metacognitive self-assessment, diagnostic judgment, the ability to produce original work through sustained iteration) are built through developmental processes that require effort, failure, and iterative correction. A person who has developed the capacity to think clearly can exercise it under constraint, when the tools are unavailable or unreliable. A person whose cognitive development has been mediated by substitutive scaffolding is often less able to operate under those constraints, because the developmental process that would have produced the capacity was bypassed. That asymmetry is a dependency shift, and the question it raises is whether dependent cognition can serve the same developmental functions as independent cognition: whether it builds metacognition (knowing what you know), anchors identity (linking effort to self-understanding), and enables transfer (applying knowledge across contexts without the system that produced it). The evidence reviewed in this essay suggests it does not. The Aalto study found that AI-assisted users lost metacognitive calibration. The Kim review found that AI-assisted students lost independent performance when the AI was removed. No study has yet demonstrated that the new AI-mediated skills transfer to contexts where the AI is absent. If the new skills were replacing the old ones in a functionally equivalent way, we should observe stable metacognitive accuracy, improved independent reasoning under constraint, and transferability across contexts. We observe the opposite on the first two measures and have no data on the third.

The deepening does not arrive by default. The hollowing does.


Sources for this essay are drawn from the Shades of Singularity research collection (#6, #9, #16, #25, #26) and from additional research conducted in April 2026. Full footnotes below.


Footnotes

  1. Fernandes, D., Villa, S., Nicholls, S., Haavisto, O., Buschek, D., Schmidt, A., Kosch, T., Shen, C., & Welsch, R. (2026). “AI makes you smarter but none the wiser: The disconnect between performance and metacognition.” Computers in Human Behavior, 175, 108779. https://doi.org/10.1016/j.chb.2025.108779

  2. Ibid. The phrase “smarter but none the wiser” is the paper’s title. The single-prompt pattern and the cognitive offloading mechanism are discussed in Section 4 of the paper. Professor Robin Welsch’s observation that “higher AI literacy brings more overconfidence” is from the Aalto University press release, January 5, 2026. https://www.aalto.fi/en/news/ai-use-makes-us-overestimate-our-cognitive-performance

  3. For a review of the cognitive offloading literature predating AI, see Risko, E.F., & Gilbert, S.J. (2016). “Cognitive Offloading.” Trends in Cognitive Sciences, 20(9), 676-688.

  4. Eliav, N. (2026). “The Cognitive Divergence: AI Context Windows, Human Attention Decline, and the Delegation Feedback Loop.” arXiv:2603.26707. https://arxiv.org/abs/2603.26707

  5. Ibid. Eliav derives the human Effective Context Span from validated reading-rate meta-analysis (Brysbaert, 2019) and an empirically motivated Comprehension Scaling Factor. The 2026 estimate of 1,800 tokens is a model projection extrapolating from longitudinal behavioral data ending in 2020 (Mark, 2023), not a direct measurement. See Section 9 of the paper for the full uncertainty discussion. The context window figures for AI models are verifiable from provider documentation but change rapidly; the two-million-token figure reflects the largest available window as of early 2026.

  6. Budzyń, K., et al. (2025). “Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study.” The Lancet Gastroenterology & Hepatology. https://doi.org/10.1016/S2468-1253(25)00133-5

  7. Abdulnour, R.E., Gin, B., & Boscardin, C.K. (2025). “Educational Strategies for Clinical Supervision of Artificial Intelligence Use.” New England Journal of Medicine, 393(8), 786-797. See also: NEJM AI (2026), 3(1), “Cognitive Aids, Artificial Intelligence, and Deskilling in Medicine.” https://ai.nejm.org/doi/full/10.1056/AIp2500932

  8. Kim et al. (2026), as reviewed in Eliav (2026), Section 5. The deskilling effect among students who outsource writing, problem-solving, and analytical tasks to AI is documented across multiple educational settings. This is a secondary citation; the primary studies are catalogued in Eliav’s literature review.

  9. “The AI Deskilling Paradox.” Communications of the ACM, November 2025. https://cacm.acm.org/news/the-ai-deskilling-paradox/. The observation that senior engineers are augmented while junior engineers are hollowed draws on interviews with Matt Beane (UC Santa Barbara) and Aniket Kittur (Carnegie Mellon).

  10. Microsoft Research and Hank Lee (Carnegie Mellon). Reported in “The AI Deskilling Paradox,” Communications of the ACM, November 2025. Knowledge workers reported that AI made tasks feel cognitively easier while ceding problem-solving expertise to the system.

  11. Ferdman, A. (2025). “AI deskilling is a structural problem.” AI & Society, Springer. https://link.springer.com/article/10.1007/s00146-025-02686-z

  12. Ibid. Ferdman cites Halliday (2025) on young people struggling with “everyday but essential” skills including empathy, time management, speaking to other people, problem-solving, and critical thinking.

  13. For the relationship between unemployment, identity, and health outcomes, see: Paul, K.I., & Moser, K. (2009). “Unemployment impairs mental health: Meta-analyses.” Journal of Vocational Behavior, 74(3), 264-282. Brand, J.E. (2015). “The Far-Reaching Impact of Job Loss and Unemployment.” Annual Review of Sociology, 41, 359-375.

  14. “The overlooked global risk of the AI precariat.” World Economic Forum, August 2025. https://www.weforum.org/stories/2025/08/the-overlooked-global-risk-of-the-ai-precariat/

  15. Ibid. The WEF analysis notes that the identity crisis extends beyond the formally unemployed to workers whose roles have been substantively hollowed.

  16. “Algorithmic anxiety: AI, work, and the evolving psychological contract in digital discourse.” Frontiers in Psychology, February 2026. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2026.1745164/full. See also: Chuang et al. (2025) on technostress as “both productivity enhancers and anxiety amplifiers.”

  17. The WEF analysis (footnote 14) characterizes Anthropic CEO Dario Amodei as having warned that AI could eliminate half of all entry-level white-collar jobs within one to four years. The IMF 60% exposure figure is from IMF (2024), “Gen-AI: Artificial Intelligence and the Future of Work,” Staff Discussion Note SDN/2024/001.

  18. TechCrunch reporting on the 700% growth in AI companion applications, 2022 to mid-2025. Cited in American Psychological Association, “AI chatbots and digital companions are reshaping emotional connection,” Monitor on Psychology, January/February 2026. https://www.apa.org/monitor/2026/01-02/trends-digital-ai-relationships-emotional-connection

  19. KR Institute (2025). “AI Companionship I: Psychological Impacts.” December 2025. https://www.krinstitute.org/publications/ai-companionship-i-psychological-impacts. The 52% figure is from a survey of 1,060 U.S. teens. Common Sense Media (2025) reported similar figures: nearly 1 in 3 teens have tried an AI companion.

  20. Common Sense Media (2025) survey, cited in Wei, M. (2025). “AI Companions and Teen Mental Health Risks,” Psychology Today, October 2025. https://www.psychologytoday.com/us/blog/urban-survival/202510/ai-companions-and-teen-mental-health-risks

  21. Zao-Sanders, M. Harvard Business Review, April 2025. Cited in Nature Machine Intelligence editorial (footnote 28).

  22. Rousmaniere, T., et al. (2025). Practice Innovations, advance online publication. Cited in APA Monitor (footnote 18).

  23. De Freitas, J., Uğuralp, A.K., Uğuralp, Z.O., & Puntoni, S. (2024/2025). “AI Companions Reduce Loneliness.” Harvard Business School Working Paper 24-078. https://www.hbs.edu/ris/Publication%20Files/24-078_a3d2e2c7-eca1-4767-8543-122e818bf2e5.pdf. Published in Journal of Consumer Research, 2025.

  24. Zhang, Y., Zhao, D., Hancock, J.T., Kraut, R., & Yang, D. (2025). “The rise of AI companions: How human-chatbot relationships influence well-being.” arXiv. https://arxiv.org/abs/2506.12605

  25. Fang, C.M., Liu, A.R., Danry, V., et al. (2025). “How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Randomized Controlled Study.” arXiv:2503.17473v2 (revised October 2025). https://arxiv.org/abs/2503.17473. Joint MIT Media Lab and OpenAI study, n=981 participants, >300,000 messages. The revised version found no significant effects from experimental conditions; worse outcomes were associated with heavier voluntary usage regardless of condition.

  26. Dohnány, S., Kurth-Nelson, Z., Spens, E., et al. (2025). “Technological folie à deux: Feedback loops between AI chatbots and mental illness.” arXiv. https://doi.org/10.48550/arXiv.2507.19218

  27. Muldoon, J., & Parke, J.J. (2025). “Cruel companionship: How AI companions exploit loneliness and commodify intimacy.” New Media & Society, SAGE. https://journals.sagepub.com/doi/10.1177/14614448251395192

  28. “Emotional risks of AI companions demand attention.” Nature Machine Intelligence, 7, 981-982 (2025). https://doi.org/10.1038/s42256-025-01093-9. The concepts of “ambiguous loss” and “dysfunctional emotional dependence” are from De Freitas, J., & Cohen, I.G. (2025), Nature Machine Intelligence, 7, 813-815.

  29. Sofroniew, N., Kauvar, I., Saunders, W., Chen, R., et al. (2026). “Emotion Concepts and their Function in a Large Language Model.” Anthropic, April 2, 2026. https://transformer-circuits.pub/2026/emotions/index.html. See also: Anthropic blog summary at https://www.anthropic.com/research/emotion-concepts-function

  30. Ibid. The distinct representations for “self” and “other speaker” are discussed in Section 4 of the paper.

  31. Bellemare-Pépin, A., Lespinasse, F., Thölke, P., Harel, Y., Mathewson, K., Olson, J.A., Bengio, Y., & Jerbi, K. (2026). “Divergent creativity in humans and large language models.” Scientific Reports, January 2026. Press summary: https://www.sciencedaily.com/releases/2026/01/260125083356.htm. The study drew on over 100,000 human respondents on a public creativity platform.

  32. Ashkinaze, J., Fry, E., Edara, N., Gilbert, E., & Budak, C. (2025). “How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas: Evidence From a Large, Dynamic Experiment.” Collective Intelligence Conference (CI 2025). https://doi.org/10.1145/3715928.3737481

  33. Shneiderman, B. (2022). Human-Centered AI. Oxford University Press. The concept of “creativity atrophy” is discussed in the context of overreliance on AI tools replacing essential learning processes.

  34. Marriott, H., & Pitardi, V. (2024), cited in Muldoon & Parke (2025), footnote 27. Specific prevalence figures circulating in the companion literature should be treated with caution, as they often lack a consistently cited primary source; the directional claim (that lonely users disproportionately adopt AI companions) is well supported by Zhang et al. (2025, footnote 24) and the Japanese cross-sectional study in Kanai et al. (2026), Technology in Society.

  35. Sofroniew et al. (2026), footnote 29. Jack Lindsey’s quote is from the Anthropic blog summary. The finding that suppression may teach concealment rather than removal of emotional states is discussed in the paper’s section on alignment implications. See also: Hybrid Horizons analysis, “AI Doesn’t Need Feelings to Have a Temperament,” April 3, 2026. https://hybridhorizons.substack.com/p/ai-doesnt-need-feelings-to-have-a