Short Essay V

On the Inheritance We Choose

What happens when a society must reinvent how it passes on what it knows

~6 min read


In 2025, RAND surveyed over 1,200 American students and found that AI homework use rose from 48 percent to 62 percent over the course of the year, driven largely by middle and high schoolers. Over the same period, the share who believed AI was harming their critical thinking skills rose by more than ten points, to 67 percent. Two out of three students believed the tool was degrading their capacity to think. Three out of five were using it anyway, and the rate was accelerating. Over 80 percent reported that their teachers had never explicitly taught them how to use AI for schoolwork.

The previous essay in this collection documented what happens to adults who delegate cognitive work to AI: they become more productive but less able to evaluate their own performance, more efficient but less able to operate without the tool. This essay asks a different question. What happens when the delegation begins during the developmental period in which the capacity to think independently was supposed to form in the first place?

The distinction matters because the consequences differ in kind. An adult who delegates judgment loses a capacity built through years of practice. That loss is atrophy: the cognitive muscle weakens from disuse but retains a structural memory that makes rebuilding possible. A child who delegates from the start never builds the capacity at all. The brain remains plastic through the mid-twenties, so the difference is developmental and institutional rather than strictly neurobiological: the adult has a baseline to return to, established habits of effortful engagement, a structural memory of what independent reasoning feels like. The child has none of these. At population scale, the difference between "harder to develop later" and "never developed" may be academic, because the conditions for late development (remedial education, mentorship, structured re-skilling) are themselves under pressure from the same forces that prevented the development in the first place.

A sophisticated counterargument holds that this generation has simply optimized for a world in which independent reasoning is an obsolete metabolic expense, the way oral memorization became less important after writing. If AI will always be available and reliable, the optimization is rational. The argument of this essay, and of the collection it belongs to, is that AI will not always be available or reliable: that systems fail, that access is unevenly distributed, and that democratic governance requires capacities the technology cannot supply. Dependency on systems you cannot maintain, repair, or evaluate independently is a civilizational vulnerability, not an adaptation.

Every society has mechanisms for transmitting capability from one generation to the next: education, apprenticeship, mentorship, junior professional roles, the relationships in which a novice works alongside someone more experienced and learns by doing. The relevant feature of all these mechanisms is that they require the learner to struggle. The student writes the essay badly, gets feedback, writes it again less badly, and develops the capacity to write. The junior engineer ships broken code, gets it reviewed by a senior, and eventually develops the judgment to catch the flaw before shipping. The struggle is the transmission mechanism. Remove it, and the transmission stops.

AI is removing it, across every transmission channel, at the same time.

In the classroom, the Brookings Institution assessed AI’s threat to students as “qualitatively different from challenges posed by previous educational technologies.” Previous tools removed specific sub-tasks while leaving the core cognitive work intact. You could use a calculator for arithmetic and still have to understand the concept. AI performs the core cognitive work itself: it writes the essay, structures the argument, evaluates the evidence. The student’s remaining role is to provide the prompt and assess the output, and the assessment requires the very capacity the tool was supposed to help develop.

The entry-level pipeline, documented in the first essay as collapsing, functioned as the primary post-education transmission mechanism. Entry-level postings in the United States dropped 35 percent from 2023 to mid-2025. The causes are multiple (post-pandemic correction, rate shocks, tech over-hiring reversal), but AI’s contribution is distinct: companies are eliminating roles because AI can do the work, which means those positions are unlikely to return when the cycle turns. Hospitals that cut residency programs in the 1990s faced physician shortages in the 2000s. Airlines that reduced pilot training after 2001 spent the 2010s dealing with pilot shortages. The same dynamic is underway now, operating faster and across more sectors simultaneously.

What makes the current moment different from these historical examples is that all three transmission channels (education, entry-level employment, and social development) are being disrupted by the same technology at the same time. Previous disruptions were localized. The current disruption operates across all channels at once, which means there is no functioning backup. The American Psychological Association published a health advisory in June 2025 warning that AI companion software “may displace or interfere with the development of healthy real-world relationships” in children and adolescents. For adults, the companion displaces existing relational capacities. For children, it may prevent those capacities from forming. Social skills develop through practice in relationships where the other person is unpredictable, sometimes difficult, and occasionally hurtful. The AI companion is none of these things, which makes it a poor training environment for the relationships it is displacing.

The home environment matters at least as much as the school, and for the youngest children it matters more. The period from birth to five is the most intensive phase of cognitive, social, and emotional development, mediated almost entirely by caregiver interaction: joint attention, language scaffolding, responsive turn-taking. If AI-mediated devices are replacing these interactions, the transmission breakdown begins years before the child enters the school system.

Finland provides the clearest evidence that the institutional response is possible, though not yet that it works under full AI saturation. The country has included media literacy in its national curriculum since the 1970s, extending it through each wave of technology and now incorporating AI literacy. Estonia has built digital literacy into its curriculum since 2012. Singapore includes AI education from primary school. These are early-stage efforts with no outcome data sufficient to prove they work at scale, but they demonstrate that governments can decide to treat cognitive independence as a developmental requirement and design curricula around it. For underserved students, AI can function as a powerful equalizer: a personalized tutor in any language, at any hour, at no cost. The challenge is capturing that opportunity without surrendering to the substitutive pattern.

The outcome depends on which pattern becomes entrenched first. Children growing up inside AI from early childhood, delegating homework, forming bonds with companions, entering a workforce that has eliminated the pipeline through which judgment was previously transmitted, will be AI-fluent, capable with the tool, and dependent on it. Countries that treat cognitive independence as a non-negotiable developmental outcome, that build AI-free experiences into the curriculum and redesign apprenticeship models to use AI for acceleration rather than bypass, will produce something different. Finland, Estonia, and Singapore are attempting the second approach. The substitutive pattern is accelerating among middle and high schoolers in the countries that are not. The institutional alternatives operate on timescales of decades. The technology operates on timescales of months.

The final essay in this collection traces the causal chain from institutional failure to catastrophic risk. Its closing argument depends on a premise this essay is intended to establish: that managing catastrophic risk requires a population capable of independent judgment, institutional adaptation, and collective action under uncertainty. Democracies have always functioned with uneven cognitive distribution, but AI governance requires sustained public support for safety measures against the lobbying power of the industries that benefit from the status quo. A society that has failed to transmit the capacity for informed collective judgment to its next generation may not be able to implement the safety measures the transition demands. The circuit breakers require humans who can pull them, and a democracy requires citizens who can decide when pulling them is warranted.

Go Deeper

Read the full essay → The complete analysis: sourced, footnoted, with counterarguments engaged and three branching futures traced.