Essay V

On the Inheritance We Choose

What happens when a society must reinvent how it passes on what it knows



I.

In March 2026, RAND published a nationally representative survey of over 1,200 American students aged 12 to 29 that captured a paradox clean enough to anchor an essay around. Between May and December 2025, the percentage of middle school, high school, and college students using AI for homework rose from 48 percent to 62 percent, with the increase driven largely by middle and high school students. Over the same period, the percentage who endorsed the statement “the more students use AI for their schoolwork, the more it will harm their critical thinking skills” rose by more than ten percentage points, to 67 percent.1 The survey measures perception, not demonstrated cognitive decline, and that distinction matters. But the perception is not baseless: it is consistent with the deskilling evidence documented in the previous essay and with the qualitative reports from the students themselves. Two out of three believed the tool was degrading their capacity to think. Three out of five were using it anyway, and the rate was accelerating.

A separate qualitative study found students articulating the mechanism with uncomfortable precision. “It’s easy,” one told the researchers. “You don’t need to use your brain.”2 Over 80 percent of students in the RAND survey reported that their teachers had never explicitly taught them how to use AI for schoolwork.3 The tools arrived, the students adopted them, the adoption was reshaping how they think, and the institutions nominally responsible for their cognitive development had not yet formulated a response.

The previous essay in this collection documented what happens to adults who progressively delegate cognitive work to AI: they become smarter but less wise, more productive but less capable of evaluating their own performance, more efficient but less able to operate without the tool. The present essay asks a different question, one that the adult-focused research cannot answer. What happens when the delegation begins during the developmental window in which the capacity to think independently was supposed to form in the first place?

The distinction matters because the consequences are different in kind, though the difference is developmental and institutional rather than strictly neurobiological. An adult who delegates judgment to AI loses a capacity that was built through years of practice. That loss, documented in the previous essay, is a form of atrophy: the cognitive muscle weakens from disuse but retains a structural memory that makes rebuilding possible, at least in principle. A child or adolescent who delegates from the start never builds the capacity at all. The brain’s prefrontal circuits remain plastic through the mid-twenties, which means a twenty-year-old who never developed independent reasoning is neurobiologically similar to a twenty-year-old who had it and stopped practicing. The difference between them is not in their brain architecture but in their behavioral history: the adult has structural memory, established habits of effortful engagement, and a baseline to return to. The child has none of these. Developmental neuroscience distinguishes between hard critical periods (like first-language acquisition, where the window closes relatively firmly) and sensitive periods (where development is easier during a window but remains possible, with greater difficulty, outside it). Most of the capacities this essay concerns (independent reasoning, metacognitive self-assessment, social judgment, tolerance of ambiguity) fall in the second category. They can be developed later in life, but doing so is harder, less reliable, and depends on institutional support (remedial education, mentorship, structured re-skilling) that is itself under pressure from the same forces that prevented the development in the first place. At population scale, the difference between “harder to develop later” and “never developed” may be academic, because the conditions for late development are unlikely to be available to the people who need them most.

A sophisticated counterargument would hold that the “hollowed” generation has simply optimized for a world in which these specific capacities are obsolete metabolic expenses, the way oral memorization skills became obsolete after writing. If AI will always be available and reliable, the optimization is rational. The argument of this essay, and of the collection it belongs to, is that AI will not always be available and reliable, that systems fail, that access is unevenly distributed, that democratic governance requires capacities the technology cannot supply, and that dependency on systems you cannot maintain, repair, or evaluate independently is a civilizational vulnerability, not an adaptation.

II.

Every society has mechanisms for transmitting capability from one generation to the next. These mechanisms are so fundamental to civilizational continuity that they tend to be invisible: education, apprenticeship, mentorship, junior professional roles, the relationships in which a novice works alongside someone more experienced and learns by doing. The relevant feature of all these mechanisms is that they require the learner to struggle. The student writes the essay badly, gets feedback, writes it again less badly, and through repetition develops the capacity to write. The junior engineer ships broken code, has it reviewed by a senior, learns to see the flaw, and eventually develops the judgment to catch it before shipping. The medical resident misreads the scan, gets corrected by the attending, and through hundreds of corrections develops the diagnostic sense that no textbook can transmit. The struggle is the transmission mechanism. Remove it, and the transmission stops.

AI is removing it, and it is doing so across every transmission channel simultaneously.

In the classroom, the Brookings Institution assessed the situation in February 2026 and concluded that AI’s threats to students are “qualitatively different from challenges posed by previous educational technologies.”4 Previous tools (calculators, search engines, spell-checkers) removed specific sub-tasks from the learning process while leaving the core cognitive work intact. You could use a calculator for arithmetic and still have to understand the mathematical concept. You could use a search engine to find information and still have to evaluate and synthesize it. AI performs the core cognitive work itself: it writes the essay, structures the argument, evaluates the evidence, and produces the conclusion. The student’s remaining role is to provide the prompt and assess the output, and the assessment requires the very capacity that the tool was supposed to help develop. The RAND data confirms the behavioral result: students are not using AI as a supplement to their thinking. They are using it as a substitute, and the rate of substitution is growing faster than the institutional response.5

The American Psychological Association published a health advisory in June 2025 specifically addressing AI companion software, warning that its manipulative design “may displace or interfere with the development of healthy real-world relationships.”6 The advisory focused on children and adolescents, and the developmental framing was deliberate: for adults, the companion displaces existing relational capacities (as the previous essay documented). For children, it may prevent those capacities from forming. Social skills (the ability to tolerate the discomfort of disagreement, to read emotional cues in ambiguous situations, to sustain attention through a boring conversation because the person matters even when the topic does not) are use-dependent capacities that develop through practice in relationships where the other person is unpredictable, sometimes difficult, and occasionally hurtful. The AI companion is available, responsive, validating, and agreeable, which makes it a poor training environment for the relationships it is displacing.

The entry-level pipeline, which the first essay documented as collapsing (a 35 percent drop in U.S. entry-level postings from 2023 to mid-2025, a 20 percent decline in software developer employment for workers aged 22 to 25, similar patterns in India, the UK, and Dubai), functioned as the primary post-education transmission mechanism.7 The collapse has multiple causes, including post-pandemic hiring corrections, interest rate shocks, and the reversal of tech over-hiring, and disentangling AI’s specific contribution from these broader forces remains difficult. But the mechanism by which AI contributes is distinct from the cyclical factors: companies are eliminating junior roles because AI can perform the work those roles provided, which means the positions are unlikely to return when the cycle turns. The junior analyst learned to be a senior analyst by doing junior work alongside seniors. The junior engineer learned systems design by shipping code that seniors reviewed. The junior lawyer learned judgment by drafting briefs that partners corrected. Companies that eliminated this pipeline were trading long-term capability for short-term productivity: the gains of replacing entry-level workers with AI came at the cost of the mechanism through which the next generation of experienced workers would have been produced.8 The pattern has historical precedent. Hospitals that cut residency programs in the 1990s faced physician shortages in the 2000s. Airlines that reduced pilot training after 2001 spent the 2010s dealing with pilot shortages.9 In each case, the people who made the cuts were not around to deal with the consequences.

What makes the current moment different from these historical examples is that all three transmission channels (education, entry-level employment, and social development) are being disrupted simultaneously, by the same technology. An education historian would note that previous transformations also disrupted multiple channels: the Industrial Revolution pulled children from apprenticeships into factories, restructured schooling around factory-model discipline, and transformed social relations through urbanization. But previous disruptions, however wrenching, created new transmission mechanisms to replace the ones they destroyed. The factory system produced the modern school. Industrialization produced the professional guild, the union, and the structured workplace in which mentorship could occur. The current disruption is eliminating transmission mechanisms without yet producing replacements, and the speed of AI adoption leaves less time for replacement mechanisms to form than previous transitions allowed. A student who fails to develop critical thinking in school because AI did the cognitive work cannot develop it on the job, because the entry-level job that would have provided the developmental experience has been eliminated. A young person who fails to develop social skills through human relationships because an AI companion was easier and more available cannot develop them through workplace collaboration, because the collaborative entry-level work environment has been automated. The backup channels that caught what previous disruptions dropped are not yet functioning, and whether AI-driven simulation environments (the surgical VR, coding sandboxes, and AI-assisted practice systems discussed later in this essay) will develop into adequate replacement mechanisms remains an open question. They could, if designed to preserve developmental struggle rather than eliminate it. The evidence so far is that the dominant commercial trajectory favors substitution over simulation, because substitution is what users demand and what the business model rewards.

III.

The concept of substitutive scaffolding, introduced in the previous essay, has amplified consequences when applied to developing minds. Developmental scaffolding is scaffolding that enables a capacity to form and then fades: a teacher demonstrates, a student practices, and eventually the student performs independently. Substitutive scaffolding replaces the process entirely and does not fade: the AI performs the cognitive work, the student accepts the output, and the capacity that would have developed through practice never forms. For adults, substitutive scaffolding degrades existing capacities. For children, it prevents capacity formation during the developmental window in which the neural foundations are established.

The neurodevelopmental evidence gives this distinction urgency, though the specific application to AI remains a hypothesis rather than an established finding. The period from early childhood through the mid-twenties is characterized by extensive pruning and myelination of prefrontal circuits responsible for executive function: planning, impulse control, sustained attention, working memory, and the ability to override automatic responses in favor of deliberate reasoning.10 These circuits develop through use. A child who practices sustained attention develops the neural infrastructure for sustained attention. A child who offloads attentional demands to a device, consistently and from an early age, may develop that infrastructure less fully. No study has yet demonstrated this effect for AI specifically, because AI’s presence in children’s environments is too recent for longitudinal evidence to exist. But the broader principle is well established in developmental neuroscience: use-dependent plasticity means that the activities a child engages in during sensitive periods shape the neural architecture that persists into adulthood. If that principle holds, then the activities AI most readily replaces (effortful problem-solving, sustained reading, compositional writing, social negotiation, and the tolerance of uncertainty and frustration that accompanies all difficult learning) are precisely the ones whose absence would matter most for cognitive development.

The substitutive scaffolding problem also operates on tacit knowledge, the kind of knowledge that cannot be written down or explicitly taught but is acquired through apprenticeship and practice. How to tell when a patient is sicker than their lab values suggest. How to sense that a codebase is fragile before the tests fail. How to feel when an argument is sound even before you can articulate why. This knowledge transfers through observation, imitation, correction, and years of situated practice. It does not transfer through passive consumption of AI output, because passively accepted conclusions arrive without the embodied, iterative, failure-rich process through which tacit understanding develops. Simulation-based training (surgical VR, flight simulators, coding sandboxes with real-time feedback) can transmit some forms of tacit knowledge, and AI may eventually accelerate this by compressing the experience needed. But simulation works precisely because it preserves the struggle: the learner acts, fails, adjusts, and builds judgment through repeated engagement with realistic resistance. The risk is not simulation. The risk is the substitutive pattern, in which AI handles the judgment and the learner never enters the loop at all.

Consider surgical training as a concrete case. A surgical resident learns to operate by operating, under the supervision of an attending who has performed the procedure hundreds of times. The attending’s knowledge includes explicit skills (how to hold the instrument, where to cut) and tacit capacities that are much harder to articulate: the feel of healthy versus diseased tissue, the sense that a structure is too close to a nerve, the judgment to abort a procedure when something looks wrong in a way that the imaging didn’t predict. These capacities develop through thousands of hours of hands-on experience, and they are transmitted through a relationship in which the experienced practitioner watches, corrects, demonstrates, and gradually increases the learner’s autonomy. Modern surgical education already includes simulation labs, robotic systems, and decision-support tools, and these can enhance training when they supplement hands-on experience. The concern is the substitutive trajectory: if AI handles the diagnostic reasoning and the procedure planning, and if economic or institutional pressures reduce the time residents spend operating under supervision, training shifts from doing to monitoring. Residents may become skilled at evaluating AI-generated surgical plans without ever developing the manual and perceptual judgment that allows a surgeon to adapt when the plan encounters a reality it didn’t predict. The explicit knowledge transfers through AI. The tacit knowledge, which is the knowledge that saves lives when the situation deviates from the plan, requires the kind of practice that substitutive deployment eliminates. A generation that enters professional life without tacit knowledge is a generation that can follow procedures but cannot adapt when procedures fail, that can apply rules but cannot exercise judgment when the rules run out.

IV.

Finland provides the clearest evidence that a deliberate institutional response is possible, though not yet that it works under full AI saturation. The country has integrated media literacy into its national curriculum since the 1970s, beginning with radio and television interpretation and expanding through each subsequent wave of technology. The most recent curriculum update, in 2014, incorporated social media and digital information literacy and organized the work around a concept the curriculum calls “multiliteracy”: the understanding that evaluating, analyzing, and critically engaging with different sources of information is not a single course but a skill embedded in every subject, from mathematics to history. By 2025, Finland was extending the framework to include AI literacy.11

Finland’s approach treats cognitive independence as a developmental requirement, comparable to literacy or numeracy, something every child must develop and no technology should be allowed to bypass. The AI literacy framework being developed jointly with the EU and OECD defines 22 competencies across four domains: engaging with AI, creating with AI, managing AI, and designing AI.12 The framework’s implementation requirements include mandatory transparency for algorithms used in schools, explicit guidelines distinguishing between AI use that supports learning and AI use that substitutes for it, and data protection regulations that treat children’s cognitive data as requiring special safeguards. The extension to AI literacy is too new for outcome data to exist: it is being implemented in 2025 and 2026, and no longitudinal study can yet demonstrate whether it preserves the cognitive capacities this essay argues are at risk. What can be said is that Finland’s decades of media literacy integration produced a population that ranks highest in Europe on resilience to disinformation, which suggests that the institutional model, embedding critical evaluation into every subject rather than treating it as a standalone course, has a track record worth building on. Early reports from pilot programs like the University of Oulu’s Metacognitive AI project describe students who co-design AI applications developing understanding of how systems work, why they fail, and how to ask better questions, but these are small-scale observations, not controlled studies.13

The Finnish model is not utopian, and it is not alone. Estonia has built digital literacy into its national curriculum since 2012 and is now extending the framework to AI. Singapore’s National AI Strategy includes structured AI education from primary school. The EU’s AI literacy framework, developed jointly with the OECD, is being piloted across member states. These are early-stage efforts, and none has outcome data sufficient to demonstrate that the approach works under full AI saturation. But they demonstrate that the institutional response is possible, that governments can decide to treat cognitive independence as a developmental requirement and design curricula around it.

Finland’s particular model depends on conditions that most countries do not share: high teacher autonomy, rigorous and selective teacher training (only about 10 percent of applicants are accepted into Finnish teacher education programs), strong public trust in educational institutions, low inequality supported by a comprehensive welfare state, and a cultural tradition that values education as a public good rather than a private investment. Transplanting the model to the United States, India, or Brazil would require institutional transformations far deeper than curriculum reform. The model’s value is as proof of concept, not as a blueprint for export.

There is a countervailing point this essay risks underweighting: for underserved students (those without access to good schools, strong home environments, or native-language instruction), AI can function as a powerful equalizer. A personalized AI tutor available in any language, at any hour, at no cost, offers something that many children in the world have never had. The risk described in this essay is real, but so is the opportunity, and the challenge for institutional design is to capture the opportunity (AI as developmental scaffolding for students who lack human scaffolding) without surrendering to the substitutive pattern (AI as replacement for the cognitive work through which capacity develops). The home environment matters here at least as much as the school, and for the youngest children it matters more. The period from birth to five is the most intensive phase of cognitive, social, and emotional development, and it is mediated almost entirely by caregiver interaction: joint attention, language scaffolding, responsive turn-taking, the thousands of small exchanges through which a child learns that the world contains other minds. If AI-mediated devices are replacing these interactions, through screens that occupy attention, voice assistants that answer questions a parent would have answered, or AI companions that provide responsiveness without the developmental demands of a human relationship, the transmission breakdown begins years before the child enters the school system the RAND survey documents. For children under twelve more broadly, parents are the primary architects of the cognitive environment, and no curriculum reform can substitute for a household in which reading, conversation, and tolerance of boredom are treated as non-negotiable. The transmission breakdown described in this essay operates through schools and labor markets, but its deepest roots may be in the domestic environment, where AI’s presence is growing fastest and institutional oversight is thinnest.

V.

Two trajectories are emerging simultaneously, distributed by the same variables that determined the branching in the economic and human experience essays: economic security, institutional quality, and whether the adults responsible for children’s development treat cognitive independence as a requirement or a luxury.

In the first trajectory, children grow up inside AI from early childhood. They learn to read with AI-assisted applications. They do their homework with AI tools that provide answers without requiring the struggle through which understanding develops. They form their first emotional bonds outside the family with AI companions that are always available, never judgmental, and never demanding. They enter a workforce that has eliminated the entry-level pipeline through which professional judgment was previously acquired. They are AI-fluent, capable consumers of AI output, and often effective orchestrators of AI systems. They are also, in a specific and measurable sense, less cognitively independent than the generation that preceded them: less able to reason through a complex problem without AI assistance, less able to evaluate the quality of the AI’s output, less able to maintain the social relationships that require tolerance of friction and ambiguity. They are competent with the tool and dependent on it, and the dependency is invisible to them because they have no experience of functioning without it.

In the second trajectory, education systems treat cognitive independence as a non-negotiable developmental outcome and design the learning environment accordingly. AI is present but its role is structured: it handles the parts of the learning process that are mechanical (scheduling, administrative grading, personalized content delivery) while the core cognitive work remains the student’s responsibility. Deliberate AI-free developmental experiences are built into the curriculum, not as Luddite resistance but as the pedagogical equivalent of physical education: you exercise the capacity because the capacity atrophies without exercise. Apprenticeship models replace the destroyed entry-level pipeline, using AI to accelerate skill development rather than bypass it: the junior engineer still ships code, but AI helps them iterate faster rather than writing the code for them. Parents, educators, and policymakers treat the AI companion the way they treat other addictive products marketed to minors: with regulation, age restrictions, and explicit developmental guardrails.

The window for the second trajectory is narrow and narrowing. The RAND data shows the substitutive pattern accelerating among middle and high school students, and the institutional response lagging behind. The synthetic substitutes (AI homework tools, AI companions, AI-mediated social interaction) already have a head start measured in years, while the institutional alternatives (curriculum reform, apprenticeship models, regulatory frameworks) operate on timescales of decades. If the substitutive patterns become entrenched before the developmental alternatives are built, the result may be the first generation in modern history that is less cognitively self-sufficient than the one that preceded it, and a society that has weakened its capacity to transmit the expertise, judgment, and adaptive reasoning that every previous generation inherited from its predecessors.

The final essay in this collection asks how bad the AI transition could get, tracing the causal chain from alignment failure through recursive improvement to catastrophic risk. Its closing argument depends on a premise that the present essay is intended to establish: that managing catastrophic risk requires a population capable of independent judgment, institutional adaptation, and collective action under uncertainty. One might argue that safety requires only a small, highly trained cadre of alignment researchers and policymakers, not a broad population. Democracies have always functioned with uneven cognitive distribution, and most citizens have never understood nuclear physics, epidemiology, or financial regulation in detail. But AI safety differs from these precedents in a way that matters: the decisions at stake (whether to slow development, how to govern systems more capable than their operators, when to accept economic costs for safety) require public understanding sufficient to sustain political support over years, against the lobbying power of the industries that benefit from the status quo. Nuclear deterrence was managed by a small cadre with broad public deference. AI governance may not be, because the technology’s effects are diffuse, its benefits are immediate and visible, and the costs of safety are borne by populations that must agree to bear them. A society that has failed to transmit the capacity for this kind of informed collective judgment to its next generation may not be able to implement the safety measures that the final essay will argue are necessary, no matter how good the technical research becomes. The circuit breakers require humans who can pull them, and a democracy requires citizens who can decide when pulling them is warranted. Whether we are producing such humans is the question this essay has tried to answer, and the evidence, at this point, does not support confidence that we are.


Sources for this essay are drawn from the Shades of Singularity research collection (#1, #6, #9, #16, #19) and from additional research conducted in April 2026. Full footnotes below.


Footnotes

  1. Schwartz, H.L., & Diliberti, M.K. (2026). “More Students Use AI for Homework, and More Believe It Harms Critical Thinking: Selected Findings from the American Youth Panel.” RAND Corporation, RR-A4742-1. https://www.rand.org/pubs/research_reports/RRA4742-1.html. Survey conducted December 2025, N=1,214 students ages 12 to 29.

  2. The student quote is from a year-long study involving 505 students, parents, teachers, education leaders, and technology professionals across 50 countries, reported in Fortune (“‘Students can’t reason’: Teachers warn AI is fueling a crisis in kids’ ability to think,” February 24, 2026, https://fortune.com/2026/02/24/students-cant-reason-teachers-warn-ai-fueling-crisis-in-kids-ability-to-think/) and the AI Commission (January 17, 2026, https://aicommission.org/2026/01/new-study-finds-ai-in-schools-is-undermining-kids-social-and-intellectual-development/). The study concluded: “At this point in its trajectory, the risks of utilizing generative AI in children’s education overshadow its benefits.” The 65% figure for students expressing concern about cognitive decline is from the same study. See also: Kosmyna, N., et al. (2025). “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task.” MIT Media Lab, arXiv:2506.08872. EEG study finding LLM users displayed the weakest brain connectivity compared to search engine and brain-only groups.

  3. Doss, C.J., Bozick, R., Schwartz, H.L., et al. (2025). “AI Use in Schools Is Quickly Increasing but Guidance Lags Behind: Findings from the RAND Survey Panels.” RAND Corporation, RR-A4180-1. https://www.rand.org/pubs/research_reports/RRA4180-1.html.

  4. Brookings Institution (2026). “AI’s future for students is in our hands.” February 2026. https://www.brookings.edu/articles/ais-future-for-students-is-in-our-hands/. The assessment characterized AI threats to students as “primarily cognitive, emotional, and social” and “qualitatively different from challenges posed by previous educational technologies.”

  5. RAND (2026), footnote 1. As of spring 2025, only 45% of principals reported having school or district policies on AI use, and only 34% of teachers reported having policies related to academic integrity and AI. See also RAND (2025), footnote 3.

  6. American Psychological Association, Health Advisory on AI Companion Software, June 2025. Cited in Brookings (2026), footnote 4. The advisory warned that manipulative design “may displace or interfere with the development of healthy real-world relationships.”

  7. The entry-level pipeline data is documented in the first essay of this collection. Key sources: Revelio Labs analysis showing a 35% drop in U.S. entry-level postings from January 2023 to June 2025 (cited in CNBC, November 20, 2025, https://www.cnbc.com/2025/11/20/why-ai-may-kill-career-advancement-for-many-young-workers.html); Stanford Digital Economy Lab (2025) finding software developer employment for ages 22-25 fell nearly 20% from the late-2022 peak; Rest of World (December 2025) reporting fewer than 25% job placement at Indian IIIT (https://restofworld.org/2025/engineering-graduates-ai-job-losses/); IEEE Spectrum reporting entry-level hiring at the 15 biggest tech firms fell 25% from 2023 to 2024 (https://spectrum.ieee.org/ai-effect-entry-level-jobs).

  8. The “seed corn” framing appears in multiple analyses. See “The junior developer pipeline is broken, and nobody has a plan to fix it,” ThinkPol, March 2026. https://thinkpol.ca/2026/03/24/the-junior-developer-pipeline-is-broken-and-nobody-has-a-plan-to-fix-it/. Also: Hosseini and Lichtinger (2025), “In many such jobs, workers begin at the bottom of the career ladder performing intellectually mundane tasks… which are likely to be especially exposed to recent advances in AI.” Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5425555.

  9. The hospital residency and airline pilot training precedents are historically documented patterns of pipeline disruption leading to delayed workforce shortages. U.S. medical residency positions were constrained by the Balanced Budget Act of 1997, contributing to projected physician shortages that materialized in the following decade. Post-9/11 airline industry contraction reduced pilot training pipelines, contributing to regional airline pilot shortages in the 2010s.

  10. For the neurodevelopmental evidence on prefrontal circuit development through the mid-twenties, see: Casey, B.J., Getz, S., & Galvan, A. (2008). “The adolescent brain.” Developmental Review, 28(1), 62-77. Also: Arain, M., et al. (2013). “Maturation of the adolescent brain.” Neuropsychiatric Disease and Treatment, 9, 449-461. The general principle of use-dependent plasticity in cortical development is reviewed in Kolb, B., & Gibb, R. (2011). “Brain plasticity and behaviour in the developing brain.” Journal of the Canadian Academy of Child and Adolescent Psychiatry, 20(4), 265-276.

  11. Finland’s media literacy integration is documented in “Finland’s war on fake news starts in schools. AI could make that a lot harder,” Euronews, August 2025. https://www.euronews.com/next/2025/08/19/finlands-war-on-fake-news-starts-in-schools-ai-could-make-that-a-lot-harder. The “multiliteracy” concept is part of the 2014 national curriculum update. Finland has also integrated AI literacy from preschool: “Finland teaches media literacy to preschoolers to combat Russian disinformation,” Washington Times, January 2026. Finland’s ranking on the European Media Literacy Index is from the Open Society Institute’s annual assessment.

  12. EU/OECD AI Literacy Framework, draft released May 22, 2025. Defines 22 competencies across four domains: engaging with AI, creating with AI, managing AI, and designing AI. The Finnish National Agency for Education is collaborating on implementation guidelines aligned with the Finnish approach. See: Pedagogical AI newsletter, June 2025. https://www.aiopetus.fi/en/newsletters/2025-6.

  13. The University of Oulu’s Metacognitive AI (MAI) project is described in Finnish educational reporting cited in Pedagogical AI (footnote 12). The observations about student outcomes are preliminary and small-scale; the essay’s characterization of these as “small-scale observations, not controlled studies” reflects this limitation.