The structural parallel between singularity discourse and religious eschatology is not metaphorical. It is architectural. Both organize history around a transformative event that renders the current order obsolete. Both promise transcendence of human limitation: immortality through mind uploading rather than resurrection, cognitive enhancement rather than divine grace, post-scarcity rather than paradise. Both generate a moral framework in which present sacrifice is justified by future redemption. Both produce prophets (Kurzweil, who assigned the singularity a date of 2045, much as Hal Lindsey assigned the Rapture to the 1980s), scriptures (Bostrom's Superintelligence, Kurzweil's The Singularity Is Near), heresies (disagreement about timelines or feasibility), and schisms (the split between AI optimists and AI safety advocates mirrors the split between post-millennialists, who expect a golden age achieved through gradual progress, and pre-millennialists, who expect tribulation before redemption). David Noble's The Religion of Technology (1997) traced this lineage to its origins: Western technological ambition has been inseparable from Christian millenarian expectation since the medieval period. The singularity is the latest iteration of a thousand-year-old pattern in which technology becomes the instrument of salvation.

A necessary caveat: identifying this structural resemblance does not make the empirical claims wrong. AI systems are measurably getting more capable. Whether they will be transformative enough to restructure institutions is an empirical question, and noting that the discourse sounds like religion tells you something about its psychology and sociology while telling you nothing about its accuracy. The danger is not that the claims are false because they resemble prophecy. The danger is that the eschatological framing distorts how people respond to the claims, even if some of the claims turn out to be correct.
In 2017, Anthony Levandowski, a former Google and Uber engineer, founded the Way of the Future, an actual religious organization that received IRS tax-exempt status. Its doctrine held that a superintelligent AI would emerge and that humanity's role was to prepare for its arrival with appropriate reverence. Levandowski told Wired that the coming AI would not be "a god in the sense that it makes lightning or causes hurricanes," but that something "a billion times smarter than the smartest human" would function as one. The church was dissolved in 2021, but the impulse it expressed has not dissolved. It has diffused. Hava Tirosh-Samuelson argued in Religions (2021) that transhumanism operates as a secularist faith: it provides an eschatology (the singularity), a soteriology (technological salvation from suffering and death), a moral imperative (accelerate progress), and a community of believers organized around shared texts and conferences. The question is whether transhumanism is like a religion or whether it is one, in every functional sense except self-identification.
Gebru and Torres coined the acronym TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism) in a 2024 paper in First Monday to describe what they argue is an interconnected bundle of ideologies with shared origins in the Anglo-American eugenics tradition. Their thesis: the pursuit of AGI is framed as an unquestioned good by a cluster of movements that use the language of safety and existential risk to justify the concentration of resources and power around a small number of institutions and individuals. The TESCREAL critique has real analytical force. It identifies a genuine ideological ecosystem with shared funding networks, overlapping personnel, and a remarkably consistent eschatological narrative. It also has limitations. The EA Forum’s detailed response noted that the movements grouped under the acronym contain deep internal disagreements: Yudkowsky (who wants to halt AI development) and Andreessen (who calls any deceleration a moral crime) share almost no policy positions despite being grouped together. The TESCREAL framework is more useful as a map of shared cultural assumptions than as a description of a unified movement with coordinated goals.
The 2025 open letter by the Future of Life Institute, with over 850 signatories including five Nobel laureates, calling for a prohibition on superintelligence development until there is broad scientific consensus that it can be done safely, illustrates the shade's central claim. The letter's structure is recognizably eschatological: a transformative event is approaching; it could bring salvation or damnation; the faithful must act now to determine the outcome; and delay is sinful because the stakes are infinite. The letter's language is scientific, but its emotional grammar is millenarian. The effective accelerationist (e/acc) counter-movement, championed by venture capitalists like Marc Andreessen, inverts the valence while preserving the structure: acceleration is virtue, deceleration is sin, and the transformative event will redeem rather than destroy. Both camps share the foundational assumption that AI will be powerful enough to render current institutional frameworks irrelevant. They disagree about whether this is wonderful or terrifying. They agree that it is inevitable.
The practical danger is the reason this shade belongs in the collection despite its speculative nature. Faith in AI salvation produces the same complacency as faith in divine salvation: waiting for deliverance rather than building solutions. This is not an inevitable feature of eschatological belief. The abolitionist movement and the Civil Rights movement were both organized through religious institutions and motivated by eschatological conviction, and both produced sustained institutional change. What distinguishes the techno-eschatological case is that it locates agency in the technology rather than in human political action. Religious social movements told believers they must build the kingdom of God on earth through their own effort. Singularity discourse tells believers the technology will do the building. The result is that political energy flows toward accelerating or decelerating the technology rather than toward constructing the distributive institutions that would determine whether transformative technology benefits everyone or only its owners. If enough people believe that AI will solve climate change, cure disease, eliminate poverty, and transcend human limitation, the political will to build the institutions required to distribute those benefits (the central argument of Shades #17, #20, and #24) erodes. The techno-eschatological mindset converts governance problems into engineering problems, institution-building into optimization, and political struggle into a waiting game. The governed outcome (0) is the lowest in the collection: even well-governed, this dynamic provides no benefit. The unmanaged outcome (-2) reflects the scenario in which faith in AI transformation actively displaces the institutional work that every other shade requires.
Key tension: The most fervent AI optimists and the most fervent AI pessimists share a common structure: both believe AI will be transformative enough to render current institutions irrelevant. This shared assumption is the most dangerous idea in the discourse.