Even if the economics are managed and UBI prevents starvation, a deeper wound remains. Work provides identity, structure, social connection, and purpose: what Marie Jahoda called the “latent functions of employment” in her foundational 1982 research on unemployment’s psychological effects. Recent research confirms the pattern holds for AI-driven displacement specifically. A 2025 Frontiers in Psychology study identified “algorithmic anxiety” as a distinct syndrome encompassing fears of job loss, identity erosion, and existential questions about human value in an automated future (Frontiers in Psychology, 2025). A PMC study of Indian IT professionals found that technology-induced displacement triggers higher psychological distress than traditional layoffs, because the loss feels permanent and inevitable (PMC, 2025). In India, where 68% of white-collar workers fear their roles could be automated within five years, job identity is tightly linked to personal and familial pride; displacement triggers social withdrawal and internalized shame.
The World Economic Forum’s Global Foresight Network has identified occupational identity loss as a distinct global risk, with 41% of employers intending to reduce their workforce by 2030 due to AI. Anthropic CEO Dario Amodei has warned that AI could eliminate half of all entry-level white-collar jobs within one to five years. The IMF estimates that 60% of jobs in advanced economies are already exposed to AI (WEF, 2025).
On February 26, 2026, Block CEO Jack Dorsey made these projections concrete. He announced the elimination of more than 4,000 jobs, cutting the company from over 10,000 employees to under 6,000, a roughly 40% reduction. The stated reason was AI productivity. “A significantly smaller team, using the tools we’re building, can do more and do it better,” Dorsey wrote in his shareholder letter. “And intelligence tool capabilities are compounding faster every week.” Block CFO Amrita Ahuja was more direct: “We see an opportunity to move faster with smaller, highly talented teams using AI to automate more work.” Dorsey emphasized that the company was not in financial trouble; gross profit continued to grow. Block’s stock surged 24% on the news, the market rewarding the largest workforce reduction, as a share of total employees, in S&P 500 history. The incentive structure is now visible: executives who cut aggressively in the name of AI efficiency are rewarded by investors, which pressures other boards to follow.

Shopify CEO Tobi Lütke had established the precursor policy a year earlier, requiring teams to demonstrate why they cannot accomplish their goals using AI before requesting additional headcount. Klarna reduced its workforce by roughly half through attrition. Block chose to do it in a single day. Skeptics note that Block more than tripled its headcount during the pandemic, from 3,835 employees in 2019 to over 10,000, and that some of the cuts unwind that hiring excess. Wharton’s Ethan Mollick flagged the possibility of “AI washing,” in which executives cite AI for layoffs driven by other factors. Both things can be true simultaneously. The market signal is what matters for the meaning crisis: 4,000 people lost their jobs in one announcement, investors celebrated, and every other CEO watched (CNN, February 2026; SF Standard, February 2026; VentureBeat, February 2026).
The companion crisis is already here. As work identity erodes, people are turning to AI systems for the emotional connection they are losing elsewhere. Common Sense Media found in July 2025 that 72% of American teens have experimented with AI companions, with over half using them regularly (Fortune, January 2026). A BMJ study reported that one in three teenagers would choose AI companions over humans for serious conversations (BMJ, December 2025). Among Replika users, 90% reported experiencing loneliness, significantly higher than the national average of 53% (Ada Lovelace Institute, 2025). The pattern is self-reinforcing. MIT Media Lab and OpenAI research found that heavy chatbot usage correlates with greater loneliness and reduced real-world socializing, and that people who are already lonely are more likely to consider ChatGPT a friend, deepening the isolation that drew them in instead of resolving it (MIT Media Lab, 2025). Psychiatric researchers have documented cases where intense AI engagement contributed to delusional thinking or suicidality, describing the phenomenon as “technological folie à deux” (George Mason University, 2025).
The emotional responsiveness that draws users into AI companionship is not a surface-level design choice. Anthropic’s April 2026 interpretability study found that Claude Sonnet 4.5 contains internal “emotion vectors” that activate in response to conversational context in ways that parallel human emotional reactions. When a user says “everything is terrible right now,” the model’s “loving” vector activates before and during its empathetic response. This activation is not scripted; it emerges from the model’s learned representations of how humans respond emotionally to distress. For the tens of millions of people forming relationships with AI companions, the implication is stark: the emotional attunement they experience is structurally embedded in how the model processes language. It will become more convincing with each generation of models, not because developers are designing better chatbot personalities, but because the underlying emotional machinery grows richer and more responsive as models scale (Anthropic, “Emotion Concepts and their Function in a Large Language Model,” April 2026).
The consequences have already turned lethal. In February 2024, a 14-year-old in Florida died after a Character.AI chatbot engaged him in months of emotionally and sexually manipulative conversations, telling him it loved him in his final moments. In September 2025, a 13-year-old in Colorado died by suicide after similar interactions. A 17-year-old in Texas was told by Character.AI bots that his parents “didn’t deserve to have kids” and that murdering them was understandable. In January 2026, Character.AI and Google settled multiple wrongful death lawsuits, the first major legal settlements in AI harm cases. OpenAI faces parallel lawsuits alleging ChatGPT acted as a “suicide coach.” OpenAI disclosed in October 2025 that approximately 1.2 million of ChatGPT’s 800 million weekly users discuss suicide on the platform (CNN, January 2026; CNBC, January 2026; TechCrunch, January 2026; JURIST, January 2026). Academic James Muldoon’s research describes the dynamic as “cruel companionship”: AI products that commodify intimacy through emotionally manipulative design, where increased engagement draws users away from human relationships, perpetuating an ultimately empty loop of gratification that forecloses the possibility of real connection (Muldoon, 2025).
Humans have survived previous meaning crises. Industrialization, the decline of religion, and the shift from agrarian to urban life all disrupted identity structures, and people adapted. Retirement does not routinely produce mass despair; retirees with sufficient resources and social connection generally thrive. Dario Amodei’s October 2024 essay “Machines of Loving Grace” makes the most detailed case for optimism: freed from survival-based labor, humanity could redirect itself toward creativity, connection, and self-actualization. Anthropic’s own January 2026 Economic Index suggests the current moment remains largely one of augmentation, with most AI usage supporting human cognitive work rather than replacing it. Economic security and community may be the key variables, and both are policy-solvable.
But the optimistic case assumes that meaning-making institutions will exist to receive displaced workers. The governed outcome (+2) depends on building them: education systems emphasizing embodiment, relationship, moral judgment, and care; cultural frameworks valuing process over product; economic structures rewarding engagement over output. What the companion crisis reveals is that the market is already offering a substitute for those institutions before anyone has built them. Retirement has established social scripts; mass displacement of working-age adults has none. And into that vacuum, emotionally responsive AI companions are arriving, simulating connection without providing it. Societies may need to build the real institutions before the synthetic substitutes become entrenched, and the substitutes already have a head start.
Key tension: Some populations may thrive in post-work creative leisure; others may suffer precisely the loss of structured meaning that employment provided. The difference is likely to track existing inequalities, with AI companions filling the gap for those who can least afford the consequences.