Shade 10 ~75%

The Ecological Reckoning

Tier 2: Highly Probable

Unmanaged -3
Governed +3
Dividend +6

Global data center electricity consumption reached approximately 415 terawatt-hours (TWh) in 2024, about 1.5% of global electricity demand, having grown roughly 12% annually over the previous five years. The IEA projects this will more than double to 945 TWh by 2030, roughly equivalent to Japan’s entire electricity consumption. AI is the primary driver: electricity demand from AI-optimized data centers is projected to more than quadruple by 2030. In the United States, data centers are on course to account for nearly half of electricity demand growth between now and 2030. The U.S. economy is set to consume more electricity for processing data in 2030 than for manufacturing all energy-intensive goods combined, including aluminum, steel, and chemicals (IEA, Energy and AI, 2025; Pew Research, 2025).
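The arithmetic behind that projection is worth making explicit: it implies growth accelerating beyond the historical pace. A quick check, using only the figures quoted above (the implied rate is derived here, not taken from the IEA report):

```python
# Sanity check on the trajectory quoted above: 415 TWh (2024) to 945 TWh
# (2030). The implied compound annual growth rate is derived arithmetic,
# not a figure published by the IEA.

BASE_TWH = 415.0       # 2024 consumption
PROJECTED_TWH = 945.0  # IEA projection for 2030
YEARS = 2030 - 2024

implied_cagr = (PROJECTED_TWH / BASE_TWH) ** (1 / YEARS) - 1
print(f"implied growth rate: {implied_cagr:.1%} per year")  # ~14.7%

# For contrast, extending the ~12%/year historical pace falls short:
print(f"at 12%/year: {BASE_TWH * 1.12 ** YEARS:.0f} TWh by 2030")  # ~819 TWh
```

In other words, the projection assumes the growth rate itself rises, consistent with AI-optimized facilities becoming the marginal driver.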

The concentration creates localized strain. In Virginia, data centers already consume 26% of the state’s electricity; in Dublin, the figure is 79%. A Carnegie Mellon study estimates that data centers and cryptocurrency mining could raise the average U.S. electricity bill 8% by 2030, with increases potentially exceeding 25% in northern Virginia. The water footprint is equally severe: data centers require enormous volumes for cooling, and in drought-prone regions the competition with residential and agricultural use is already producing political conflict. A Google data center in The Dalles, Oregon, consumed over a quarter of the city’s water supply. Goldman Sachs forecasts data center power demand to grow by about 50% to 92 GW by 2027, with U.S. construction spending having tripled in three years (Goldman Sachs, 2025; Carbon Brief, 2025).

The scale argument pushes back hard. The IEA estimates data center emissions will reach only about 1% of global CO2 emissions by 2030, and the projected growth in data center electricity demand (530 TWh) is smaller than the growth expected from electric vehicles (838 TWh) or air conditioning (651 TWh). AI is also the most powerful tool available for climate optimization: modeling climate systems, discovering materials for energy storage, optimizing power grids, monitoring deforestation. Efficiency gains are real: between 2015 and 2019, data center workloads tripled while power consumption stayed roughly flat.

The industry’s rhetorical strategy for managing ecological criticism is revealing. At the India AI Impact Summit in February 2026, OpenAI CEO Sam Altman called comparisons between AI training costs and human query costs “always unfair,” then offered a reframing: “It also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.” He extended the analogy to the cumulative energy cost of human evolution: “the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you.” Measured against that baseline, Altman argued, “probably AI has already caught up on an energy efficiency basis” (CNBC, 2026; TechCrunch, 2026). In the same interview, he dismissed concerns about AI water consumption as “completely untrue, totally insane.” Zoho co-founder Sridhar Vembu, present at the summit, responded directly: “I do not want to see a world where we equate a piece of technology to a human being” (CNBC, 2026).

Paris Marx, writing in Disconnect, identified the deeper logic: when the CEO of the world’s most prominent AI company reduces human development to an energy input, he is revealing a worldview in which human life has no inherent value beyond its computational output. “Human life is downgraded to be equivalent to a machine,” Marx wrote, “and thus has none of the inherent value we tend to associate with it” (Disconnect, 2026). Tom’s Hardware noted the circular reasoning: by the same logic Altman applied to humans, the AI industry’s energy costs should include the full prior history of human science, engineering, and computing that made AI possible (Tom’s Hardware, 2026). The episode matters less for what it says about energy accounting than for what it reveals about how the industry’s leaders think about ecological costs. The framing is comparative deflection: every question about AI’s environmental cost gets redirected into a question about why the questioner is measuring AI instead of something else. This is the rhetorical architecture of externalization.

The industry’s proposed solution to data center energy constraints reveals the same logic operating at planetary scale. Google’s Project Suncatcher, announced in November 2025, envisions constellations of solar-powered satellites carrying TPU chips in low Earth orbit, with two prototype satellites planned for early 2027 in partnership with Planet Labs (Google Research, 2025; SpaceNews, 2025). Google CEO Sundar Pichai declared that orbital data centers would be “a more normal way to build data centers” within a decade (Fortune, 2025). SpaceX filed plans with the FCC in January 2026 for millions of satellites with integrated compute capability. In February 2026, Nvidia-backed Starcloud submitted an FCC proposal for a constellation of up to 88,000 satellites for orbital data centers (Wikipedia/FCC filings, 2026); the startup had already deployed an H100 GPU to orbit and run Google’s Gemma LLM in space in late 2025 (CNBC, 2025).

The environmental logic is circular: the industry generates an energy problem on Earth, then proposes to solve it by industrializing orbit. ESA’s 2025 Space Environment Report found that even without additional launches, the debris population is already growing faster than atmospheric drag can remove it. A 2025 Nature Sustainability study calculated that greenhouse gas emissions could reduce low Earth orbit’s satellite carrying capacity by 50 to 66 percent by 2100, meaning the AI industry’s terrestrial emissions are simultaneously degrading the orbital environment it proposes to colonize (Nature Sustainability, 2025). The UNEP flagged satellite reentry pollutants, including aluminum oxides that damage stratospheric ozone, as an “emerging issue,” with reentries already exceeding three intact satellites or rocket bodies per day (Space.com, 2025; Mongabay, 2025). University of Texas astrophysicist Moriba Jah summarized the orbital logic: “Right now, every single object that we launch into orbit is the equivalent of a single-use plastic” (IEEE Spectrum, 2025).

The question the space data center proposals raise is whether centralized cloud infrastructure is even the right architecture for AI inference. A January 2025 arXiv paper by Siavash Alamouti, “Quantifying Energy and Cost Benefits of Hybrid Edge Cloud,” found that hybrid edge-cloud processing for AI workloads could achieve energy savings of up to 75% and cost reductions exceeding 80% compared with pure cloud processing (InfoWorld, 2026). Apple’s on-device foundation model runs at approximately 3 billion parameters, compressed to 2 bits per weight, performing inference locally on consumer hardware with no cloud dependency (Apple Machine Learning Research, 2025). IDC projects that by 2027, 80% of CIOs will turn to edge services to meet AI inference demands. Trend Micro’s January 2026 analysis described the current moment as an “LLM bubble” driven by inefficient scaling, observing that “using a GPT-5 class model for every task is like hiring a Nobel Prize-winning physicist to do your data entry” (byteiota, 2026). The data supports the analogy: hybrid architectures that route 90 to 95 percent of queries to edge small language models, reserving the remaining 5 to 10 percent for cloud LLMs, can substitute a one-time $6,000 GPU investment for $8,808 in annual cloud fees. AT&T’s Chief Data Officer confirmed that “fine-tuned SLMs will become a staple used by mature AI enterprises in 2026, as cost and performance advantages drive usage over out-of-the-box LLMs.” The ACM’s May 2025 survey on generative AI at the edge, published in ACM Queue, projected over 50 billion edge devices by 2030 and documented the emerging shift from cloud-centric AI toward distributed, on-device intelligence (ACM Queue, 2025).
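The break-even arithmetic behind that cost comparison is straightforward. Below is a minimal sketch using the two figures quoted above ($6,000 one-time hardware, $8,808 per year in cloud fees); the 90% edge share and the assumption that residual cloud spend scales linearly with the queries still sent upstream are illustrative, not taken from the cited analyses:

```python
# Back-of-the-envelope comparison: one-time local GPU purchase versus
# all-cloud inference fees. Only GPU_CAPEX and CLOUD_FEES_PER_YEAR come
# from the text; EDGE_SHARE and the linear-scaling assumption are
# illustrative placeholders.

GPU_CAPEX = 6_000.00            # one-time local inference hardware ($)
CLOUD_FEES_PER_YEAR = 8_808.00  # all-cloud baseline ($/year)
EDGE_SHARE = 0.90               # fraction of queries served locally (assumed)

def cloud_only_cost(years: float) -> float:
    """Cumulative cost of keeping every query in the cloud."""
    return CLOUD_FEES_PER_YEAR * years

def hybrid_cost(years: float) -> float:
    """Cumulative cost of buying the GPU and sending only the residual
    share of queries to cloud LLMs (assumed to scale linearly)."""
    residual = (1.0 - EDGE_SHARE) * CLOUD_FEES_PER_YEAR
    return GPU_CAPEX + residual * years

# Break-even: capex divided by the cloud spend avoided each year.
breakeven_years = GPU_CAPEX / (EDGE_SHARE * CLOUD_FEES_PER_YEAR)
print(f"break-even after {breakeven_years * 12:.1f} months")  # ~9.1 months

for y in (1, 2, 3):
    print(f"year {y}: cloud ${cloud_only_cost(y):,.0f} "
          f"vs hybrid ${hybrid_cost(y):,.0f}")
```

On these numbers the hardware pays for itself in under a year, and every year thereafter the hybrid path costs roughly a tenth of the all-cloud baseline.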

The counterargument is real: training frontier models still requires centralized compute at enormous scale, and some inference tasks do need the capacity of large cloud-hosted models. The hybrid consensus reflects this. No serious analyst argues that edge alone can replace cloud infrastructure entirely. The question is proportionality. The current architecture routes the vast majority of AI queries through centralized data centers whose energy requirements then justify nuclear buildouts, water diversion, and now orbital industrialization. If 90 percent of those queries could be handled by local models running on existing hardware, the entire energy calculus changes. The fact that the industry’s business model depends on per-query subscription revenue from cloud-hosted models creates a structural incentive to resist that shift, regardless of what the engineering supports. Cloud made technical sense for traditional software, where the application logic lived on the server. For AI inference, where the trained model can be compressed and distributed, the technical case for centralization is weaker than the commercial one.
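To make the proportionality point concrete, here is a sketch of how the aggregate inference-energy calculus shifts as queries move to the edge. The per-query energy ratio is a placeholder assumption, calibrated so that full offload matches the up-to-75% savings reported in the Alamouti paper; it is not a measured value:

```python
# Sensitivity check for the proportionality argument: what fraction of the
# all-cloud inference energy baseline remains as queries shift to local
# models? EDGE_ENERGY_RATIO is an assumption calibrated to the ~75%
# savings ceiling cited earlier, not a measurement.

EDGE_ENERGY_RATIO = 0.25  # edge energy per query, relative to cloud (assumed)

def remaining_energy_share(edge_share: float) -> float:
    """Total inference energy (cloud + edge) as a fraction of the
    all-cloud baseline, counting edge energy rather than ignoring it."""
    cloud_part = 1.0 - edge_share               # queries still in data centers
    edge_part = edge_share * EDGE_ENERGY_RATIO  # energy moved onto devices
    return cloud_part + edge_part

for share in (0.0, 0.5, 0.9, 0.95):
    print(f"edge share {share:.0%}: "
          f"{remaining_energy_share(share):.0%} of all-cloud baseline")
# At a 90% edge share, total energy falls to ~32% of baseline, and the
# data center portion alone (cloud_part) drops to 10%.
```

The grid-facing number is the cloud_part term: at a 90% edge share, data center inference load falls to a tenth of the baseline regardless of what the devices themselves consume, and it is the data center load that drives the nuclear buildouts, water diversion, and orbital proposals described above.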

The governed outcome (+3) reflects the potential for regulation to force the industry toward renewable energy, water efficiency, and algorithmic optimization while preserving AI’s environmental benefits. The unmanaged outcome (-3) reflects the alternative: exponential growth in energy demand, predominantly fossil-fuel-generated, in the countries where data centers are most concentrated.

Key tension: AI could be the decisive tool for solving climate change or the force that makes it materially harder to solve. The difference depends on whether the industry’s environmental costs are internalized or externalized.