The Thermodynamic Wall: The Collision of AI Scaling Laws and Physical Infrastructure
Executive Summary
The era of unconstrained AI scaling - subsidized by excess grid capacity and ambient air cooling - is over. The bottlenecks to intelligence are no longer algorithmic or silicon-based. They're thermodynamic and geological: not enough electrons, not enough capacity to reject heat, not enough transmission density. This is the "Thermodynamic Wall," and it will define who wins the AI race over the next decade.
By 2030, the largest AI training runs will demand 4-10 gigawatts of power - multiple nuclear power stations' worth - while inference workloads will rival the industrial consumption of entire nations.[^1] Meanwhile, the US electrical grid has interconnection queues averaging five years, and the manufacturing base can barely produce the HALEU fuel needed for next-generation nuclear reactors.[^2] The wall is formidable, but it's also a forcing function. It's pushing the industry toward liquid cooling, on-site generation, optical interconnects, and software-defined power. The winners won't be the companies with the most GPUs. They'll be the ones who solve the physics.[^4]
Section 1: The Mechanics of the Wall - Energy, Entropy, and AI Scaling
The demand for compute isn't just growing - it's undergoing a phase transition. The shift from analytical AI to generative and reasoning models has decoupled value creation from energy efficiency. The marginal cost of intelligence is becoming energetically unsustainable.
1.1 The Collision of Laws: Moore, Koomey, and Scaling
For decades, Moore’s Law (transistor density doubling) and Koomey’s Law (computations per joule doubling every 1.57 years) worked in tandem. You got exponential performance without an explosion in energy consumption. LLMs broke that equilibrium. Training compute doubles roughly every six months, far outstripping hardware efficiency gains.[^5] The numbers are stark: training GPT-3 took about 1.29 GWh; GPT-4 consumed over 50 GWh - a 40x increase in one generation.[^6] This isn't just scaling existing workloads. It's a fundamental change in the metabolic rate of digital cognition.

The physical manifestation is power density. Traditional racks run at 7-10 kW. AI racks with H100 or Blackwell GPUs demand 40-100+ kW.[^6] A tenfold increase. This breaks thirty years of air-cooling assumptions: air simply can't carry away the waste heat from 100 kW of silicon in a 20-square-foot footprint. The Thermodynamic Wall is, in part, a heat rejection crisis.

The "Silvicultural Architecture of Cognition" frames this starkly: we're approaching an "Energy Wall" where the marginal cost of additional intelligence exceeds the value produced.[^4] Simulating human brain activity with current silicon would require billions of watts - $10^9$ times more than the biological brain's 20 watts.[^4] That gap is the clearest argument for biomimetic and neuromorphic architectures over brute-force scaling.
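The gap between the two growth rates compounds quickly. A minimal sketch, using only the doubling periods cited above (6-month compute doubling, 1.57-year efficiency doubling) - the projected factors are illustrative arithmetic, not sourced forecasts:

```python
# Why compute growth outruns efficiency gains: training compute doubles
# roughly every 6 months, while computations-per-joule (Koomey's Law)
# historically doubled only every ~1.57 years.

def doublings(years: float, doubling_period_years: float) -> float:
    """Number of doublings that occur in a span of years."""
    return years / doubling_period_years

def net_energy_growth(years: float) -> float:
    """Factor by which training energy grows: compute growth / efficiency growth."""
    compute = 2 ** doublings(years, 0.5)      # compute doubles every 6 months
    efficiency = 2 ** doublings(years, 1.57)  # joules/op halve every 1.57 years
    return compute / efficiency

for yrs in (1, 3, 5):
    print(f"after {yrs} yr(s): energy per frontier run grows ~{net_energy_growth(yrs):,.0f}x")
```

Even with efficiency improving on its historical trend, the energy per frontier run grows by two orders of magnitude within five years - the quantitative core of the Wall.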
1.2 The Bifurcation of Energy: Training vs. Inference
Training and inference impose fundamentally different stresses on the grid. Most energy analyses blur them together. That's a mistake - they need distinct infrastructure strategies.
1.2.1 Training: The Gigawatt Spikes
Training frontier models requires massive, synchronous compute clusters. These workloads are geographically flexible but energy-intensive on a monolithic scale. They represent the "factories" of the AI age.
- Magnitude: By 2030, a single frontier training run is projected to require a dedicated power capacity of 4-10 GW.[^1] This scale exceeds the generation capacity of most individual power plants, necessitating connections to high-voltage transmission backbones or dedicated reactor clusters.
- Geography: Latency is irrelevant during training (the model isn't live yet), so these "AI Factories" can sit in remote areas with cheap power - deserts with solar arrays, regions with excess hydro, or co-located with nuclear plants. Land and power are cheaper even if network latency to population centers is high.
- Grid Impact: Training clusters are massive, steady baseload consumers - near 100% utilization for months. Unlike residential or commercial loads that fluctuate throughout the day, this flat profile is actually attractive to utilities as stable revenue, if the transmission infrastructure exists to deliver the gigawatts.
1.2.2 Inference: The Distributed Flood
Inference - querying the model - is where the Wall becomes pervasive and hard to manage.
- Magnitude: A single ChatGPT query consumes ~2.9 Wh - roughly ten times a standard Google search (0.3 Wh).[^6] For "agentic" workflows where the AI loops, reasons, and reflects before answering, the cost spikes 13x to over 4 Wh per query.[^8]
- Geography: Inference is latency-sensitive. It must occur closer to the user, in metro-edge data centers where power is most constrained and expensive. A user interacting with an AI agent expects near-instantaneous responses; thus, the compute cannot sit in a remote desert but must reside in Northern Virginia, Silicon Valley, or Frankfurt.
- The "Dreaming" Load: AI systems may soon move beyond static query-response to continuous "dreaming" or consolidation phases. Pattern Computer's (PCM) architecture suggests systems could consolidate memories and optimize geometry during downtime, like biological sleep.[^9] If this happens, inference flips from bursty daytime load to continuous 24/7 demand - eliminating the power consumption valleys that utilities rely on for grid balancing.
- Forecast: Training grabs headlines, but inference is the long-term energy driver. By 2030, inference will likely surpass training in total energy consumption as models integrate into billions of edge devices and enterprise workflows.[^10]

Table 1: The Energy Profile of AI Workloads (2025-2030 Forecast)
| Metric | AI Training | AI Inference |
|---|---|---|
| Primary Constraint | Total Generation Capacity (GW) | Latency & Local Grid Capacity |
| Power Density | Extreme (100kW+ per rack) | High to Moderate (20-50kW) |
| Geographic Flexibility | High (Can be remote) | Low (Must be near users) |
| Energy Behavior | Constant, massive load for months | Bursty, diurnal (shifting to continuous with "dreaming" agents) |
| 2030 Energy Share | ~40% of AI Energy | ~60% of AI Energy [^10] |
| Grid Interaction | Transmission-level connection | Distribution-level / Metro-edge connection |
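The per-query figures above translate into grid-scale numbers fast. A back-of-envelope sketch using the cited per-query energies; the daily query volume is a hypothetical round number, not a sourced estimate:

```python
# Fleet-scale inference energy from per-query figures.
# 2.9 Wh/query and ~4 Wh/agentic-task are the figures cited in the text;
# the 1 billion queries/day volume is an illustrative assumption.

WH_PER_QUERY = 2.9          # single ChatGPT-style query [^6]
WH_PER_AGENTIC_TASK = 4.0   # agentic workflow, lower bound [^8]

def annual_gwh(queries_per_day: float, wh_per_query: float) -> float:
    """Annual energy in GWh for a given daily query volume."""
    wh_per_year = queries_per_day * wh_per_query * 365
    return wh_per_year / 1e9  # Wh -> GWh

print(f"simple:  {annual_gwh(1e9, WH_PER_QUERY):,.0f} GWh/yr")
print(f"agentic: {annual_gwh(1e9, WH_PER_AGENTIC_TASK):,.0f} GWh/yr")
```

At a billion queries a day, simple inference alone approaches the energy of a GPT-3-scale training run every ten hours - and it must be delivered at the metro edge, not in a remote desert.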
1.3 The Heat Rejection Limit
Every watt consumed by a processor becomes heat. A 100 MW data center is a 100 MW heater. PUE measures the overhead energy spent rejecting that heat, but it doesn't change the physics of heat transfer density.

Air fails above 30-40 kW per rack. Its specific heat capacity is low ($C_p \approx 1.005$ J/g·K). Cooling a 100 kW rack with air requires fans running so fast they consume excessive parasitic power and generate acoustic vibrations that can damage hard drives.[^11] The required temperature difference becomes unmanageable without inlet temperatures low enough to cause condensation.

This forces a migration to liquid cooling. Water has roughly 4x the specific heat capacity of air ($C_p \approx 4.18$ J/g·K) and about 24x its thermal conductivity.
- Direct-to-Chip (DTC): Cold plates sit directly on the GPU/CPU. This captures ~70-80% of the heat, with the remainder removed by air. This is the current standard for hyperscalers.[^12] It allows for "warm water cooling," where inlet temperatures can be 40°C+, significantly increasing the efficiency of dry coolers and reducing the need for mechanical chillers.
- Immersion Cooling: Submerging the entire server in dielectric fluid. This captures nearly 100% of the heat and eliminates fans, reducing server power consumption by 10-15%.[^13] While thermodynamically superior, it faces adoption hurdles due to the messy nature of servicing liquid-submerged hardware and the cost of dielectric fluids.
- The Market Shift: The liquid cooling market is projected to grow at 20%+ CAGR through 2030, driven almost entirely by AI density.[^11] This isn't a preference. Air cooling simply cannot support H100 and B200 generation chips.
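The first law makes the air-vs-liquid comparison concrete. A sizing sketch for a 100 kW rack using $Q = \dot{m} \, C_p \, \Delta T$; the delta-T choices (15 K for air, 10 K for water) are typical engineering assumptions, not figures from the text:

```python
# First-law flow sizing for a 100 kW rack: Q = m_dot * Cp * dT.
# Delta-T values are assumed typical operating points.

RACK_W = 100_000.0   # 100 kW of heat to reject
CP_AIR = 1005.0      # J/(kg*K)
CP_WATER = 4184.0    # J/(kg*K)
RHO_AIR = 1.2        # kg/m^3
RHO_WATER = 1000.0   # kg/m^3

def mass_flow_kg_s(q_watts: float, cp: float, delta_t: float) -> float:
    """Mass flow required to carry q_watts at a given temperature rise."""
    return q_watts / (cp * delta_t)

air_kg_s = mass_flow_kg_s(RACK_W, CP_AIR, 15.0)
water_kg_s = mass_flow_kg_s(RACK_W, CP_WATER, 10.0)

air_cfm = (air_kg_s / RHO_AIR) * 2118.88          # m^3/s -> CFM
water_lpm = (water_kg_s / RHO_WATER) * 1000 * 60  # m^3/s -> L/min

print(f"air:   {air_cfm:,.0f} CFM at dT=15K")    # tens of thousands of CFM
print(f"water: {water_lpm:,.0f} L/min at dT=10K")  # a garden-hose-scale flow
```

Roughly 12,000 CFM of air versus about 143 L/min of water for the same heat load - the physical reason air cooling collapses at AI rack densities.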
Section 2: The Physical Constraint - The Grid and "Time-to-Power"
Heat can be engineered around. Power has to come from somewhere. And the US electrical grid is failing to keep pace.
2.1 The Interconnection Queue Backlog
The interconnection queue - the waiting list for new power generation and large loads to connect to the grid - is the primary bottleneck for data center development. As of late 2024: over 2,600 GW of projects waiting, more than twice the country's installed capacity.[^3]
- Wait Times: The average time from interconnection request to commercial operation has ballooned from under 2 years in 2008 to over 5 years in 2024. In PJM (which covers Northern Virginia, the data center capital of the world), projects face energization dates stretching to 2028 or 2030.
- Attrition: The queue is clogged with speculative projects. Only ~19% of projects entering queues between 2000 and 2018 reached commercial operation.[^3] Every dropout forces a re-study of all remaining projects, cascading the delays further.
- The Data Center Impact: Data centers now dominate load growth forecasts. In four years, the five-year demand growth forecast increased sixfold.[^15] Utilities are imposing connection pauses, pushing developers toward alternative sites or off-grid power.
2.2 "Time-to-Power" as the New Currency
For Microsoft, Amazon, Google, and Meta, time-to-power has replaced cost as the key metric. Delaying an AI cluster by two years means billions in lost market share in the race to AGI.
- The Premium: Operators will pay a 50% premium for fast-deploying power solutions that bypass the utility queue.[^16] Price per kWh used to drive site selection. Now the date of energization does.
- The Pivot to On-Site Generation: The grid is too slow, so operators are going off-grid or hybrid. By 2030, an estimated 30% of data centers will use on-site power as their primary source.[^17] This is a fundamental fracture in the utility model - large industrial customers defecting from the centralized grid to keep running.
- Natural Gas Bridge: The immediate beneficiary is natural gas. Bloom Energy fuel cells and gas turbines deploy in 12-18 months, versus 5-8 years for transmission upgrades. Microsoft is piloting data centers powered directly by gas fuel cells to bypass transmission losses and delays.[^18] Google is funding gas plants with carbon capture to maintain 24/7 firm power while meeting climate commitments.[^19] Call it "Island Mode" - a pragmatic capitulation to grid inertia.
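The economics of the premium are easy to see once you put numbers on the delay. All figures in this sketch are hypothetical assumptions (cluster size, power prices, revenue per MW, delay length) chosen only to show why the premium clears:

```python
# Hypothetical arithmetic behind the time-to-power premium.
# Every constant below is an illustrative assumption, not a sourced figure.

CLUSTER_MW = 100.0                 # assumed AI cluster size
GRID_PRICE_PER_MWH = 60.0          # assumed utility price
PREMIUM_PRICE_PER_MWH = 90.0       # 50% premium for fast on-site power
HOURS_PER_YEAR = 8760
REVENUE_PER_MW_YEAR = 5_000_000.0  # assumed AI revenue per MW of capacity
DELAY_YEARS = 2.0                  # assumed queue delay avoided

# Cost of paying the premium for the whole delay period
extra_energy_cost = (CLUSTER_MW * HOURS_PER_YEAR * DELAY_YEARS
                     * (PREMIUM_PRICE_PER_MWH - GRID_PRICE_PER_MWH))
# Revenue foregone by waiting in the interconnection queue instead
foregone_revenue = CLUSTER_MW * REVENUE_PER_MW_YEAR * DELAY_YEARS

print(f"extra power cost over {DELAY_YEARS:.0f} yrs: ${extra_energy_cost/1e6:,.0f}M")
print(f"revenue lost waiting:           ${foregone_revenue/1e6:,.0f}M")
```

Under these assumptions the premium costs tens of millions while the delay costs on the order of a billion - which is why energization date, not price per kWh, now drives site selection.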
2.3 The "Stranded Power" Paradox
Here's the irony: despite the shortage, a huge amount of power in existing data centers sits unused. Provisioned but stranded - a byproduct of conservative engineering and legacy infrastructure that can't adapt to dynamic loads.
- The Buffer: Operators over-provision for reliability, allocating nameplate capacity to racks that rarely hit 100% utilization. Result: 40-50% of capacity sits idle as a safety buffer.[^20]
- The Opportunity: This inefficiency has given rise to Software Defined Power (SDP). Technologies like Virtual Power Systems' (VPS) Intelligent Control of Energy (ICE) use machine learning to dynamically allocate power, allowing operators to "oversubscribe" their infrastructure safely. By identifying this stranded capacity, SDP can unlock 30-50% more compute density within the same physical power envelope.[^21]
- Mechanism: SDP creates a virtualization layer for power - think VMware for electrons. It lets operators define power priorities, shedding non-essential workloads (batch processing, dev environments) during peak spikes so AI inference keeps running. This elasticity is how you handle bursty inference loads without building more substations.
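The priority-shedding mechanism can be sketched in a few lines. This is a toy illustration of the concept, not VPS's actual ICE algorithm; workload names and capacities are hypothetical:

```python
# Toy priority-based power shedding: drop low-priority workloads until the
# total draw fits under the rack's power cap (the SDP "oversubscription" idea).

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    draw_kw: float
    priority: int  # lower number = shed first

def shed_to_cap(workloads: list[Workload], cap_kw: float) -> list[Workload]:
    """Keep workloads highest-priority-first while total draw stays under cap."""
    kept: list[Workload] = []
    total = 0.0
    for w in sorted(workloads, key=lambda w: w.priority, reverse=True):
        if total + w.draw_kw <= cap_kw:
            kept.append(w)
            total += w.draw_kw
    return kept

loads = [
    Workload("ai-inference", 60.0, priority=3),
    Workload("batch-etl", 25.0, priority=1),
    Workload("dev-sandbox", 15.0, priority=2),
]
kept = shed_to_cap(loads, cap_kw=80.0)
print([w.name for w in kept])  # ['ai-inference', 'dev-sandbox'] - batch is shed
```

The rack can be "oversubscribed" at 100 kW of nameplate demand against an 80 kW cap, because the control layer guarantees the cap is never exceeded - that guarantee is what frees the stranded safety buffer.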
Section 3: The Energy Source - Nuclear Dreams vs. Geological Reality
The industry's long-term answer to the Thermodynamic Wall is nuclear - specifically Small Modular Reactors co-located with data centers. The vision is elegant. The reality is not.
3.1 The SMR Promise and Corporate Bets
SMRs promise factory-fabricated nuclear power with shorter deployment times. Tech giants are betting big, trying to signal enough demand to kickstart the supply chain:
- Google: Partnered with Kairos Power to deploy 500 MW of molten salt reactors by 2035.[^23]
- Amazon (AWS): Investing in X-energy for deployment in Washington state and purchasing a nuclear-powered data center campus from Talen Energy.[^23]
- Microsoft: Restarting Three Mile Island Unit 1 to give dedicated baseload power to its AI operations.[^23]
3.2 The Reality Check: NuScale and Economics
NuScale's "Carbon Free Power Project" collapsed in late 2023. This matters because NuScale was the frontrunner - the only SMR design with NRC approval.
- The Failure: The project was cancelled because too few customers (municipal utilities) signed up to buy the power. The target price for power rose from $58/MWh to $89/MWh, making it uncompetitive with wind, solar, and gas.[^25]
- Root Causes: Rising commodity prices (steel, concrete) and high interest rates, which punish capital-intensive nuclear projects. SMRs lose the efficiency of large reactors but still carry heavy regulatory and security overhead.[^26] And the "modular" promise of factory learning curves? No factory exists yet.
3.3 The Fuel Wall: The HALEU Shortage
The most under-discussed constraint is the fuel. Most advanced SMR designs - X-energy, TerraPower - need High-Assay Low-Enriched Uranium (HALEU), enriched to 5-20% U-235. Standard reactors use LEU at 3-5%.
- The Monopoly: Russia (via Tenex) was the world’s only commercial HALEU supplier. The Ukraine invasion severed that supply chain. Western SMR developers have no fuel source.[^2]
- Domestic Gap: The US has almost no commercial HALEU capacity. Centrus Energy, the sole US licensee, began pilot production in late 2023 at Piketon, Ohio. Output in 2024: 900 kg. Projected DOE demand by 2030: 40,000 kg/year.
- Implication: No fuel, no SMRs. Building a domestic enrichment supply chain takes years of licensing and billions in capital. This pushes realistic widespread SMR adoption past 2030, likely into 2035-2040.[^28] SMRs aren’t a solution for today’s Thermodynamic Wall. They’re a solution for the next cycle.
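The scale of the fuel gap is clear from the two figures above. A one-line compounding calculation - the 2024-to-2030 window is taken from the text; the assumption of smooth exponential scaling is mine:

```python
# Required growth of domestic HALEU output: 900 kg produced in 2024 vs
# ~40,000 kg/yr of projected DOE demand by 2030 (figures from the text).
# Assumes smooth compound growth, which is an illustrative simplification.

START_KG = 900.0
TARGET_KG = 40_000.0
YEARS = 6  # 2024 -> 2030

required_growth = (TARGET_KG / START_KG) ** (1 / YEARS) - 1
print(f"required output growth: ~{required_growth:.0%} per year, compounded")
```

Output would have to nearly double every year for six straight years - a rate no enrichment supply chain has ever sustained, which is why the text pushes realistic SMR timelines past 2030.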
Section 4: The Fast Moves - Infrastructure Asymmetry
Grid upgrades and nuclear deployments take 5-10 years. The industry needs wins on a shorter timescale. That means optimizing the layers between the grid and the chip - finding places where technology can move faster than concrete and physics.
4.1 The Interconnect Bottleneck: Co-Packaged Optics (CPO)
At 100,000+ GPU clusters, the network becomes the computer. And moving data between chips eats an increasing fraction of the total power budget.
- The Problem: Traditional pluggable optical modules (transceivers) are hitting an efficiency wall. As speeds increase to 800G and 1.6T, the electrical energy required just to move data from the switch ASIC to the front panel (the "SerDes" power) becomes unsustainable.[^29] Pluggable optics consume ~15-20 pJ/bit.
- The Solution: Co-Packaged Optics (CPO) moves the optical engine directly onto the same package as the switch ASIC, replacing copper traces with light.
- Impact: CPO cuts power consumption by over 50% (to <5 pJ/bit) and eliminates power-hungry re-timers.[^29]
- Adoption: Broadcom and Nvidia are moving to CPO for next-gen AI switches (51.2 Tbps and beyond). The math is compelling: every watt saved on networking is a watt redirected to computation.[^31]
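The pJ/bit figures cited above scale into megawatts at cluster size. A sketch using those energy-per-bit numbers; the aggregate fabric bandwidth is a hypothetical round figure for a 100k-GPU-class cluster, not a sourced spec:

```python
# Optical I/O power at cluster scale, from energy-per-bit figures.
# 15-20 pJ/bit (pluggables) and <5 pJ/bit (CPO) are the text's figures;
# the 500 Pb/s aggregate bandwidth is an illustrative assumption.

PLUGGABLE_PJ_PER_BIT = 17.5  # midpoint of the ~15-20 pJ/bit range
CPO_PJ_PER_BIT = 5.0         # upper bound of the <5 pJ/bit CPO figure

def optics_power_mw(total_tbps: float, pj_per_bit: float) -> float:
    """Optical I/O power in MW: (bits/s) * (J/bit)."""
    watts = total_tbps * 1e12 * pj_per_bit * 1e-12
    return watts / 1e6

AGG_TBPS = 500_000.0  # hypothetical: 500 Pb/s of aggregate fabric bandwidth
print(f"pluggable: {optics_power_mw(AGG_TBPS, PLUGGABLE_PJ_PER_BIT):.1f} MW")
print(f"CPO:       {optics_power_mw(AGG_TBPS, CPO_PJ_PER_BIT):.1f} MW")
```

Under these assumptions the fabric drops from roughly 9 MW to 2.5 MW - several megawatts freed for computation within the same power envelope.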
4.2 Silicon Photonics (SiPh)
Silicon Photonics is the underlying technology enabling CPO. By manufacturing optical components using standard CMOS semiconductor processes, SiPh allows for the integration of lasers and modulators directly onto silicon chips.[^32]
- Impact: SiPh enables optical I/O for GPUs - processors communicating with the bandwidth of light and the density of chips. This breaks the "memory wall": disaggregated memory architectures where GPUs access remote memory as fast as local memory.[^33] DustPhotonics and STMicroelectronics are already shrinking transceivers by 30% and cutting power by 20%.[^33]
4.3 The Cabling Revolution: AEC vs. DAC vs. AOC
Inside the rack, cable choice dictates airflow, power consumption, and reach. The tradeoffs matter more than they used to.

Table 2: Data Center Cabling Technologies Comparison
| Feature | DAC (Direct Attach Copper) | AEC (Active Electrical Cable) | AOC (Active Optical Cable) |
|---|---|---|---|
| Power Consumption | Zero (Passive) [^34] | Low (~1-2W per end) [^35] | Moderate/High (2W+ per end) [^34] |
| Reach (at 400G+) | Short (<3 meters) | Medium (5-7 meters) | Long (100m+) |
| Cost | Lowest | Moderate (Middle ground) | Highest |
| Airflow Impact | Bulky, thick gauge blocks air | Thinner gauge, better airflow | Thinnest, best airflow |
| Use Case | Top-of-Rack (ToR) | Inter-rack / Row | Cross-hall / Long haul |
- AEC (Active Electrical Cable): The sweet spot for AI clusters - copper with retimer chips to clean the signal.
- Why it wins: AECs extend copper reach to 5-7 meters (spanning multiple racks) at lower cost than optics and thinner gauge than passive copper, improving airflow.[^35] They're becoming the default for connecting AI accelerators within a row - where DAC is too short and AOC is overkill.
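Table 2's tradeoffs reduce to a simple decision rule on reach. A toy selector using the table's approximate thresholds - an illustration of the logic, not a vendor sizing rule:

```python
# Cable selection by reach, following Table 2's approximate thresholds:
# DAC under ~3 m, AEC to ~7 m, AOC beyond copper's reach.

def pick_cable(reach_m: float) -> str:
    if reach_m < 3:
        return "DAC"   # passive copper: zero power, lowest cost
    if reach_m <= 7:
        return "AEC"   # retimed copper: spans a row, cheaper than optics
    return "AOC"       # optical: the only choice for cross-hall runs

assert pick_cable(1.5) == "DAC"   # in-rack, top-of-rack
assert pick_cable(5.0) == "AEC"   # inter-rack, within a row
assert pick_cable(30.0) == "AOC"  # cross-hall
```

Cost and power both rise at each step up, which is why AEC wins the middle band: it is the cheapest technology that still covers the multi-rack distances AI rows require.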
4.4 Coolant Distribution Units (CDUs)
The CDU is the heart of the liquid cooling loop - managing flow, pressure, and temperature of the coolant.[^36] It's also become one of the most important components in the AI infrastructure stack.
- Market Dynamics: The CDU market is exploding, projected to grow from $887 million in 2024 to $3.6 billion by 2032, a CAGR of 20%.[^37]
- Technology: High-efficiency CDUs allow for "warm water cooling" (using 40°C+ water), which eliminates the need for energy-intensive chillers, allowing heat to be rejected via dry coolers even in hot climates. This significantly lowers the total data center PUE.[^38]
- Quick Disconnects (UQD): The "USB of liquid cooling." Standardization of leak-proof quick disconnect couplings (like OCP-compliant UQDs from CPC, Danfoss, Stäubli) is vital for operationalizing liquid cooling at scale. Without reliable UQDs, servicing a liquid-cooled rack is a logistical nightmare of draining and refilling fluids. The UQD market is surging as operators standardize on couplings engineered to eliminate this critical failure point.[^39]
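The chiller-free claim follows from a simple approach-temperature check: a dry cooler can only return water a few kelvin above ambient air. A sketch with the 40°C warm-water figure from the text; the 8 K dry-cooler approach is an assumed typical value, not a sourced spec:

```python
# When does warm-water cooling stay chiller-free? A dry cooler returns
# water roughly (ambient + approach) degrees; if that is at or below the
# required supply temperature, no mechanical chiller is needed.

SUPPLY_WATER_C = 40.0        # warm-water inlet temperature from the text
DRY_COOLER_APPROACH_K = 8.0  # assumed: water leaves ~8 K above ambient air

def chiller_free(ambient_c: float) -> bool:
    """True if a dry cooler alone can meet the supply-water setpoint."""
    return ambient_c + DRY_COOLER_APPROACH_K <= SUPPLY_WATER_C

for t in (20, 30, 35):
    print(f"ambient {t}C: chiller-free = {chiller_free(t)}")
```

Under these assumptions, free cooling holds up to roughly 32°C ambient - covering most hours of the year even in warm climates, which is where the PUE gains come from.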
4.5 Data Compaction: The "Dreaming" Advantage
Software can help too. Atombeam and Neurpac use "codewords" to compact data at the source, optimizing bandwidth without sacrificing accuracy.[^9]
- The "Dreaming" Concept: Future AI systems using PCM (Pattern Computer) architectures may consolidate patterns during "sleep" cycles.[^9] This would shift the thermodynamic load from continuous grind to rhythmic cycle - potentially aligning compute demand with renewable availability (wind at night, solar by day).
Section 5: Optimal Strategy - The "Green Compute" Thesis (2025)
"Green Compute" isn't ESG theater. It's an operational survival strategy for a power-constrained world. The model: distributed, resilient, biologically inspired - the "Planetary Forest" over the "Tower of Babel."[^4]
5.1 Strategy 1: Efficiency as the New Capacity
Grid power is capped. The only way to scale is extracting more operations from the same watt.
- Deploy Software Defined Power (SDP) aggressively. Move from static provisioning (stranding 40% of power) to dynamic, oversubscribed models. This is "free" capacity recovered via software. Virtual Power Systems and Uplight are the key players.[^21]
- Mandate Co-Packaged Optics for all new AI cluster builds. The 50% power savings on the network layer is one of the few ways to free up watts for GPUs.[^29]
5.2 Strategy 2: The "Island Mode" Pivot
Depending on the utility grid is now a strategic risk. Build data centers that can run independently.
- Deploy natural gas with carbon capture or Bloom Energy fuel cells as primary power. Not zero-carbon today, but they deploy in 9-12 months - a speed the grid can't match.[^17]
- Secure land with "behind-the-meter" access to existing power plants. Co-locate at a nuclear or gas plant to skip transmission queues entirely. Amazon's purchase of the Talen Energy nuclear-powered data center campus is the model.[^24]
5.3 Strategy 3: The Cooling Retrofit
Existing air-cooled facilities are becoming obsolete for AI workloads.
- Invest heavily in Liquid-to-Air CDUs. These let you deploy liquid-cooled racks in air-cooled data centers by rejecting heat from the liquid loop into the room's air stream (managed by existing CRACs). This is the bridge technology for 2025-2028 before fully liquid facilities come online.[^41]
- Standardize on Universal Quick Disconnects (UQD) to avoid vendor lock-in and future-proof the plumbing.
5.4 Strategy 4: The Silvicultural Approach ("The Forest Model")
The "Silvicultural Architecture of Cognition" argues against infinite centralization. Build a forest, not a tower.[^4]
- Decentralize inference. Push inference to the edge - use the distributed power capacity of telecom towers and metro data centers. This means smaller, specialized models rather than monolithic LLMs, mimicking biological efficiency (20 watts for a human brain vs. gigawatts for AI).[^4]
- Farm organic data. Human data is the "humus" of the AI forest. Invest in systems that preserve and verify authentic human data to prevent "digital inbreeding" and model collapse.[^4]
Conclusion: The Wall as a Filter
The Thermodynamic Wall won't end AI progress. It will kill inefficient architectures and speculative zombie projects. Brute-force scaling - more H100s in air-cooled racks on a stressed grid - is over. What wins in the next 5-10 years:
- Moving heat with liquid, not air (CDUs).
- Moving data with photons, not electrons (Silicon Photonics).
- Generating power on-site and managing it with software (SDP).

The investment alpha isn't in GPU makers - they face commoditization. It's in the picks and shovels: CDUs, UQDs, Silicon Photonics, AECs, and SMR fuel chains. The companies that help climb the wall, not the ones crashing into it.

Table 3: The "Green Compute" Investment Matrix (2025-2030)
| Sector | "Buy" Thesis (The Advantage) | "Sell" / Risk Thesis | Key Players |
|---|---|---|---|
| Cooling | Liquid CDUs & UQDs. Essential for >50kW racks. Recurring revenue on fittings/fluids. | Legacy CRAC/CRAH. Air cooling is dead for frontier AI. | Vertiv, nVent, CPC, Stäubli, CoolIT, DCX |
| Power Gen | Fuel Cells & Gas Turbines. The only "fast" power. | SMRs (Short Term). HALEU shortage and reg delays push to 2030+. | Bloom Energy, Mitsubishi, Centrus (Long term) |
| Interconnect | Silicon Photonics (CPO) & AECs. Solves the I/O power bottleneck. | Pluggable Transceivers (Long Reach). Too power hungry for intra-cluster. | Broadcom, Marvell, DustPhotonics, Credo |
| Software | Software Defined Power. Unlocks 30% "free" capacity. | Legacy DCIM. Passive monitoring is insufficient; control is needed. | Virtual Power Systems, Uplight |
| Grid | Transmission Components. Transformers/Switchgear for substations. | Speculative Solar/Wind. Interconnection queues kill IRR. | Eaton, Siemens, Hubbell |
Works cited
[^1]: AI 2030 - final version - Epoch AI, accessed December 21, 2025, https://epoch.ai/files/AI_2030.pdf
[^2]: Centrus Reaches 'Critical Milestone' With 900 Kilogram Haleu ..., accessed December 21, 2025, https://www.nucnet.org/news/centrus-reaches-critical-milestone-with-900-kilogram-haleu-delivery-to-us-doe-6-1-2025
[^3]: Clean Energy Interconnection Backlog—2025 Trends & Insights, accessed December 21, 2025, https://www.zeroemissiongrid.com/insights-press-zeg-blog/interconnection-backlog/
[^4]: (PDF) The Silvicultural Architecture of Cognition - ResearchGate, accessed December 21, 2025, https://www.researchgate.net/publication/398664899_The_Silvicultural_Architecture_of_Cognition
[^5]: ENVIRONMENTAL IMPACTS OF ARTIFICIAL INTELLIGENCE, accessed December 21, 2025, https://www.oeko.de/fileadmin/oekodoc/Report_KI_ENG.pdf
[^6]: Electricity Demand and Grid Impacts of AI Data Centers - arXiv, accessed December 21, 2025, https://arxiv.org/html/2509.07218v4
[^7]: AI power: Expanding data center capacity to meet growing demand, accessed December 21, 2025, https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/ai-power-expanding-data-center-capacity-to-meet-growing-demand
[^8]: Research summary - Ethan Wicker, accessed December 21, 2025, https://ethanwicker.com/2025-10-07-research-summary-energy-use-of-ai-inference/
[^9]: Invest in Atombeam | StartEngine, accessed December 21, 2025, https://www.startengine.com/offering/atombeam
[^10]: Chipping Point - Greenpeace, accessed December 21, 2025, https://www.greenpeace.org/static/planet4-eastasia-stateless/2025/04/5011514f-greenpeace_chipping_point.pdf
[^11]: Data Center Liquid Cooling Market Outlook and Forecast 2025-2030, accessed December 21, 2025, https://www.marknteladvisors.com/research-library/data-center-liquid-cooling-market.html
[^12]: Data Center Liquid Cooling Market Size, Companies & Share Analysis, accessed December 21, 2025, https://www.mordorintelligence.com/industry-reports/data-center-liquid-cooling-market
[^13]: Data Center Liquid Cooling Market | Size, Share, Growth | 2025 - 2030, accessed December 21, 2025, https://virtuemarketresearch.com/report/data-center-liquid-cooling-market
[^14]: The US interconnection queue is twice its installed capacity, accessed December 21, 2025, https://www.latitudemedia.com/news/the-us-interconnection-queue-is-twice-its-installed-capacity/
[^15]: Power Demand Forecasts Revised Up - Grid Strategies, accessed December 21, 2025, https://gridstrategiesllc.com/wp-content/uploads/Grid-Strategies-National-Load-Growth-Report-2025.pdf
[^16]: Data center executives pivot toward onsite power, per new report, accessed December 21, 2025, https://www.power-eng.com/onsite-power/data-center-executives-pivot-toward-onsite-power-per-new-report/
[^17]: Reliable Data Center Power Solutions - Bloom Energy, accessed December 21, 2025, https://www.bloomenergy.com/industries/data-center-power/
[^18]: Microsoft to Build Data Center Powered by Gas Fuel Cells, accessed December 21, 2025, https://www.power-eng.com/gas/turbines/microsoft-to-build-data-center-powered-by-gas-fuel-cells/
[^19]: Google signs first contract to capture emissions at natural gas plant, accessed December 21, 2025, https://trellis.net/article/google-funding-new-natural-gas-plant-outfitted-carbon-capture-storage/
[^20]: Top 40 Data Center KPIs, accessed December 21, 2025, https://img.datacenterfrontier.com/files/base/ebm/datacenterfrontier/document/2022/09/1663627559004-eb016_sunbird_ebook_top_40_data_center_kpis.pdf?dl=1663627559004-eb016_sunbird_ebook_top_40_data_center_kpis.pdf
[^21]: VPS CEO Dean Nelson on Flipping Data Centers' Wasteful Status Quo, accessed December 21, 2025, https://www.datacenterknowledge.com/sustainability/vps-ceo-dean-nelson-on-flipping-data-centers-wasteful-status-quo
[^22]: Virtual Power Systems Software Defined Power Selected by SAP, accessed December 21, 2025, https://eepower.com/news/virtual-power-systems-software-defined-power-selected-by-sap/
[^23]: Big Tech's Nuclear Bet: Key Small Modular Reactors for Cloud Power, accessed December 21, 2025, https://www.wwt.com/blog/big-techs-nuclear-bet-key-small-modular-reactors-for-cloud-power
[^24]: Executive Summary – The Path to a New Era for Nuclear Energy - IEA, accessed December 21, 2025, https://www.iea.org/reports/the-path-to-a-new-era-for-nuclear-energy/executive-summary
[^25]: NuScale cancels first planned SMR nuclear project due to lack of ..., accessed December 21, 2025, https://www.thechemicalengineer.com/news/nuscale-cancels-first-planned-smr-nuclear-project-due-to-lack-of-interest/
[^26]: The collapse of NuScale's project should spell the end for small ..., accessed December 21, 2025, https://www.utilitydive.com/news/nuscale-uamps-project-small-modular-reactor-ramanasmr-/705717/
[^27]: Building Fuel Supply Chains for SMRs and Advanced Reactors, accessed December 21, 2025, https://www.iaea.org/bulletin/fuelling-the-future-building-fuel-supply-chains-for-smrs-and-advanced-reactors
[^28]: High-Assay Low-Enriched Uranium (HALEU), accessed December 21, 2025, https://world-nuclear.org/information-library/nuclear-fuel-cycle/conversion-enrichment-and-fabrication/high-assay-low-enriched-uranium-haleu
[^29]: A Key Technology Path for Optical Interconnects in AI Data Centers, accessed December 21, 2025, https://www.naddod.com/blog/cpo-optical-interconnects-in-ai-data-centers
[^30]: Energy Efficiency in Co-Packaged Optics, accessed December 21, 2025, https://www.senko.com/energy-efficiency-in-co-packaged-optics/
[^31]: Co-Packaged Optics in Modern Data Centres - ahmedjama.com, accessed December 21, 2025, https://ahmedjama.com/blog/2025/05/co-packaged-optics-in-modern-datacenter
[^32]: How silicon photonics is powering the AI data center revolution, accessed December 21, 2025, https://blog.st.com/data-silicon-photonics-ai/
[^33]: Silicon Photonics for Data Centers | DustPhotonics, accessed December 21, 2025, https://www.dustphotonics.com/unlocking-the-potential-of-silicon-photonics/
[^34]: DAC vs AOC Cables: Complete 2025 Data Center Guide (with AEC), accessed December 21, 2025, https://network-switch.com/blogs/networking/dac-vs-aoc-cables-the-guide-2025
[^35]: Active Electrical Cables (AEC): Enabling High-Speed Connectivity, accessed December 21, 2025, https://www.fs.com/blog/active-electrical-cables-aec-enabling-highspeed-connectivity-41201.html
[^36]: CDUs: Enabling High-Density Cooling for AI Data Centers, accessed December 21, 2025, https://airsysnorthamerica.com/behind-every-ai-breakthrough-the-cdu-technology-enabling-high-density-cooling/
[^37]: Coolant Distribution Units CDU for Data Center Market Outlook 2025 ..., accessed December 21, 2025, https://www.intelmarketresearch.com/coolant-distribution-units-for-data-center-2025-2032-386-4497
[^38]: Coolant Distribution Units (CDU) for Data Center Market Size, accessed December 21, 2025, https://reports.valuates.com/market-reports/QYRE-Auto-13Y17027/global-coolant-distribution-units-cdu-for-data-center
[^39]: The Soaring Rise of Universal Quick Disconnect (UQD) Couplings, accessed December 21, 2025, https://www.intelmarketresearch.com/blog/60/universal-quick-disconnect-coupling-for-liquid-cooling-market
[^40]: Virtual Power Plant Solutions - Uplight, accessed December 21, 2025, https://uplight.com/solutions/virtual-power-plant/
[^41]: Understanding Coolant Distribution Units (CDUs) for Liquid Cooling, accessed December 21, 2025, https://www.vertiv.com/en-us/about/news-and-insights/articles/educational-articles/understanding-coolant-distribution-units-cdus-for-liquid-cooling/