Research Question

Research and synthesize the leading publicly available forecasts for US AI/data center electricity demand through 2030 from EPRI, McKinsey, BCG, Goldman Sachs, IEA, Lawrence Berkeley National Lab, and Grid Strategies. Produce a data table comparing each forecast's methodology, key assumptions (training vs. inference split, GPU efficiency curves, PUE assumptions, hyperscaler vs. enterprise mix), and 2030 demand estimate in TWh and as a percentage of total US electricity consumption. Explain the specific modeling choices that drive the widest divergences and conclude with the most defensible base-case estimate and its key sensitivities.

EPRI's state-level pipeline tracking suggests data centers could drive 9-17% of US electricity use by 2030. The method aggregates operational, under-construction, and announced projects into low/medium/high scenarios based on realization rates (e.g., the low case assumes 25% of advanced-planning capacity comes online; the high case adds 30% of early planning), then converts IT capacity (56-132 GW) to energy via implicit load factors and PUE, yielding 380-790 TWh, 60% above EPRI's 2024 forecast due to accelerated AI builds.[1][2][3][4]
- Low: 56 GW IT capacity, ~380 TWh (9%)
- Medium: 96 GW, ~590 TWh (13%)
- High: 132 GW, ~790 TWh (17%)
- % based on EPRI's US-REGEN model total demand (~4,200-4,600 TWh).[5]
- No explicit training/inference split, GPU curves, or hyperscaler mix; focuses on project pipelines vs. equipment shipments (e.g., aligns with LBNL to 2028).[6]
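The implicit capacity-to-energy conversion can be sketched as follows; the utilization and PUE values are illustrative assumptions chosen to land near the medium scenario, since EPRI treats these factors as implicit rather than publishing them:

```python
# Sketch of an EPRI-style IT-capacity-to-energy conversion.
# The utilization and PUE below are illustrative assumptions, not
# EPRI's published parameters (the report keeps them implicit).

HOURS_PER_YEAR = 8760

def annual_twh(it_capacity_gw, it_utilization, pue):
    """Annual facility energy: IT capacity x hours x utilization x PUE."""
    return it_capacity_gw * HOURS_PER_YEAR * it_utilization * pue / 1000

# Reproduce the medium scenario's magnitude (96 GW -> ~590 TWh)
# with an assumed ~55% IT utilization and a PUE of 1.28.
print(round(annual_twh(96, 0.55, 1.28)))  # ~592 TWh
```

The same formula with the low (56 GW) and high (132 GW) capacities brackets the 380-790 TWh band under modestly different assumed factors.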

For entrants, EPRI's pipeline method highlights execution risk: only ~65% of announced projects may materialize by 2030 per Grid Strategies benchmarks, favoring incumbents with sited power access over speculative builds.[7]

LBNL's bottom-up equipment model projects 325-580 TWh by 2028 (6.7-12% of US total), driven by AI server shipments in which training overtakes inference (reaching 50-53% of AI energy) amid GPU power ramps (60-80% of rated power) and PUE declines to ~1.4 via liquid cooling and the hyperscale shift.[8]
- Servers: 85% hyperscale/colocation by 2028 (vs. enterprise decline); AI servers 80% hyperscale.
- Extrapolating 13-27% CAGR to 2030: ~450-850 TWh.
- Operational: AI training 75-85% uptime (high), inference 37.5-42.5%; PUE simulated thermodynamically (e.g., air chillers 1.4-1.6, liquid lower).
- Matches EPRI through 2028 despite differing methods (shipments vs. sites).
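As a check on the extrapolation above, the per-year growth rates implied by moving from the 2028 band to the ~450-850 TWh 2030 range can be backed out; both fall inside the cited 13-27% CAGR band:

```python
# Back out the two-year growth rates implied by extrapolating LBNL's
# 2028 band (325-580 TWh) to the ~450-850 TWh 2030 range above.
def implied_cagr(v_start, v_end, years):
    """Compound annual growth rate between two values."""
    return (v_end / v_start) ** (1 / years) - 1

low_cagr = implied_cagr(325, 450, 2)    # ~17.7%/yr
high_cagr = implied_cagr(580, 850, 2)   # ~21.1%/yr
print(f"{low_cagr:.1%}, {high_cagr:.1%}")
```

Both implied rates sit in the middle of the 13-27% shipment-growth range, so the 2030 band is a conservative-to-central extrapolation rather than an upper bound.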

Competitors must match the efficiency moat LBNL documents: hyperscalers' 90-99% UPS efficiency and liquid cooling yield ~20% lower PUE than enterprise facilities, gating AI-scale inference.[8]

McKinsey's workload-driven forecast puts US data centers at 606 TWh (11.7% of total) as AI demand triples (124 GW of incremental global capacity), assuming PUE falls to 1.1 via chips and cooling while hybrid training/inference racks push density.[9]
- 25 GW (2024) to 80+ GW capacity; inference shifts edge/cloud.
- No explicit split; emphasizes GW-scale campuses for training.

New players face a $500B+ capex barrier per McKinsey: efficiency (PUE 1.1) and hybrid designs favor hyperscalers scaling AI from training to inference.[9]

| Forecaster | Methodology | Key Assumptions | 2030 TWh | % US Total (implied ~5,100 TWh, EIA AEO2026)[10] |
|---|---|---|---|---|
| EPRI[2] | Pipeline (sites/construction/announced) to IT GW → TWh | Realization rates; implicit LF/PUE | Low 380 / Med 590 / High 790 | 9/13/17%[3] |
| LBNL (ext.)[8] | Bottom-up: shipments × power × util × PUE | Train > inf by '28; GPU 60-80% rated; PUE ~1.4; 85% hyperscale | 450-850 | 9-17% |
| McKinsey[9] | Workload/capex growth (AI/HPC) | PUE 1.1; train/inf hybrid; 80+ GW | 606 | 11.7% |
| IEA[11] | Global model; US ~40% share | +240 TWh on 185 (2024); efficiency gains | ~425 | ~8% |
| Goldman Sachs[12] | Server shipments × intensity | 60% of global in US; AI 39% | ~750 | ~15% |
| BCG (est.)[13] | Compute modeling | High AI/workload; global analogs | ~970 | ~19% |
| Grid Strategies[7] | Utility FERC aggregation (adj. down) | High LF critique; benchmarks 65 GW | ~65 GW (~500 TWh est. at LF=90%) | ~10% peak growth |
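The Grid Strategies TWh figure follows from a simple peak-capacity-to-energy conversion at the stated 90% load factor; a minimal check:

```python
# Convert a peak-capacity benchmark to annual energy at a given load
# factor, reproducing the "~500 TWh est. LF=90%" figure for 65 GW.
def twh_from_peak(peak_gw, load_factor):
    """Annual TWh from peak GW at a sustained load factor."""
    return peak_gw * 8760 * load_factor / 1000

print(round(twh_from_peak(65, 0.90)))  # ~512 TWh, i.e. the "~500 TWh est."
```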

The widest divergences stem from pipelines (EPRI: sites) vs. shipments (LBNL: GPUs) vs. workloads (McKinsey/GS: AI adoption). EPRI and Grid Strategies cap estimates at announced projects (risking overbuild assumptions), LBNL moderates via utilization and PUE, and the high end (BCG/GS) assumes an unchecked inference explosion outpacing efficiency gains (e.g., GS's +220% global growth ignores load factors below 100%).[7][8]

Defensible base case: 500-600 TWh (10-12% of US total), blending EPRI medium, McKinsey, and the LBNL high end. This aligns bottom-up equipment efficiency with project pipelines, assuming hyperscaler dominance (85% of servers), PUE ~1.2-1.4, and a ~50/50 training/inference split. Sensitivities: +20% if inference exceeds training (the GS risk case); -15% with faster GPU efficiency gains or PUE below 1.2; ±10% on realization rates (per Grid Strategies' adjustment).[10]
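Applying the stated sensitivities to the 500-600 TWh band gives the following ranges (straightforward arithmetic on the endpoints):

```python
# Sensitivity bands around the 500-600 TWh base case; the percentage
# swings are the ones stated above, applied to both endpoints.
base = (500, 600)

def band(low, high, pct):
    """Shift a (low, high) TWh band by a fractional swing."""
    return (round(low * (1 + pct)), round(high * (1 + pct)))

print(band(*base, +0.20))  # inference > training:      (600, 720)
print(band(*base, -0.15))  # faster GPU/PUE gains:      (425, 510)
print(band(*base, +0.10))  # realization-rate upside:   (550, 660)
```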

Entrants: target inference efficiency (70% of lifecycle per LBNL) and co-locate with renewables for a PUE edge, as pipelines favor power-secured hyperscalers.


Recent Findings Supplement (April 2026)

EPRI's February 2026 Update Resets the High End of Forecasts

EPRI's "Powering Intelligence 2026" report, released February 25, 2026, raised its US data center electricity projections 60% versus its 2024 estimates, driven by 18 months of accelerated AI-fueled project announcements. It uses state-level commercial development pipelines (operational, under construction, advanced/early planning) rather than equipment shipments, assuming most new capacity ramps quickly, with type-specific load factors (higher for AI/hyperscalers), cooling/overhead loads, and PUE implicitly around 1.2-1.4 based on industry trends. This yields low/medium/high 2030 demand of 384/596/793 TWh (9/13/17% of ~4,400 TWh total US generation), aligned with LBNL's 2028 band but extended to 2030 via project realization rates (e.g., the high case assumes all construction and advanced planning plus 30% of early planning).[1][2][3][4][5][6]
- Capacity: 56 GW low (384 TWh), 96 GW medium (596 TWh), 132 GW high (793 TWh); 2024 baseline 35-44 GW / 177-192 TWh (4-5%).
- Consistent with LBNL through 2028 (325-580 TWh) despite methodological divergence: EPRI favors hyperscaler/AI buildout data over LBNL's bottom-up server shipments.[7]
For competitors: project pipelines can undervalue efficiency gains or overstate realization without EPRI's granular state data; sensitivities include planning-stage dropouts (bearish) or a crypto/AI surge (bullish).
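The realization-rate structure described above can be sketched as follows; the stage totals are hypothetical, back-solved only so the low and high cases reproduce EPRI's 56 and 132 GW endpoints (EPRI's actual stage-by-stage GW breakdown is not given here):

```python
# Structural sketch of EPRI's pipeline realization scenarios.
# Stage totals below are hypothetical placeholders, back-solved to hit
# the 56 GW (low) and 132 GW (high) endpoints; only the realization-rate
# logic follows the report's description.
pipeline_gw = {
    "operational_and_construction": 40,  # assumed fully realized in all cases
    "advanced_planning": 64,
    "early_planning": 93,
}

def scenario_gw(p, advanced_rate, early_rate):
    """IT capacity realized under a scenario's planning-stage rates."""
    return (p["operational_and_construction"]
            + advanced_rate * p["advanced_planning"]
            + early_rate * p["early_planning"])

low = scenario_gw(pipeline_gw, 0.25, 0.0)    # low: 25% of advanced planning
high = scenario_gw(pipeline_gw, 1.00, 0.30)  # high: all advanced + 30% early
print(low, round(high))  # 56.0 132
```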

Goldman Sachs February 2026 Revision Ties Growth to Hyperscaler Capex and Inference Intensity

Goldman Sachs revised its global data center forecast upward to 220% growth by 2030 vs. 2023 (from a prior 175%), implying a ~543 TWh baseline plus ~905 TWh of growth toward ~1,350 TWh total (~60% US share, or ~810 TWh US including a ~200 TWh baseline). The model combines server shipments (TMT team revisions), genAI efficiency gains (low double digits annually) offset by power-dense AI servers/GPUs (e.g., +68% per server for the Rubin era), a rising inference mix (greater intensity vs. training), and capex surges; US capacity reaches 95 GW by 2030 (~10-12% of ~5,000-5,600 TWh total US electricity).[8][9]
- US skew: 60% of incremental demand (vs. prior 50%), driven by hyperscaler reinvestment revised upward to >$300B for 2026-27.
- Assumptions: Inference > training power (debated pervasiveness/automation); non-AI ~10% annual growth; PUE unstated but implied via capacity-to-energy.
For entrants: server-level modeling misses site constraints; the key sensitivity is inference efficiency breakthroughs (which could cut 20-30% off the high end).
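The US figures above follow from applying the ~60% share to the ~1,350 TWh global total; a one-line sanity check against the 5,000-5,600 TWh US totals cited:

```python
# Check the Goldman US figures: ~60% of the ~1,350 TWh global 2030
# total, expressed as a share of a 5,000-5,600 TWh US base.
global_2030_twh = 1350
us_twh = 0.60 * global_2030_twh
print(round(us_twh))                           # ~810 TWh
print(f"{us_twh/5600:.0%}-{us_twh/5000:.0%}")  # ~14%-16% of US total
```

The 14-16% range matches the share shown in the comparison table below, against the lower and upper ends of the US-total band.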

| Forecaster | Date | Methodology | Key Assumptions | 2030 US DC Demand (TWh / % Total US) |
|---|---|---|---|---|
| EPRI (Low/Med/High) | Feb 2026 | State-level project pipelines (const./adv./early planning %) to IT capacity, then load factors/PUE | AI/hyperscaler high load factor; ramp rates; cooling/overhead; 60% above prior EPRI | 384 / 9%; 596 / 13%; 793 / 17% (total US ~4,400 TWh)[5][6] |
| LBNL | Dec 2024 (cited 2026) | Bottom-up IT shipments (servers/storage/network), ops utilization, PUE/WUE sims | GPU/AI growth (inference 60%→50%); PUE 1.15-1.35; 50% utilization; hyperscale dominance | 325-580 (2028: 6.7-12%); trend to ~500-700 (2030 extrapolation, not a published LBNL figure)[7] |
| Goldman Sachs | Feb 2026 | Server shipments, AI mix/efficiency curves, capex | Inference intensity up; GPU power-dense; low-double-digit efficiency; ~60% US growth share | ~810 total (~543 base + growth; 14-16%)[9] |
| Grid Strategies (agg. utilities) | Nov 2025 | Utility/RTO FERC filings aggregation | High DC load factors (~96%); ~55% of growth from DC (90 GW peak) | ~900-1,000 (16-18% of 5,591 TWh total; likely ~25 GW/40% DC overstatement)[10] |
| BCG | Mar 2026 | Bottom-up peak capacity modeling | AI buildout to firm power gap; hyperscaler focus | 50-80 GW gap (implies ~400-650 TWh at high utilization; 8-12%)[11] |
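The implied TWh range in the BCG row can be reproduced from the 50-80 GW gap at an assumed ~92% utilization (the utilization is an assumption chosen to land on the ~400-650 TWh range; BCG reports only GW):

```python
# Reproduce the implied TWh range from BCG's 50-80 GW firm-power gap.
# The ~92% utilization is an assumption, not a BCG-published figure.
def gw_gap_to_twh(gw, utilization):
    """Annual TWh implied by a GW capacity gap at a given utilization."""
    return gw * 8760 * utilization / 1000

print(round(gw_gap_to_twh(50, 0.92)), round(gw_gap_to_twh(80, 0.92)))
# -> 403 645, matching the "~400-650 TWh @ high util." entry
```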

Divergences Stem from Pipeline Realization vs. Equipment Constraints

The widest gaps (EPRI low 384 TWh vs. high 793 TWh; utility forecasts overstated per Grid Strategies) arise from project-completion assumptions (EPRI: 25-100% across planning stages) vs. bottom-up limits (LBNL/Goldman: GPU shipments, utilization, PUE 1.15-1.35, inference at 40-60% of mix). Utilities assume near-100% load factors without supply bottlenecks, while analysts cap at 50-96%; EPRI and LBNL converge in the mid-500s TWh even though EPRI includes crypto and enterprise load beyond hyperscalers.[10][7]
For market players: over-reliance on announcements risks a 40% shortfall; PUE declines and inference optimization are the most sensitive levers (10-20% swing).

Grid Strategies Highlights Utility Optimism on DC Load Factors

The November 2025 aggregation of utility/RTO forecasts shows 166 GW of peak growth (20% of total), with 90 GW (55%) from data centers at ~96% load factor, driving total US energy to 5,591 TWh (+32%); this is likely a 25 GW (40% of DC) overstatement vs. analysts (e.g., a 65 GW maximum based on chip supply). No split assumptions are detailed, but the forecasts imply a hyperscaler-heavy mix with minimal efficiency curves.[12][10]
Implication: Regional queues undervalue delays; competitors need interconnection reforms.

No Major Post-2025 Updates from McKinsey, BCG, IEA

McKinsey (pre-2026 citations) puts US DC power at ~35 GW (~300-400 TWh est.); BCG (Mar 2026) flags a 50-80 GW firm-power gap (no TWh given); IEA (Apr 2026) projects global demand doubling to 945 TWh (US ~40-50% share, ~400 TWh base). Methods emphasize AI inference but provide no new US-specific TWh or percentage figures, and no direct post-October 2025 shifts.[13]

Defensible Base Case: 550-650 TWh (11-13% of US Total)

The mid-range synthesis (EPRI medium 596 TWh, LBNL trend-adjusted, Goldman's US share) lands at 550-650 TWh / 11-13% (of ~5,000 TWh total US per Grid/IEA trends), balancing pipelines (bullish on AI/hyperscalers) with shipments and PUE (conservative 50% utilization, inference efficiency). This is the most defensible case because the EPRI and LBNL methods converge despite their differences, with the utilities' high bias corrected.[10]
- Sensitivities: +20% if planning exceeds 65 GW (per Grid Strategies' adjustment); -15% with PUE/inference gains; ±10% from policy (e.g., interconnection queues).
To compete: focus on the midcase and hedge via behind-the-meter nuclear/gas-CCUS (per BCG); no recent regulatory shifts noted.