The Energy Bill Nobody’s Reading
Let me tell you about a conversation I’ve been having with data center operators across Europe and the US.
The question I ask is always the same: what’s your energy contract structure?
The answers should terrify anyone building AI infrastructure right now.
AI data centers are on track to consume 8% of US electricity by 2030, up from roughly 2% in 2022. That’s not a forecast from some think tank hedging its bets. That’s from the Department of Energy’s own modeling, backed by utility capacity planning documents that are already being executed.
Utilities are signing 20-year power agreements.
AI companies are not.
That gap — between the infrastructure commitments that power AI and the contract structures that pay for it — is where the next crisis in this industry is being built in slow motion.
The Physics of the Problem
A single frontier-scale large language model training run costs between $50 million and $100 million in compute. That’s the headline. Nobody talks about the fact that 40-60% of that cost is electricity. At scale, AI infrastructure is an energy business that happens to produce tokens.
NVIDIA’s H200 GPUs — the current gold standard — draw about 700 watts each under full load. A standard 8-GPU server: 5.6 kilowatts. A rack of eight servers: ~45 kilowatts. A modest 100-rack cluster: 4.5 megawatts.
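Here’s that arithmetic as a quick back-of-envelope script. The GPU draw and rack layout come straight from the figures above; the PUE overhead factor is my own assumption, because those figures are GPU draw only and the utility bills for the whole facility.

```python
# Back-of-envelope cluster power estimate. The GPU draw and layout mirror
# the figures above; the PUE (facility overhead for cooling, power
# conversion, etc.) is an assumption not stated in the text.

GPU_WATTS = 700          # H200 under full load
GPUS_PER_SERVER = 8
SERVERS_PER_RACK = 8
RACKS = 100
PUE = 1.3                # assumed facility overhead; varies widely by site

server_kw = GPU_WATTS * GPUS_PER_SERVER / 1000    # 5.6 kW per server
rack_kw = server_kw * SERVERS_PER_RACK            # ~45 kW per rack
cluster_it_mw = rack_kw * RACKS / 1000            # ~4.5 MW of IT load
cluster_facility_mw = cluster_it_mw * PUE         # what the utility actually sees

print(f"Per server: {server_kw:.1f} kW")
print(f"Per rack: {rack_kw:.1f} kW")
print(f"Cluster IT load: {cluster_it_mw:.2f} MW")
print(f"Cluster facility load: {cluster_facility_mw:.2f} MW at PUE {PUE}")
```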
The hyperscaler data centers being announced now?
200, 500, 1,000 megawatts.
Microsoft’s announced AI data center commitments for 2025-2026 alone: $80B capex.
Where does that electricity come from?
In the near term: natural gas peakers. Coal in some regions. Nuclear where available. Renewables where the grid can handle it. The clean energy story most AI companies tell their investors doesn’t match the power purchase agreements they’re actually signing.
The Utility Lock-In Nobody’s Talking About
Here’s the structural trap.
Electric utilities are regulated monopolies.
When a data center operator comes to them needing 500 megawatts of new capacity, the utility has to build generation, transmission, and distribution infrastructure to deliver it. That infrastructure has a 20-40 year lifespan.
Utilities are not building that infrastructure speculatively. They are requiring power purchase agreements — long-term contracts, typically 10-20 years, with take-or-pay provisions. You commit to paying for the power whether you use it or not.
The data center operators building for the AI boom are signing these agreements. Some of them are AI companies themselves. Many of them are independent operators who signed LOIs and supply agreements with AI company customers on much shorter terms — 1-3 year agreements, with renewal options.
Do you see the mismatch?
The utility contract: 20 years.
The AI customer contract: 1-3 years.
If the AI customer doesn’t renew — because the model changed, the competitor got cheaper, the startup ran out of money — the data center operator eats the difference. Locked into paying for power that nobody needs.
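To put a number on the mismatch, here’s a minimal sketch. The contract lengths mirror the ones above; every dollar figure and the non-renewal scenario are illustrative assumptions, not terms from any real agreement.

```python
# Illustrative take-or-pay exposure for a data center operator.
# Contract lengths mirror the mismatch described above; the annual dollar
# figure and the non-renewal scenario are assumptions for illustration only.

UTILITY_TERM_YEARS = 20          # take-or-pay power purchase agreement
CUSTOMER_TERM_YEARS = 3          # AI customer compute agreement
ANNUAL_POWER_COMMITMENT = 40e6   # assumed $/year owed whether or not power is used

# Scenario: the AI customer walks away after its initial term.
uncovered_years = UTILITY_TERM_YEARS - CUSTOMER_TERM_YEARS
stranded_obligation = uncovered_years * ANNUAL_POWER_COMMITMENT

print(f"Years of power payments with no customer: {uncovered_years}")
print(f"Stranded take-or-pay obligation: ${stranded_obligation / 1e9:.2f}B")
# With these assumptions: 17 years x $40M = $680M owed for power nobody is using,
# unless the operator re-sells the capacity or finds a replacement tenant.
```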
The European Version of This Problem
In the EU, the problem has an additional layer. Energy prices are structurally higher — averaging 2-3x US industrial rates in Western Europe, and volatile in ways the US market isn’t. The combination of high base energy cost, carbon pricing, and intermittent renewables creates margin compression that makes the US AI infrastructure economics look generous.
The EU data sovereignty requirements are creating demand for EU-jurisdiction AI infrastructure. That’s genuinely good for operators who can deliver it. But the energy economics mean that EU-based GPU compute is approximately 40-60% more expensive per token than equivalent US infrastructure — a structural disadvantage that doesn’t go away with better software.
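Here’s a rough sketch of where a premium in that range comes from. The cost split and the energy multiplier are assumptions I’ve picked to be consistent with the 2-3x price ratio above, not measured figures.

```python
# Rough per-token cost comparison, US vs EU GPU compute.
# The cost split and the energy-price multiplier are assumptions chosen to be
# consistent with the ranges above; none of these are measured figures.

US_COST_PER_TOKEN = 1.00         # normalized: total US serving cost = 1.0
ENERGY_SHARE_US = 0.35           # assumed share of that cost that is electricity
EU_ENERGY_MULTIPLIER = 2.5       # EU industrial power at roughly 2-3x US rates

non_energy = US_COST_PER_TOKEN * (1 - ENERGY_SHARE_US)
energy_us = US_COST_PER_TOKEN * ENERGY_SHARE_US
energy_eu = energy_us * EU_ENERGY_MULTIPLIER

eu_cost = non_energy + energy_eu
premium = eu_cost / US_COST_PER_TOKEN - 1

print(f"EU cost per token (normalized): {eu_cost:.2f}")
print(f"EU premium over US: {premium:.0%}")  # lands inside the 40-60% range above
```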
The operators who figure out how to amortize that energy premium through platform services — fine-tuning, inference serving, developer tooling — are the ones who can make the economics work.
Bare-metal renting alone doesn’t.
Who Holds the Bag
Three groups are exposed here in different ways.
First: hyperscalers who signed the 20-year power agreements. If AI capex rationalizes faster than expected — and it will, because it always does — they’re stuck with stranded power capacity. Microsoft, Google, Amazon all have the balance sheets to absorb it. It hurts. It doesn’t kill them.
Second: independent data center operators who levered up to build AI-specific infrastructure on the assumption of multi-year customer contracts. These are the ones keeping me up at night. The private equity-backed builds, the regional operators, the “AI data center REIT” structures being pitched to institutional investors right now. When the AI customer churn hits, the power contracts don’t churn with them.
Third: the AI companies themselves, who signed multi-year compute commitments and now face a world where open-source models have halved the compute requirement. They’ve prepaid for capacity they no longer need. That’s a balance sheet problem.
The Opportunity in the Rubble
None of this means don’t build.
It means build differently.
The operators who survive the inevitable rationalization will be the ones who didn’t build for one customer, didn’t sign 20-year power agreements on the assumption of 3-year customer relationships, and built platform layers that create genuine switching costs.
The energy bill is coming.
The question is who planned for it and who just assumed the party would last.