The Grid That Cannot Keep Up
For the first two years of the AI infrastructure buildout, the binding constraint was GPU availability. That constraint eased as Nvidia ramped production. The new binding constraint is one that no amount of chip supply can fix: getting enough electric power to the racks.
The PJM Interconnection is the largest electricity grid in North America, serving 65 million people across 13 states, including the heavily data-center-saturated Northern Virginia region. In July 2025, PJM announced the results of its annual capacity market auction, the mechanism that determines how much generating capacity will be available, and at what price, for a future delivery year. The result was a signal that the entire AI infrastructure industry should have treated as a five-alarm warning.
Capacity market clearing prices for the 2026-2027 delivery year hit $329.17 per megawatt-day. The price for the 2024-2025 delivery year had been $28.92 per megawatt-day, an increase of roughly 1,038% between the two auctions, in a capacity market that had been largely stable for over a decade. PJM explicitly identified rapid data center load growth as a major contributing factor.
- $28.92: PJM capacity clearing price, 2024-25 delivery year ($/MW-day)
- $329.17: PJM capacity clearing price, 2026-27 delivery year ($/MW-day)
- 1,038%: price increase between the two auctions
- 160 weeks: current substation transformer lead time (up from roughly 140 weeks in 2023)
Why Grid Interconnection Is the Real Bottleneck
Building a large data center requires connecting it to the electrical grid at transmission voltage. This interconnection process — securing a queue position, completing the interconnection study, negotiating agreements with the utility, waiting for utility infrastructure upgrades — routinely takes 3-7 years under normal circumstances. In regions with high data center demand, the queue is measured in gigawatts of pending requests.
Dominion Energy, the dominant utility in Northern Virginia, has publicly stated that it faces challenges meeting data center power demand until at least 2026, despite having already added over 3 GW of data center capacity. That is a utility that has been preparing for this demand wave for years, with full visibility into the pipeline, acknowledging that it cannot keep up.
The physical hardware constraints compound the timeline problem. Substation transformer lead times have stretched from roughly 140 weeks in 2023 to more than 160 weeks in 2026. Switchgear timelines remain elevated. The electrical equipment market for data centers is projected to grow from $20 billion in 2026 to $65 billion by 2030 — but that equipment takes years to manufacture and install. You cannot accelerate the grid by writing a larger check.
The On-Site Generation Pivot
The most sophisticated AI infrastructure operators are responding to grid constraints by effectively exiting the grid for their largest deployments. Not partially — entirely. They are building dedicated on-site power generation, signing long-term offtake agreements with power generators, and in some cases financing the construction of new generation capacity themselves.
Cleanview’s February 2026 report projects that 30% of anticipated data center energy capacity will come from on-site generation sources — up from effectively zero just two years ago. The company’s founder forecasts that figure rising to 50% as the grid constraint becomes increasingly binding. This is not a fringe development. Microsoft has piloted using data center batteries for grid services in Europe. Constellation Energy has signed major deals to supply nuclear power directly to data centers. Exxon Mobil is supplying natural gas for on-site generation at scale.
“The smartest infrastructure operators have stopped waiting for the grid. They’re building power plants. The AI data center of 2027 will generate its own electricity the same way a 19th-century factory ran its own steam engine.”
What This Means for Infrastructure Deployment Strategy
If grid interconnection is the binding constraint — measured in years, not months — then the competitive advantage in AI infrastructure deployment shifts dramatically toward operators who have either secured power already or have the technical capability to deploy on-site generation alongside compute.
A modular data center co-located with a natural gas turbine, a small modular reactor, or a stranded renewable energy source does not need to join the interconnection queue. It generates its own power, connects to its own generation, and operates independently of the grid constraints that are crippling conventional data center development timelines. Crusoe Energy’s West Texas deployment with Energy Vault’s modular powered shells is exactly this model — AI compute co-located with energy sources that have no grid queue problem because they bypass the grid entirely.
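As a rough illustration of the energy-first sizing exercise such an operator faces, the sketch below estimates required on-site generation from an IT load. The 50 MW IT load, PUE of 1.3, 16 MW turbine size, and N+1 redundancy are assumed example values, not figures from any cited deployment:

```python
import math

def required_generation_mw(it_load_mw: float, pue: float = 1.3) -> float:
    """Total facility draw: IT load scaled by power usage effectiveness
    (cooling, power distribution losses, and other overhead)."""
    return it_load_mw * pue

def turbine_count(facility_mw: float, unit_mw: float, redundancy: int = 1) -> int:
    """Generation units needed to cover the facility load, plus
    'redundancy' spare units to ride through maintenance outages."""
    return math.ceil(facility_mw / unit_mw) + redundancy

# Assumed example: 50 MW IT load, PUE 1.3, 16 MW gas turbines, N+1
facility = required_generation_mw(50)        # 65.0 MW total draw
units = turbine_count(facility, unit_mw=16)  # ceil(65/16) + 1 = 6 units
print(f"Facility load: {facility:.0f} MW, turbines required: {units}")
```

The point of the exercise is that generation sizing, not floor space, becomes the first design decision: the compute footprint follows from the power plant, not the other way around.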
This is a genuine first-mover advantage. The operators who figure out the energy-first deployment model — site selection based on power availability rather than real estate convention, modular facilities designed for on-site generation integration, the operational expertise to run both computing and power generation infrastructure simultaneously — are building a competitive position that cannot be replicated quickly by hyperscale operators locked into their existing grid-dependent development model.
The grid cannot keep up. That is not a temporary bottleneck that will resolve as utilities invest. It is a structural condition that will persist for at least the rest of this decade, given transformer lead times, interconnection queue depths, and the pace of AI compute demand growth. The operators who treat it as a permanent constraint to route around — rather than a temporary problem to wait out — will have a significant structural advantage.


