AI Compute Is the New Electricity
Or: why the people who build the infrastructure, not the applications, will capture the structural value.
Every economy runs on an invisible infrastructure nobody thinks about. We flick switches without thinking about power plants. We stream video without thinking about fiber. AI compute is becoming the third entry on that list — and the people who build the infrastructure, not the applications, will capture the structural value.
Nobody in 1890 was thinking about the power grid. They were thinking about the lightbulb. Edison’s invention was dazzling — but the money wasn’t in the bulb. It was in the infrastructure required to power the bulb at scale. The generating station. The transmission lines. The substations. The meters. The whole invisible system that made the light possible.
We are in the exact same moment with AI. Everybody is talking about the application — the chatbot, the agent, the model. Almost nobody is talking about the infrastructure required to run those applications at civilizational scale. And that is the asymmetric opportunity I have been building toward for the last four years.
The Numbers That Change the Framing
Let me give you the data that shifted my thinking. The International Energy Agency published its landmark Energy and AI report in April 2025.
The numbers are staggering. Global data center electricity consumption is on track to exceed 1,000 terawatt-hours by end of 2026 — equivalent to Japan’s entire annual electricity consumption.
In the US alone, Bloom Energy’s January 2026 infrastructure report estimates total data center energy demand will nearly double from 80 GW in 2025 to 150 GW by 2028. Driven almost entirely by AI.
1,000 TWh: global data center electricity consumption by end of 2026 (IEA)
150 GW: US data center power demand by 2028 (Bloom Energy, Jan 2026)
$400B+: capital expenditure by five large tech companies in 2025 alone (IEA)
+75%: expected increase in that capex figure in 2026 (IEA)
Five large technology companies spent more than $400 billion on AI infrastructure in 2025. The IEA projects that number will increase by a further 75% in 2026.
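To make that projection concrete, here is the back-of-envelope arithmetic. This is a sketch built from the IEA figures cited above; the 2026 total is an implication of the growth figure, not a number the IEA publishes directly:

```python
# Rough projection of 2026 AI infrastructure capex from the IEA figures
# cited above. A back-of-envelope sketch; the 2026 total is an implication
# of the IEA's growth figure, not a number the IEA publishes directly.

capex_2025_bn = 400          # lower bound: five large tech companies (IEA)
growth_2026 = 0.75           # IEA's projected further increase in 2026

capex_2026_bn = capex_2025_bn * (1 + growth_2026)
print(f"Implied 2026 capex: >${capex_2026_bn:,.0f}B")  # >$700B
```

More than $700 billion in a single year.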
This is not R&D spending.
This is physical infrastructure — steel, concrete, transformers, cooling systems, fiber, GPUs. The biggest capital deployment in the history of the technology industry.
And yet, the framing in most investment conversations is still about which AI application will win.
Which model will dominate.
Which company’s chatbot will become the interface layer.
That is like arguing about which lightbulb design is prettier while Thomas Edison is selling shares in the power station.
The Utility Analogy Is More Precise Than It Sounds
When I say AI compute is the new electricity, I am not making a poetic comparison. I mean it with engineering precision.
Electricity has four defining characteristics as a utility:
1) it is fungible (a watt is a watt regardless of source);
2) it is metered (you pay for what you consume);
3) it is delivered via infrastructure (you don’t need to own the generator);
4) it is essential (not optional for modern economic activity).
AI compute is acquiring all four of those characteristics right now.
A GPU-hour of inference compute is increasingly fungible across providers. It is already metered by every major cloud provider down to the millisecond.
The infrastructure to deliver it — data centers, fiber, power — is being built by a handful of large capital allocators.
And within five years, it will be as non-optional for serious enterprise operations as an internet connection is today.
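To make the metering point concrete, here is a minimal sketch of what utility-style compute billing looks like. The GPU-hour rate and workload figures are hypothetical, not any provider’s actual pricing:

```python
# Minimal sketch of utility-style compute metering. The GPU-hour rate and
# workload figures below are hypothetical, not any provider's pricing.

GPU_HOUR_RATE_USD = 2.50  # assumed on-demand price per GPU-hour

def inference_cost(duration_ms: float, gpus: int,
                   rate: float = GPU_HOUR_RATE_USD) -> float:
    """Cost of one inference call, metered at millisecond granularity."""
    gpu_hours = gpus * duration_ms / 3_600_000  # ms -> GPU-hours
    return gpu_hours * rate

# A 250 ms call on 8 GPUs, a million times a day:
per_call = inference_cost(250, 8)
print(f"${per_call:.6f} per call, ~${per_call * 1_000_000:,.0f}/day")
# -> $0.001389 per call, ~$1,389/day -- billed like kilowatt-hours
```

Swap the units and this is an electricity bill: a rate, a meter, a consumption figure.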
“Nobody questions where the electricity in their office came from. Within five years, nobody will question where the compute running their AI systems came from. It will just flow from the grid.”
The difference between AI compute and electricity is the timeline.
It took electricity roughly fifty years to become a utility. AI compute will do it in less than a decade.
The reason: the demand is coming from enterprises that are already wired, already connected to cloud infrastructure, and already paying for compute.
The distribution problem is mostly solved. What isn’t solved — and this is where the real opportunity lives — is the supply side.
The Supply Side Is Structurally Broken
Here is the problem nobody in the mainstream investment conversation is grappling with honestly.
The supply side of AI compute infrastructure has a fundamental structural deficiency.
The current architecture — built on hyperscale data centers designed for traditional cloud workloads — is not capable of delivering what AI inference at civilizational scale actually requires.
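The core constraint is power density. A rough sketch of the arithmetic, with assumed densities (roughly 10 kW per rack is typical of a traditional cloud hall; dense liquid-cooled AI racks run on the order of 120 kW):

```python
# Why hyperscale halls built for cloud workloads struggle with AI racks.
# Density figures are assumptions for illustration: ~10 kW/rack is typical
# of traditional cloud, while dense liquid-cooled AI racks run 100+ kW.

FACILITY_MW = 50  # hypothetical facility critical IT load

for label, kw_per_rack in [("traditional cloud", 10), ("high-density AI", 120)]:
    racks = FACILITY_MW * 1000 / kw_per_rack
    print(f"{label:>17}: {racks:>5,.0f} racks at {kw_per_rack} kW each")

# Same 50 MW feeds ~5,000 cloud racks but only ~417 AI racks. The binding
# constraint shifts from floor space to power delivery and heat rejection
# per rack -- which legacy halls were never engineered for.
```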
Wood Mackenzie published a report in May 2026 projecting that US data center capacity needs to scale from roughly 24 GW today to 100 GW by 2030.
That is more than a 4x expansion in four years. Simultaneously, substation transformer lead times have stretched from 140 weeks in 2023 to more than 160 weeks in 2026.
Switchgear timelines remain elevated. Specialized cooling equipment for high-density AI racks is in short supply globally.
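Run the numbers on that capacity projection and the required pace becomes obvious. A quick sketch, assuming a 2026 baseline and smooth compounding:

```python
# Implied growth rate behind the Wood Mackenzie projection cited above:
# US data center capacity scaling from ~24 GW to 100 GW by 2030.
# Assumes a 2026 starting point and smooth compounding -- illustrative only.

start_gw, target_gw, years = 24, 100, 4

multiple = target_gw / start_gw        # total expansion factor
cagr = multiple ** (1 / years) - 1     # implied compound annual growth rate

print(f"{multiple:.1f}x expansion over {years} years")  # 4.2x over 4 years
print(f"Implied annual growth: {cagr:.0%}")             # ~43% per year
```

Sustaining roughly 43% annual growth in physical capacity, against multi-year equipment lead times, is the mismatch at the heart of the problem.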
You cannot build a hyperscale cathedral in time. The lead times alone make it impossible.
A traditional hyperscale data center takes 4-7 years from site selection to full operational status.
The AI workload that data center was designed to serve will have evolved beyond recognition by the time it comes online.
This is not a hypothesis. It is already happening.
The Infrastructure Layer Is Where the Money Goes
I spent eleven years building enterprise solutions for Fortune 100 companies. Volkswagen, KBC, Mondelez. I have seen technology transitions from inside the organizations that have to absorb them.
And the pattern is always the same: the application layer is visible and gets all the attention, while the infrastructure layer is invisible and captures the structural economics.
Who made money from the internet? Not the websites. The fiber. The routers. The CDNs.
Who made money from mobile? Not the apps. The spectrum. The towers. The backhaul.
Who is going to make money from AI at scale? Not the models. The compute. The power. The cooling. The interconnect.
The utility thesis is not about being contrarian for its own sake.
It is about following the physics. AI systems require electricity to train and electricity to run inference.
That electricity must come from somewhere. That compute must live somewhere. That somewhere — the physical infrastructure of AI — is the investment thesis. Everything else is derivative.
We are in the earliest innings of a transition that will make the buildout of the electrical grid look modest in ambition.
The question is not whether AI compute will become a utility. It already is one, just not yet at full scale.
The question is who builds the infrastructure, at what cost, and with what architecture.
That is what the rest of this series is about.
The DCXPS Thesis in One Paragraph: AI compute becomes a metered utility within five years. The infrastructure to deliver that utility must be modular, deployable at the speed of demand, co-located with energy sources rather than constrained by grid interconnection queues, and designed for rack densities that existing hyperscale facilities cannot physically support. The window to build that infrastructure is now. The window to invest in it is also now.