Will the AI data centre boom become a $9T bust?

https://www.ft.com/content/805f78f3-8da3-4fc0-b860-207a859ac723

Comments

mattasMar 29, 2026, 3:16 AM
monodeldiabloMar 29, 2026, 4:06 AM
It's not really even a question. It's an obvious boondoggle. The forecasted net new energy requirements for the AI buildout over the next couple of years are roughly equivalent to all of Western Europe's power demand today.

That's absurd. It's a physical impossibility to bring that much power online that quickly. And the cost to get even close would make AI more expensive than just hiring knowledge workers to do the same tasks.

And it's all predicated on a tower of wobbly or broken assumptions -- chief among them that increasing the size of these models yields better performance.

We're going to look back on this era and wonder why anybody took any of the outrageous claims of tech CEOs seriously.

aoeusnth1Mar 29, 2026, 4:36 AM
> Wobbly assumption that increasing the size of these models yields better performance.

I'm assuming you disagree that larger models are better? Can you expand on what indicates that AI will hit a wall in scaling given the evidence of the last 9 years of scaling transformers (or other models)? Where on the plot does the line go from exponential to flat?

monodeldiabloMar 29, 2026, 5:10 AM
Leaks from within OpenAI have made it pretty clear that they've been struggling to achieve significant improvements lately by simply scaling up parameter size. Experts like LeCun have also been vocal that blindly scaling up is a dead end.

(Incidentally, the line of skill improvement isn't "exponential". It's been incremental in improvements per generation, but generations have been coming thick and fast of late, and have grown in parameter count exponentially since 2017.)

Speaking more broadly, LLMs don't have to "hit a wall" in scaling to become uneconomical. If incremental improvement continues to come at exponential cost, however, then the fundamental value argument falls apart.

Setting all that aside, even presuming that model performance scales linearly with dimensionality, there are just fundamental limits to the size of the training corpora. Quality training data is not unbounded. Given the same size corpus of training data, there's a hard theoretical limit to how much meaning and inference a model can wring out of it.

And then there are other issues with the whole business model. For one thing, it's predicated on continual full-scale retraining to achieve even modest gains in skill and relevancy. Even topical and targeted learning requires a full retraining. Et cetera.

I think that the next generation of AI will lean more heavily on RL to be useful beyond a few months. I also think that the energy requirements of a particular technology are a good proxy for whether it's got a realistic future.
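The diminishing-returns argument above can be made concrete with a rough back-of-envelope sketch, assuming loss follows a power law in training compute (the constants here are illustrative placeholders, not fitted values from any published scaling study):

```python
# Under a power-law scaling curve, loss ~ A * C^(-alpha): each further
# 10x of compute buys a smaller absolute improvement than the last.
# A and alpha are made-up illustrative numbers, not measured ones.
A, alpha = 10.0, 0.05

def loss(compute):
    """Hypothetical loss as a power law in training compute."""
    return A * compute ** -alpha

prev = loss(1.0)
for exponent in range(1, 6):
    cur = loss(10.0 ** exponent)
    print(f"10^{exponent}x compute: loss {cur:.3f} "
          f"(improvement {prev - cur:.3f})")
    prev = cur
```

Running this shows the improvement per 10x of compute shrinking on every step, which is the "incremental gains at exponential cost" shape described above.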

emp17344Mar 29, 2026, 4:45 AM
Why do you believe progress is currently exponential? There’s one dubious chart showing “exponential growth” in a single narrow domain, and otherwise zero evidence to suggest exponential improvement.
danarisMar 29, 2026, 7:29 AM
The evidence is the last 9 years of scaling.

The curve flattened out years ago. The exponential was going from GPT-2 to GPT-4 (or thereabouts). After that, it was painfully obvious to anyone observing without a vested interest in believing otherwise that the progress had slowed.

Now, it's not just that progress has slowed: it's that the exponential has reversed. In order to get marginal gains, they have to throw exponentially more hardware at the training.

functional_devMar 29, 2026, 11:22 AM
Even if training is hitting a wall, I think they're shifting more toward the reasoning phase to get better results... and that is inference-time compute scaling.
righthandMar 29, 2026, 4:45 AM
In my experience the models haven't gotten any better, just the hype.
heavyset_goMar 29, 2026, 5:26 AM
And companies know this, hence the heavy astroturfing. If their new product has minimal improvements, they'll just gaslight you into thinking otherwise.
jen729wMar 29, 2026, 4:31 AM
> It's a physical impossibility to bring that much power online that quickly.

China begs to differ.

monodeldiabloMar 29, 2026, 4:41 AM
I played a role in China's shift to renewables. It's been decades in the making.
deckar01Mar 29, 2026, 4:58 AM
They could get lucky, make a break through in robotics, and vertically integrate power generation into their business model with minimal human labor.
marcosdumayMar 29, 2026, 3:36 AM
Yeah... The AI industry will die in the shadow of the Iran war, and there will forever be some people claiming that it was healthy all around and would have led to world change if the rest of the economy hadn't blown up.
MelatonicMar 29, 2026, 3:29 AM
If it does there is gonna be a lot of cheap second hand hardware out there for those who want to build something cool
CamperBob2Mar 29, 2026, 3:56 AM
Already got my 440 3-phase hookup scheduled. That NVL72 rack ain't gonna run on sunshine and pixie dust
essephMar 29, 2026, 4:36 AM
Will anybody be able to power it?
calvinmorrisonMar 29, 2026, 3:44 AM
I'm gonna scoop my own /8 and lock a 100 year colo lease
spl757Mar 29, 2026, 6:23 AM
A bust that the poor and middle-class will foot the bill for.
atleastoptimalMar 29, 2026, 4:47 AM
People really really don’t understand the implications of AGI.

Whether or not you believe we will reach it in a few years, we are certainly way closer today than we were even two years ago.

The possibility of genuine AGI obliterates all the financial or energy-related worries; they pale in comparison to the ultimate impact of such a technology.

However, yes, if you believe AGI is not possible or won’t arrive in the coming decade then all the data center buildup seems foolish.

tipiiraiMar 29, 2026, 5:09 AM
I believe AGI is mostly a marketing term, so the data center buildup definitely seems foolish.
danarisMar 29, 2026, 7:36 AM
First, you have to define "AGI".

Next, you have to have a clear path to reaching it.

Then, you have to have the resources to actually walk that path.

Only with all three of those can you make any credible claim that AGI is near.

As it stands, we have none of them—and the lack of the second is the most damning. It's very, very clear at this point that just scaling up the existing LLMs is not going to reach some critical mass and result in AGI, like the serendipitous sapience of Mycroft in The Moon Is A Harsh Mistress.

Given that, any path to AGI necessarily includes some new breakthrough on it (or more than one). And by their essential nature, breakthroughs are not something you can predict or schedule. Indeed, you cannot even be guaranteed that they will ever happen. (It is likely, assuming that it is physically possible to build AGI, that we will figure out how at some point...but not guaranteed.)

surgical_fireMar 29, 2026, 4:50 AM
"any day now"

Gotta love this argument. Top it off by saying anyone skeptical is a fool, because of course.

atleastoptimalMar 29, 2026, 4:51 AM
I'm sure my argument would have no merit if you ignored the thousands of advancements made in AI over the past few years
CamperBob2Mar 29, 2026, 6:23 AM
I think the biggest case for a bearish attitude towards AGI is simply that we don't take advantage of the intelligence we already have. Look at our elected human leaders, for Pete's sake.

If we had access to AGI today, we'd just find novel and interesting ways to ignore it, enslave it, gimp it, and/or bias it.

sphMar 29, 2026, 8:09 AM
What, your preference would be to just unleash it upon the world? I wish the average software engineer had any foundation whatsoever in humanities and philosophy before being allowed to make such decisions, but alas, we are doomed.
surgical_fireMar 29, 2026, 10:53 AM
Really?

The last major advancement was probably GPT-3, at least if we are talking about the LLM companies, the ones involved in the current data center boom.

What we experienced after that were marginal improvements of the same technology. Yes, the current models are better than what OpenAI put out at the time of GPT-3, but none of it was revolutionary (and the gains have been less and less perceptible in newer versions).

We might be as far from AGI as we were in 2022. I think we are multiple revolutions in technology away from it.

amanziMar 29, 2026, 6:04 AM
I hope it goes bust soon. I need to buy some RAM and an SSD for my PC...
adrianwajMar 29, 2026, 5:06 AM
In other news: "HUT 8 Builds Flex Data Centers For AI, Bitcoin"

https://catenaa.com/markets/cryptocurrencies/hut-8-builds-fl...

And they'll also do "high-performance computing."

Yet, I think Sun's early-2000s vision "the network is the computer" is finally coming to pass, and these data centers will all end up becoming multi-use. Want access to apps running with 128GB of memory? Fine.. it'll just be on a thin client with a data center powering it (and everything else it does.)

It's not a bad model. As I've mentioned previously, I think the client side will see a new era of all-in-one modular SBCs (medium clients). These can become thin clients for really beefy applications too, ones that don't have to be "local-first" and can thus be "cloud-enabled."

It'd also be interesting to see crypto become more dynamic. Like making it super easy to issue a token for say an upcoming event, or better yet, a new invention looking for early adopters and supporters like Rodin Coils. The big data centers on the backend can make it secure. Just speculating. So the "big iron" compute won't ever be wasted, just repurposed dynamically.

All these mad-scientist inventions will come from unemployed geniuses and tin-foil hatters, some of whom may actually be right. Let's see if they can find a way to vastly speed up radioactive decay with lasers, but, letting the bankers be fine with it all.

augsteinMar 29, 2026, 5:44 AM
Imo most AI compute will happen on-device in the near future.
amanziMar 29, 2026, 6:03 AM
Except the price of personal computing is skyrocketing due to the build out of these data centres! I'd love to build a personal computer to run local models, but it's just not affordable any more.
cammikebrownMar 29, 2026, 2:58 AM
I hope so!
woeiruaMar 29, 2026, 3:08 AM
Better hope not. If the AI bubble pops it’s going to make the dotcom bubble look like a tiny divot in the road.
elevationMar 29, 2026, 3:23 AM
The .COM bubble was more than a divot because .COMs in so many industries employed so many people. There was amazon.com, but also pets.com, lowermybills.com, gateway.com. But if our economy somehow loses access to AI (rationing due to wartime efforts? sabotage by a foreign nation? simply not enough grid power to turn them on at the price people are willing to pay?) I would probably need to hire more coders to get the equivalent work done.
Spooky23Mar 29, 2026, 3:46 AM
AI is driving trades, materials, real estate, all sorts of downstream stuff.

The rest of the economy is dead. Oracle is dead without OpenAI. Remember that unlike the dotcom era, none of these companies are public. So when it pops, you'll see private credit and PE funds implode, which could bring down banks with unhedged exposure. The headlines talk about JP Morgan (which likely has the risk managed), but regional banks got into that game in a big way over the last couple of years.

vaginaphobicMar 29, 2026, 4:50 AM
[dead]
mjdMar 29, 2026, 4:38 AM
Did amazon.com go bust? Seems like I heard they were still in business as of a couple of years ago at least.
starkeeperMar 29, 2026, 3:16 AM
How would this be a bad thing?
jen729wMar 29, 2026, 4:32 AM
Because everyone's retirement depends on the stock market. If you're unlucky and your portfolio vests the day after a massive crash, that can have a very meaningful impact on the rest of your life.
EkarosMar 29, 2026, 10:58 AM
Maybe that will make everyone think about what is going on with this retirement plan built on stock-market growth and the expectation that there will be something to buy with that "wealth" in the future...
emp17344Mar 29, 2026, 4:46 AM
If it’s a bubble, it has to pop sooner or later. It’s better if it pops now before growing even larger.
calvinmorrisonMar 29, 2026, 3:45 AM
Like New Caledonia the wealth of our nation has been pumped into a get rich scheme looking for a new world
SilverElfinMar 29, 2026, 3:21 AM
I want to see these overly powerful tech oligarchs fail too. But one issue is that all of us are tied to their performance to some extent. Our investments are exposed to them. Your 401k probably has funds that include them. When they fail, it hurts others too.

It's also why SpaceX wants to be included in index funds as soon as possible after they go public. I recall the rules may be revised to support this, meaning everyone who has money saved in those funds will automatically be tied to the fate of SpaceX.

johnvanommenMar 29, 2026, 4:12 AM
[dead]
cgioMar 29, 2026, 3:20 AM
We already have our jobs on the line with AI, right? How will a crash be worse in personal terms?
falkensmaizeMar 29, 2026, 3:59 AM
There's a lot of people who think the only thing keeping us out of a serious recession or even depression is AI investment.
danarisMar 29, 2026, 7:39 AM
The only alternative is to keep feeding the bubble deliberately, while knowing that it's a bubble.

And no bubble can last forever.

And the longer they go, and the larger they get, the worse the fallout is when they finally pop.

DesiLurkerMar 29, 2026, 3:31 AM
Why? It's all private money funding AI at the moment. Sure, some data centre realty companies would go belly up, but you would be surprised to find out how much of it is big headlines and very little action. There are a lot of installations and GPUs not yet brought online but marked as sold because of... well, physical-world delays.

Let me put it this way: IF AI is a bubble, then I'd like it to go bust ASAP instead of dragging along, going public, and then us discovering BS/creative-accounting revenue in S-1 filings. By then it would be much worse. Right now VCs and PE firms will absorb it all.

The thing with dot-com was that there was actual public-market corruption & euphoria. That made the bust painful for everybody. Right now it's big tech & PE, who have heavy cash reserves and margins to burn through. I'd much rather have them take it than the average 401k.

essephMar 29, 2026, 4:40 AM
> The thing with dot-com was that there was actual public market corruption & euphoria.

Just like now, the financial cup game is insane. Committing money to a company that plans on doing a thing if it can get another company to do a thing, and then that company is leveraging those cumulative possibilities in its own wager. The speculation is out of control.

mikeweissMar 29, 2026, 3:44 AM
Of course it's going to pop!
zerosizedweasleMar 29, 2026, 4:17 AM
[dead]
JamesbeamMar 29, 2026, 8:07 AM
I hope not.

Who’s going to pay me otherwise, becoming the chief security officer aboard the Altman-Musktani vessel USCSS Shiba Inu?

You're all going to eat rats on a stick while I lunch on the charred meat of some hostile tech CEOs' neurodivergent talent, roasted to perfect crispiness with my Boring Company flamethrower arm attachment.

So please all keep paying the magic word generator companies, so we can replace most of you and your miserably inefficient human production cycles, including eight hours a day not working because you lie around unconscious, to become better human livestock, eh, I mean valued "Human Resources".

/s

starkeeperMar 29, 2026, 3:17 AM
[flagged]
nick49488171Mar 29, 2026, 2:57 AM
No probably not
VladVladikoffMar 29, 2026, 3:25 AM
I replaced a $22/hr worker entirely with AI. And it costs me about $0.18/hr instead. The AI does a better job, is more reliable and consistent. The human was constantly behind schedule, made frequent mistakes, and also humans get sick, or call off work for other reasons.

So yes, AI is a bubble, but this bubble has generated value, it’s not at all like 2008.

monodeldiabloMar 29, 2026, 3:56 AM
$0.18/hr is the (massively) subsidized price of AI services. Once these companies are required to turn a profit for their investors, they'll raise the price. Then the math doesn't look so lopsided. We're already seeing this process unfold with token windows and ad rollout.
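To put numbers on how lopsided the comparison is, here is a quick sketch; the $22/hr and $0.18/hr figures are the commenters' own, and the markup scenarios are hypothetical:

```python
# How much could the AI price rise before the $22/hr human wins?
# Rates come from the thread; the markup scenarios are made up.
human_rate = 22.00   # $/hr, from the parent comment
ai_rate = 0.18       # $/hr, the current (possibly subsidized) price

break_even = human_rate / ai_rate
print(f"break-even markup: {break_even:.0f}x")

for markup in (2, 5, 10, 50):
    repriced = ai_rate * markup
    verdict = "still cheaper" if repriced < human_rate else "human wins"
    print(f"{markup:>3}x markup -> ${repriced:.2f}/hr ({verdict})")
```

On these figures the price would have to rise over a hundredfold before the human wins on cost alone, which is why the argument in the thread turns on quality, subsidies, and hidden costs rather than on the headline rate.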
joegibbsMar 29, 2026, 4:14 AM
It's not that subsidised; this is just wishful thinking. You can run a local model like Qwen for equivalent prices. You might see it go up to $0.50/hr, but you're definitely not going to see it at $22.
monodeldiabloMar 29, 2026, 4:39 AM
I do run open models locally, but let's not fool ourselves into thinking that they're functionally competitive. I'm extremely skeptical of anybody claiming they've obviated a $22/hr job with an open model. Qwen is a big step down in capability. I can play with something like k2.5 for a while, but if I want real work done I'm going back to a frontier model, which has significant runtime requirements for inference.

You're also ignoring the cost of purchasing and amortizing dedicated hardware in your local model example.

It's not an apples-to-apples comparison.

kcbMar 29, 2026, 4:19 AM
Inference isn't really that expensive; it's the training of new foundational models that is. With whatever highly optimized setup the big providers are using, they should be able to pack quite a lot of concurrent users onto a deployment of a model. Just think, too: it's very possible their use case would be served just fine by a 100B model deployed to a $4,000 DGX Spark.
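The amortization point can be sketched numerically; only the $4,000 hardware price comes from the comment, and every other input below (lifetime, concurrency, power draw, electricity rate) is a hypothetical round number for illustration:

```python
# Rough amortization of a fixed-price inference box across concurrent users.
hardware_cost = 4000.0   # $, the DGX Spark figure from the comment
lifetime_years = 3       # assumed depreciation window
concurrent_users = 20    # assumed sustained concurrency
power_watts = 300        # assumed average draw
electricity = 0.15       # $/kWh, assumed

hours = lifetime_years * 365 * 24
hw_per_user_hour = hardware_cost / (hours * concurrent_users)
power_per_user_hour = power_watts / 1000 * electricity / concurrent_users
total = hw_per_user_hour + power_per_user_hour
print(f"~${total:.4f} per user-hour")
```

Under these (generous) assumptions the per-user-hour cost lands around a penny, well under even the $0.18/hr figure quoted upthread; the point is only that self-hosted inference amortizes quickly, not that these specific numbers are achievable.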
chrisccMar 29, 2026, 3:43 AM
Just curious, what did your human worker do that you were able to entirely automate?
floralhangnailMar 29, 2026, 4:04 AM
My bet is something administrative, like reminding people to approve their timesheets for payroll. AI wouldn't be needed to replace that job though, just a recurring calendar event.
falkensmaizeMar 29, 2026, 3:57 AM
I too, want to hear details about what this person did that they could be replaced completely with LLMs.
akomtuMar 29, 2026, 3:39 AM
The value is negative to that worker, apparently.