Secondly, computing has always been subject to inflation; it cannot escape it. You may not notice it, perhaps because of the increase in performance, but the cost of parts in the same tiers has definitely risen if you look over a long enough period to avoid pricing amortization.
This is especially apparent if you’re a hardware manufacturer and have to buy the same components periodically, since the performance increase that consumers see doesn’t show up for you.
Good point, and that should properly be called inflation in the semiconductor sector. We always have general inflation, but different sectors of the economy exhibit different rates of inflation depending on the driving forces and their strength.
As of today, tariffs are the major driver of inflation, and semiconductors are hit hard because the only high-volume country with a reasonable quality/price ratio has been practically excluded from the US market by export bans and prohibitively high tariffs: that's China, of course.
The other producers are in a near-monopoly situation and are also acting like a cartel, without shame or fear of the law... which isn't there to begin with.
And if the definition was that loose to begin with, then the original comment is even more incorrect, since there have been multiple rounds of demand/scarcity-led price increases.
That's why I just buy something when I need it or when I think the price is reasonable. Nowadays, if I wait for something to get cheaper like I used to do in the 90s-00s, chances are it's going to get even more expensive as time passes, not cheaper.
The days when you would wait 6-12 months and get the same thing for 50% off, or a new thing with 50% more performance for the same price, are over. Now there's only one major semiconductor fab making everything, three RAM makers, three flash makers, two GPU vendors, and two CPU vendors controlling all supply, and I'm competing with datacenters for it.
Intel Arc GPUs exist, I have a B580 in my desktop and it works well enough.
GPU prices were horrendous when crypto happened (it turned into a persistent problem, but it was still because of crypto).
DDR4 jumped because manufacturers started focusing on DDR5, even before this latest news.
I could probably find more examples but hey
It seems to have been the opposite for some components like GPUs, though, for years (well before the AI boom).
So it's technically not AI "ruining everything" here, but there was a nice, long before-time of reasonable pricing.
I'd been planning to upgrade my desktop as a Christmas present for myself.
Now that I have the cash and was looking at actually buying my PCPartPicker list, the cost of the 64GB DDR5-6000 RAM I planned to buy has gone from £300-400 to £700-800+, a difference of almost the entire price of the 9070 XT I just bought to go in the computer.
I guess I'll stick with my outdated AM4/X370 setup and make the best of the GPU upgrade until RAM prices stop being a complete joke.
Manufacturers aren't dumb; they lost a lot of money in the last cycle and aren't playing that game anymore. No additional capacity is planned, and OEMs are simply redirecting existing capacity towards high-margin products (HBM) instead of chasing fragile demand.
Because of the (c) on images, or just because he bought RAM?
This will leapfrog cancer research, materials research, etc.
That sounds like a lot, almost unbelievable, but the scale of all of this kind of sits in that space, so what do I know.
Nonetheless, where are you getting this specific number and story from? I've seen it echoed before, but no one has been able to trace it to any sort of reliable source that doesn't boil down to "secret insider writing on Substack".
https://news.samsung.com/samsung-and-openai-announce-strateg...
https://www.tomshardware.com/pc-components/dram/openais-star...
The article says: "OpenAI’s memory demand projected to reach up to 900,000 DRAM wafers per month", but not by when, or what current demand is. If this is based on OpenAI's >$1T of announced capex over the next 5 years, it's not clear that money will ever actually materialize.
Last year I bought two 8GB DDR3L RAM sticks made by Gloway for around $8 each; now the same stick is priced at around $22, roughly 2.75 times the price (a 175% increase).
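For the record, the arithmetic on that jump, as a minimal sketch using the two prices quoted above:

    # Price jump from the quoted $8 to $22 per stick.
    old_price, new_price = 8.0, 22.0
    ratio = new_price / old_price                              # 2.75x the original price
    pct_increase = (new_price - old_price) / old_price * 100   # 175% increase
    print(f"{ratio:.2f}x, +{pct_increase:.0f}%")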
SSD makers are also increasing their prices, but that started a year or two ago, and they did it again recently (of course).
It looks like I won't be buying any first-hand computers/parts until prices return to normal.
Yes, but otherwise you'd get huge shortages and would be unlikely to be able to buy it at all. Also, a significant proportion of the surplus currently going to manufacturers etc. would go to various scalpers and resellers.
But they are all exactly the same chips. The ECC magic happens in the memory controller, not the RAM stick. Anyone buying ECC RAM for servers is buying on the same market as you building a new desktop computer.
Since DDR5 has 2 independent subchannels, 2 additional chips are needed.
Even when the sticks are completely incompatible with each other? I think servers tend to use RDIMMs while desktops use UDIMMs. Personally, I'm not seeing as steep an increase in (B2B) RDIMMs compared to the same stores selling UDIMMs (B2C), but I'm also looking at different stores tailored towards different types of users.
For those who aren't well versed in the construction of memory modules: take a look at your DDR4 memory module and you'll see 8 identical chips per side if it's a non-ECC module, and 9 identical chips per side if it's an ECC module. That's because, for every byte, each bit is stored in a separate chip; the address and command buses are connected in parallel to all of them, while each chip gets a separate data line on the memory bus. For non-ECC memory modules, the data line that would be used for the parity/ECC bit is simply not connected, while on ECC memory modules it's connected to the 9th chip.
(For DDR5, things are a bit different, since each memory module is split in two halves, with each half having 4 or 5 chips per side, but the principle is the same.)
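To make the chip-count arithmetic concrete, here's a minimal sketch assuming x8 DRAM chips (8 data lines per chip) and one rank per side, as in the description above:

    # Chips per rank = (data bits + ECC bits) / data lines per chip.
    def chips_per_rank(data_bits, ecc_bits, chip_width=8):
        return (data_bits + ecc_bits) // chip_width

    # DDR4: a single 64-bit channel per module.
    print(chips_per_rank(64, 0))      # 8 chips per side (non-ECC)
    print(chips_per_rank(64, 8))      # 9 chips per side (ECC)

    # DDR5: two independent 32-bit subchannels, each with its own ECC chip,
    # hence the 2 extra chips mentioned above.
    print(2 * chips_per_rank(32, 0))  # 8 chips per side (non-ECC, 4 per half)
    print(2 * chips_per_rank(32, 8))  # 10 chips per side (ECC, 5 per half)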
And that was caught because we had ECC. If not for that, we'd be replacing drives, because the metrics made it look like one of the OSDs was slowing to a crawl, and the usual reason for that is a dying drive.
Of course, the chance of that is pretty damn small, but their scale is also pretty damn big.
> China's AI Analog Chip Claimed to Be 3000X Faster Than Nvidia's A100 GPU (04.11.2023)
https://news.ycombinator.com/item?id=38144619
> Q.ANT’s photonic chips – which compute using light instead of electricity – promise to deliver a 30-fold increase in energy efficiency and a 50-fold boost in computing speed, offering transformative potential for AI-driven data centers and HPC environments. (24.02.2025)
https://qant.com/press-releases/q-ant-and-ims-chips-launch-p...
Which, for the most part, would be an irrelevant cost of doing business compared to the huge savings from non-ECC and how inconsequential it is if some ChatGPT computation fails...
Of course, OpenAI isn't buying that either but B200 DGX systems, though that is still the same process at TSMC.
I would not want to rerun a whole run just because of bit flips, and bit flips become a lot more relevant the more servers you need.
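A rough back-of-the-envelope sketch of why scale matters here (the 0.1% per-server figure is an assumption purely for illustration, not a measured rate):

    # Probability that at least one of N servers hits an uncorrected bit flip
    # during a run, if each does so independently with probability p.
    def prob_any_flip(p, n):
        return 1 - (1 - p) ** n

    for n in (1, 10, 100, 1_000, 10_000):
        print(n, round(prob_any_flip(0.001, n), 4))
    # With an assumed p = 0.1% per server per run, roughly 63% of runs on
    # 1,000 servers, and essentially all runs on 10,000 servers, see a flip somewhere.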
Enterprise-wise, used servers have kind of always been cheap (at least compared to MSRP, or even the after-discount price), simply because there are enough companies that want the good feeling of having a warranty on their equipment and yeet it after 5 years.
This is a chassis and fan problem, not a CPU problem. Some devices do need their own cooling if your case is not a rack mount. E.g. if you have a Mellanox chip, those run hot unless you cool them specifically. In a rackmount use case that happens anyway.
There are myriad other factors that go into this, especially just general inflation, which will likely fill the price gap by the time memory costs go down anyway.
I know I'm comparing apples to oranges here (new to used), but I started buying used 1L PCs instead (Lenovo ThinkCentres) for about $20 more than the cost of an RPi 5, but with the benefit of actually coming with the cooling and storage it needs to run, and being upgradable, plus it runs Intel.
The number of times I've had a Pi just self-destruct on me is ridiculous. They are known for melting SD cards, and just this week I had one blow its power regulator over USB power and still get hot enough within 2 minutes to burn me when I touched it. They are considered cheap commodity computing, and they aren't cheap enough for that any more.
There are plenty of non-IoT use cases that are viable with 1GB of general-purpose compute. Hell, I rented an obscenely cheap 512MB VPS until recently, and only abandoned it because its ancient kernel version was a security risk.
Most of my RPi tasks are not memory-bound
At work we have a display with a Pi 3 (not the B) connected, just showing websites in rotation. Even websites with a simple animation are laggy, and startup takes a few minutes.
Both of these use cases don't need more than 1 GB of RAM, but I want the speed of a 4 or 5.
Nowadays you can no longer get the Raspberry Pi Zero for less than 12€ or so. I consider the $5 Raspberry Pi Zero to be among the best values on the market, and there hasn't been anything else that came close.
It's shovels all the way down.