Graphics cards especially are very unreliable and frequently die within a few months of purchase. But when you can buy a whole PC for the price of one modern video card, many don't have a choice.
https://aliexpress.com/w/wholesale-intel-xeon-processors.htm...
Likewise for the 1060; it's still going strong. I upgraded to a 3060 around the time my younger brother decided he wanted a PC, so he is now using it without any issues. About 10 years of use out of it, plenty more to go.
GPUs are pretty damn resilient if you aren't pissing around with them.
Don't look at the branding. Look at the core type, count, and speed (maybe).
It's been a while since I shopped Intel, but they used to typically release a low-core-count, lower-clock-speed Pentium/Celeron on the mainstream cores, often with no hyperthreading. These were typically low cost and could be a good value: you'd get decent single-core performance because it's the newest architecture, and multicore performance would be iffy, but you can't have everything.
> N-class CPUs
These are definitely worth avoiding most of the time. Usually twice the cores, but much less performance per clock, so they never feel fast for interactive work. But they make sense for some situations. Some of these get an N3 branding to trick people looking for i3s.
They may not be ideal for desktops, but they are great low power home server CPUs. In fact, they are much better than ARM alternatives like Raspberry Pis for the money.
The peak of the Super Socket 7 performance CPUs was reached when AMD released the + versions of those chips, the K6-2+ and K6-3+. Those were initially designed for laptops, with lower power consumption and an enhanced instruction set, but they quickly became common in typical overclockers' setups.
I got myself a K6-3+ that I was able to overclock to around 600MHz, probably on an ASUS motherboard.
Back then AMD was fighting so hard to get market share that you could order all types of merchandise from AMD for free, like posters, stickers, and CPU badges, and they would even ship it for free from the US to Europe. I remember always bringing some to hacker meetings.
Do you recall how long you used the platform or your next upgrade choice? :)
(586 became Pentium, so 686 would be the Pentium Pro/II microarchitecture.)
My favorite PC I ever built was a dual-CPU Tyan motherboard that eventually held two screaming fast Coppermines. Needed a university copy of Windows 2000 to really make them sing—the Windows 95 series never supported SMP—and it was glorious.
The Athlon was solid but less reliable, with various reboots and glitches. I've kind of had a preference for Intel ever since.
Some of those chipsets were fine and others were less reliable or compatible. The quality of the drivers for each chipset may also have mattered.
RISC architecture is gonna change everything.
1974: Intel 8080
1978: Intel 8086
1982: Intel 80286
1985: Intel 80386
1990: Intel 8010386
1995: Intel 801040386
2005: Intel 80107045386
2025: Intel 8.010207659386e12

But still, internally we call it i586, because that's the way it is. So is the Pentium MMX, which I reckon is called i686.
> The name invoked the number five, but was completely trademarkable, unlike the number 586.
But marketing was a large part of the reason that they started caring so much at that particular time. The Pentium line was the first time Intel had marketed directly to the end users¹², in part as a response to alt-486 manufacturers (AMD, Cyrix) doing the same with their products, like clock-doubled units compatible with standard 486/487-targeted sockets (which were cheaper and, depending on workload, faster than the Intel upgrade options).
--------
[1] this was the era that “Intel Inside (di dum di dum)” initially came from
[2] that was also why the FDIV bug was such a big thing despite processor bugs³ being, within the industry, an accepted part of the complex design and manufacturing process
[3] For a few earlier examples: a 486 FPU bug resulted in what should have been errors (such as overflow) being silently ignored. Another (more serious) one in the trig functions resulted in a partial recall, with the rest of that line being marked down as 486SX units (with a pin removed to effectively disable the FPU). Similarly, an entire stepping of 386 chips ended up sold as “for 16-bit code only”. Going further back into the 8-bit days, some versions of the 6502 had a bug or two in handling things (jump instructions, loading via relative references) that straddled page boundaries; these were mitigated in code by being careful with code/data alignment (no recalls, just errata published).
I believe the original Pentium introduced a second pipeline, which required a compiler to optimize for it to achieve maximum performance.
AMD actually made successful CPUs based on Berkeley RISC, similar to SPARC (they used register windows). The AMD K5 had this RISC CPU at its core. AMD bought NexGen and improved their RISC design for the K6 then Athlon.
Bob Colwell (mentioned elsewhere ITT) wrote a fascinating technical history of the P6: The Pentium Chronicles.
Around the same time, but I’d classify as separate stumbles.
"Some companies, notably Dell, remained Intel-only well into the 21st century,"
Dell was receiving $1 billion a year in bribes from Intel https://247wallst.com/consumer-electronics/2007/02/02/michea...
"The documents filed in District Court claim that there were $1 billion in kickbacks and payments."
That was the only way to make the big boys plunge into the Pentium 4 with its Rambus fiasco.
It's certainly not the same kind of OoO. They had register renaming¹, but only enough storage for a few renamed registers, and they didn't have any kind of scheduler.
The lack of a scheduler meant the execution units still executed all instructions in program order. The only way you could get out-of-order execution was when instructions went down different pipelines: a floating-point instruction could finish executing before a previous integer instruction even started, but you could never execute two floating-point instructions out of order. Or two memory instructions, or two integer instructions.
The Pentium Pro, by contrast, had a full scheduler: any instruction within the 40-μop reorder buffer could theoretically execute in any order, depending on when its dependencies were available.
Even on the later PowerPCs (like the 604) that could reorder instructions within an execution unit, the scheduling was still very limited. There was only a two-entry reservation station in front of each execution unit, and it would pick whichever entry was ready (and oldest). One entry could hold a blocked instruction for quite a while as many later instructions passed it through the second entry.
And this two-entry reservation station scheme didn't even seem to work. The later PowerPC 750 (aka G3) and 7400 (aka G4) went back to single-entry reservation stations on every execution unit except for the load-store units (which stuck with two entries).
It's not until the PowerPC 970 (aka G5) that we see a PowerPC design with substantial reordering capabilities.
¹ Well, on the PowerPC 603, only the FPU had register renaming, but the POWER1 and all later PowerPCs had integer register renaming
https://en.wikipedia.org/wiki/Tomasulo's_algorithm
Took a while until transistor budgets allowed it to be implemented in consumer microprocessors.
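To make the idea concrete, here is a toy sketch (my own illustration, not any real CPU's implementation) of Tomasulo-style dynamic scheduling: instructions wait in reservation stations until their source operands are ready, so independent work can complete out of program order. The instruction names, registers, and latencies below are made up for the example.

```python
# Toy model of Tomasulo-style dynamic scheduling: instructions wait in
# reservation stations and start executing as soon as their source
# operands are ready, so independent work can finish out of program
# order. (Illustrative sketch only; real hardware adds register
# renaming, a common data bus, a reorder buffer, limited station
# counts, and so on.)

def schedule(instructions):
    """instructions: list of (name, dest, sources, latency_cycles)."""
    written = {dest for _, dest, _, _ in instructions}
    # Registers never written inside the program are live-in, hence ready.
    ready = {s for _, _, srcs, _ in instructions for s in srcs
             if s not in written}
    waiting = list(instructions)   # "reservation stations" (unbounded here)
    in_flight = []                 # (finish_cycle, name, dest)
    completion_order = []
    cycle = 0
    while waiting or in_flight:
        cycle += 1
        # Complete anything whose latency has elapsed.
        for done in [f for f in in_flight if f[0] == cycle]:
            in_flight.remove(done)
            ready.add(done[2])
            completion_order.append(done[1])
        # Issue every waiting instruction whose operands are now ready.
        for ins in [w for w in waiting
                    if all(s in ready for s in w[2])]:
            waiting.remove(ins)
            in_flight.append((cycle + ins[3], ins[0], ins[1]))
    return completion_order

# A slow divide, a dependent add, and an independent add:
program = [
    ("div",  "r1", ["r2", "r3"], 10),  # long-latency op
    ("add1", "r4", ["r1", "r5"], 1),   # waits on the divide's result
    ("add2", "r6", ["r7", "r8"], 1),   # independent: completes first
]
print(schedule(program))  # ['add2', 'div', 'add1']
```

The independent add slips past the long-latency divide, which is exactly the reordering the in-order Pentium and the limited two-entry reservation stations above could not do in general.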
It wasn't a full pipeline, but large parts of the integer ALU and related circuitry were duplicated so that complex (time-consuming) instructions like multiply could directly follow each other without causing a pipeline bubble. Things were still essentially executed entirely in order, but the second MUL (or similar) could start before the first was complete, if it didn't depend upon the result of the first, and the Pentium line had a deeper pipeline than previous Intel chips to take the most advantage of this.
The compiler optimisations, and similar manual code changes when the compiler wasn't bright enough, were there to reduce the occurrence of instructions depending on the results of the instructions immediately before them, which would bring the pipeline bubble back, as subsequent instructions couldn't be started until the current one was complete. This was also the time when branch prediction became a major concern, and further compiler optimisations (and manual coding tricks) were used to help here too, because aborting a deep pipeline on a mispredicted branch (or just stalling the pipeline at the conditional branch point until the decision is made) carries quite a performance cost.
As the CPU was not out of order, to execute two instructions per clock you had to pair them so that the second one was simple and did not use the output of the first one. Existing code and most compilers of the time were generally bad at this, but things like inner render loops in games could make a lot of use of it if you wrote them in assembly.
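The pairing rule described above can be sketched with a toy model (my own illustration; the real Pentium U/V-pipe rules also restrict which opcodes may pair, which this ignores): two adjacent instructions share a cycle only when the second does not consume the first's result.

```python
# Toy model of Pentium-style dual issue: two adjacent instructions can
# share a cycle only if the second does not read the first's destination.
# (Illustrative sketch; the real pairing rules also restrict which
# opcodes may go down the U and V pipes.)

def cycles(instrs):
    """instrs: list of (dest, sources). Returns issue-cycle count."""
    n, i = 0, 0
    while i < len(instrs):
        dest, _ = instrs[i]
        # Try to pair with the next instruction.
        if i + 1 < len(instrs) and dest not in instrs[i + 1][1]:
            i += 2          # both issue this cycle
        else:
            i += 1          # dependency: the second one must wait
        n += 1
    return n

# A dependency chain: each add consumes the previous result, so nothing pairs.
chain = [("a", ["x"]), ("a", ["a"]), ("a", ["a"]), ("a", ["a"])]
# Two interleaved independent chains: every adjacent pair issues together.
interleaved = [("a", ["x"]), ("b", ["y"]), ("a", ["a"]), ("b", ["b"])]

print(cycles(chain))        # 4 cycles
print(cycles(interleaved))  # 2 cycles
```

Interleaving independent work is exactly the manual scheduling trick the hand-written inner loops relied on.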
Some say that they tried to add 486 and 100, and the result had some digits after the decimal point; that's why they named it Pentium (yes, I know about the FDIV bug).
I well remember the 486SX/2-66's and how terrible they were. I liked to say that Compaq put the "sorry" in Presario.
In the late '90s, between around '96 and '98, I made good money building AMD 486 DX/4-133s. Those things were blindingly fast for the price. As I recall there was even a 150 MHz variant.
Still, my favorite CPU of all time remains the AMD K6/2-450. It wasn't until the Phenom II BE950, a dual core that I unlocked to quad core, that I felt I had a CPU that matched the K6/2-450 in value. Since then I've had a couple of Ryzens for my daily driver/work machine, and couldn't be happier. AMD has done a fantastic job keeping price and performance in tune. But it goes even further if you shop smartly.
Overall, this was an excellent read, and brought back a lot of memories. The 6x86, for example: too much promise for what they actually delivered. And thanks to this article I now know why so many cheap motherboards had their CPUs soldered. It wasn't a technology decision but a legal one. I had no idea of that at the time.
I'm not sure where or why I have so many AM4 machines around, but my kids are still playing games fine on machines with a 1st and 2nd gen Ryzen in them.
I just upgraded another to a Ryzen 5 5500. I plan to get a few more years out of it.
The bang for my buck has been pretty high. I don't believe CPUs go obsolete immediately like they used to.
Also, for Half Life.
Intel's marketing department threw a fit, they didn't want the Pentium 4 competing with their flagship Itanium. Bob Colwell was directly ordered to remove the 64-bit functionality.
Which he kind of did, kind of didn't. The functionality was still there, but fused off when Netburst shipped.
If it wasn't for AMD beating them to market with AMD64, Intel would have probably eventually allowed their engineers to enable the 64-bit extension. And when it did come time to add AMD64 support to the Pentium 4 (later Prescott and Cedar Mill models) the existing 64-bit support probably made for a good starting point.
Bob Colwell talks about this (and some of the x86 team vs Itanium team drama) in his quora answer and followup comments: https://www.quora.com/How-was-AMD-able-to-beat-Intel-in-deli...
But this market segmentation idea just seems absolutely insane to me in a way I’ve never had anyone satisfactorily explain.
It requires Intel to voluntarily destroy the commodity economics that put their CPUs on a rocket ship to domination.
It’s as if they actually bought into the RISC FUD from the 1990’s that x86 was unscalable, exactly when it was taking its biggest leaps.
Think back then it was all about massive databases - that was where the big money was and x86 wasn't really setup for the top end load patterns of databases (or OLAP data lakes).
They even had a chance with mobile chips using Atom, but ARM was too compelling, and I think Apple was sick of the Intel dependency, so when there was an opportunity in the mobile space to not be so deeply tied to Intel, they took it.
The problem with mobile was that it fundamentally required low-margin products, and Intel never (or way too late) realized that was a kind of business they should want to be in.
Was it, though? They made a new CPU from scratch, promising to replace Alpha, PA-RISC, and MIPS, but the first release was a flop.
The only "win" of Itanium that I see is that it eliminated some competitors in the low- and medium-end server market, MIPS and PA-RISC, with SPARC left on life support.
HP-designed later cores were much faster and omitted x86 hardware support replacing it with software emulation if needed, but ultimately IA-64 rarely ever ran with good performance as far as I know.
Pretty sure it was Itanium that finally turned "Sufficiently Smart Compiler" into the curse phrase it is understood as today, and definitely popularized it.
That's exactly what was happening.
Though it helps to realise that this argument was taking place inside Intel around 1997. The Pentium II was only just hitting the market; it wasn't exactly obvious that x86 was right in the middle of making its biggest leaps.
RISC was absolutely dominating the server/workstation space, this was slightly before the rise of the cheap x86 server. Intel management was desperate to break into the server/workstation space, and they knew they needed a high end RISC cpu. It was kind of general knowledge in the computer space at the time that RISC was the future.
Resulting in claims like "it's a pretty good DSP, but hilariously overpriced".
The only way I can rationalize it is that Intel just "missed" that servers hooked up to networks running integer-heavy, branchy workloads were going to become a big deal. OK, few predicted the explosive growth of the WWW, but given the growth of workgroup computing in the early 1990s, shouldn't this have been obvious?
I would tend to believe that the Itanium is an HP product, given that they've always seemed more invested in the platform than Intel.
Cyrix chips get too much hate because of Quake being optimized specifically for the Pentium and its FPU.
On the other hand, I probably wouldn't have recognized the F00F bug mention if you had actually written 0xc8c70ff0.
Not to be too pedantic, but I would contend that at the time it was pretty clear to enthusiasts what the differences were. Everyone in the industry was paying attention to 486s and the cost of a genuine Intel chip. The FDIV bug was on the evening news every night for weeks. AMD and Cyrix vs Intel debates were common.
I agree that it is not obvious now that Pentium came after 486, but at the time, it was clear.
- AMD (K5/K6/K6-2/K6-III)
- Intel (Pentium)
- Centaur (IDT WinChip, later Via C3/C7/Nano)
- Cyrix (6x86, 6x86MX)
- Rise (mP6)
The December 1998 $85 Celeron 300A handily beat the June 1997 $594 Pentium 233 MMX, not to mention that an overclocked one matched the 1998 $621-824 Pentium IIs.

The January 2002 $120 Duron 1300/Celeron 1300 beat the 2000 $1000 Athlon 1000/Pentium III 1000-1133.

The June 2007 $40 Celeron 420, overclockable out of the box from a stock 1.6 to 3.2 GHz, beat the best $1000 CPUs of 2005 (FX-57, P4 EE).

The same goes for graphics chips, starting around 1998/9.
Ah, I remember the days of Intel's fabs doing “too good” a job and many more chips passing tests for faster use being produced than expected. To fulfil orders for the slower chips some of these better batches were marked down and sold as slower units, so if you were lucky you could really push the overclocking and get yourself a performance bargain. You also needed a good motherboard and quality RAM to pull it off reliably, of course.
Sillyrons is what we used to call the massively overclocked Celerons. At uni, a friend of mine made a good bit of pocket money selling an optimisation service for people who didn't feel confident playing with such settings themselves.
And it turns out that for a lot of software, a smaller but faster L2 was actually better than the bigger one. And because there were no fast products that used the Mendocino die, even the fastest of them were sold as Celerons. 300A was particularly nice because very nearly all of them could run at 450, and 100MHz FSB motherboards were widely available to pair with the fixed 4.5 multiplier of the CPU.
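The overclocking arithmetic here is simply core clock = multiplier × FSB, which is worth spelling out (the numbers below are the ones from the comment above; the function name is my own):

```python
# Core clock = multiplier x front-side-bus speed. The Celeron 300A's
# multiplier was locked at 4.5, so raising the FSB was the only knob.
def core_clock(multiplier, fsb_mhz):
    return multiplier * fsb_mhz

print(core_clock(4.5, 66))   # ~300 MHz at the stock 66 MHz FSB
print(core_clock(4.5, 100))  # 450 MHz on a 100 MHz FSB board
```

This is also why a good motherboard mattered: pushing the FSB overclocked the memory bus and peripherals along with the CPU.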
But the time since 2020 feels much faster again. It's scary! But it's exciting.
I always wondered if some of that was to offset the negative publicity from the FDIV bug in the early Pentiums.
It was annoying, as it seemed every computer ad needed to play it, not just Intel's ads.
The author links to an example:
https://dfarq.homeip.net/ibm-486slc2-cpu-when-a-clone-isnt-a...
You then had the 486DLCs, which were even worse. You'd get companies that sold 386 and even 286 systems with '486' chips, constrained by slow 16-bit buses, etc.
But, even a 486SLC wasn't all that bad at the time. It was still much faster than a 386DX for many things (DOOM, for example).
These AT-like machines limited users to 16-bit ISA cards for expansion and a 24-bit address bus (16 MB RAM). But how many consumer PCs used more than that, back when your 32-bit bus options were VLB (video only), MCA (IBM only), or EISA (expensive servers/workstations only)?
Those chips were excellent value for mostly integer work, but had incredibly poor floating-point performance which was a problem for gamers as the 3D era was really getting going around that time. I had one, it did me good service for a few years.
But to be fair, that naming shift probably mattered more than most people realize.
Sure AMD and a few others had back-seat answers, but Intel was literally driving the bus.
https://timeline.intel.com/1993/peripheral-component-interco...
There was a mobile 266MHz Pentium MMX, Tillamook
And it appears there was a 300MHz version according to Wikipedia.
It wasn't much more than a year later that I was able to get a Pentium 100MHz for $2000. It's amazing how fast things progressed back then.
Some years later, back in my home country (Paraguay) I met a lady who had a side business being a VAR builder of desktop PCs. In my country, due to a lot of constraints, there was (and is) quite a money crunch and people tried to cheap out the most when purchasing computers. This gave rise to a lot of unscrupulous VAR resellers who built ultra-low quality, underpowered PCs with almost unusable specs at an attractive price while making a pretty profit. You could still get much better deals in both price and specs, but you had to have an idea about where to look.
Well, back to this lady. She said that during the early 2000s she was on the same line of business, selling beige box desktop PCs at the lowest possible prices. But she said that she loved the AMD K6 and K6/2 architectures because they provided considerable bang for the buck. The cost was affordable, and yet performance was good. Add some reasonable amounts of RAM and storage and you could have a well-performing PC at a good price. The downside, as she said, was that the processors tended to generate lots of heat and thus the fans had to be good. This was especially important in a very hot country like Paraguay. But the bottom line was that AMD K6 line enabled her to offer customers a good deal.
This made me appreciate what AMD did with K6. They really helped to bring good computers to the masses.