It seemed like it might be coming to an end when PCBWay suddenly had no reasonable payment options, but JLC has been great so far (and I believe PCBWay has credit card payments sorted now?)
(Just wish I had far more free time to spend on hobbies; there are so many possibilities now that 3D printing, microcontrollers, and custom PCBs are all so readily available.)
URL: https://blog.mikhe.ch/quake2-on-fpga/part6.md
404 File not found
I found the project on YouTube[1] and wanted to share it, but decided to find a text version for HN, and in the rush to post I failed to check whether the post was even complete. I should've posted the video instead.
"More pictures in the next part.
Next part: coming soon"
I suppose the link came to HN a bit too early.
Nonetheless, impressive project!
Has anyone figured out what the minimum specs for Quake are?
I feel like the first thing everyone does with a computer is determine whether or not it can run Quake, and I'm just wondering: what's the simplest computer that could exist that could still run Quake?
…But people have managed to run Quake on the 486.
And the myth people tell about Quake is that it killed Cyrix, because Quake performance on Cyrix was subpar. But was that true? And if it was true, was that because the Cyrix was slower than a Pentium, or was it because the Quake code had assembly that was hand-optimized for the Pentium FPU pipeline?
Anyway. “Most simple computer that could run Quake” is probably going to include a decent FPU. If you are implementing something on an FPGA, you can probably get somewhere around 200 MHz clock anyway. At which point you can run Quake II.
The timing was brutal for Cyrix. This was right when "Intel Inside" was becoming a meaningful consumer brand signal, and game benchmarks were becoming the primary way consumers evaluated CPU purchases. Quake wasn't just a game, it was the benchmark everyone ran at CompUSA to compare machines. Being demonstrably worse at Quake, regardless of the cause, was a marketing catastrophe.
The real floor for running Quake is basically "does it have a hardware FPU." The 486 DX (with FPU) could do it at low resolution and low framerate. The 486 SX (no FPU, software float emulation) was genuinely painful. The Pentium was the first CPU where it actually felt good.
https://news.ycombinator.com/newsguidelines.html
Your entire comment history seems to be AI generated.
Ultimately what killed Cyrix is that they just couldn't offer enough of a discount vs. Intel to matter, especially with all the lock-in stuff Intel was doing with Dell, Gateway, etc.
Intel Inside was a successful marketing campaign as well. If you were around back then I bet you can imagine the jingle/chord immediately.
It's possible to write a game engine with that limitation, but there's no easy natural conversion from Quake's judicious use of floats to a fully fixed-point codebase. You'd have to redesign and rewrite the entire engine from scratch, basically.
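To make the pain concrete, here's a minimal Q16.16 fixed-point sketch in C. This is my illustration, not Quake code (Quake itself uses floats throughout), and the type and helper names are made up:

```c
#include <stdint.h>

/* Q16.16 fixed point: 16 integer bits, 16 fractional bits. */
typedef int32_t fix16;

#define FIX_ONE    (1 << 16)
#define INT2FIX(i) ((fix16)((i) * FIX_ONE))
#define FIX2INT(f) ((int)((f) >> 16))

/* Addition is free, but every multiply needs a 64-bit
   intermediate plus a shift, and can still overflow when
   the operands are large. */
static fix16 fix_mul(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a * b) >> 16);
}

/* Division loses even more range: the numerator is pre-shifted,
   so |a| must fit in 48 bits or the result is garbage. */
static fix16 fix_div(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a << 16) / b);
}
```

Every dot product, perspective divide, and lighting blend in the engine would need this kind of per-expression range analysis, which is why the conversion can't be mechanical.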
My understanding from talking to the coders at the time was that Unreal's software renderer was a huge advantage as a starting point. They were able to reuse a lot of the portal rendering stuff as setup on the R3K CPU, but none of the rasterization. That had to go to the graphics core, which was a post-setup 2D engine that, in addition to the usual sprites, could do tris and quads.
We had a budget of about 3k polygons post-clipping, and having two enemies on screen would burn about half of that. The other huge limit was that the texture cache was tiny, so we couldn't do lightmaps. Our lighting was baked in at the vertex level, and it just was what it was.
There's a bit more info here: https://www.terrygreer.com/unrealpsx.html
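For anyone who hasn't worked with it, "baked at vertex level" means roughly the following. This is a generic sketch, not UnrealPSX code; the names and the simple Lambert term are my assumptions:

```c
#include <math.h>
#include <stdint.h>

typedef struct { float x, y, z; } vec3;
typedef struct { vec3 pos; vec3 normal; uint8_t light; } vertex;

static float dot3(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Offline bake: one Lambert sample per vertex, stored as a byte.
   At draw time the rasterizer just Gouraud-interpolates that byte
   across the triangle, so no lightmap texels ever have to fit in
   the (tiny) texture cache. */
static void bake_vertex_light(vertex *v, vec3 light_dir, float ambient)
{
    float l = ambient + fmaxf(0.0f, dot3(v->normal, light_dir));
    v->light = (uint8_t)(fminf(l, 1.0f) * 255.0f);
}
```

The trade-off is exactly what the parent describes: lighting resolution is capped by vertex density, so you lose the crisp shadow edges lightmaps give you.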
I imagine the situation with Quake was comparable. The BSP stuff would carry right over, but I can't imagine they got proper lightmapping working at the time. They'd also have needed some sort of solution for overdraw, as Quake's PVS was a lot looser than Unreal's portal clipping.
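For context on why a loose PVS means overdraw: Quake stores, per leaf, a run-length-encoded bit vector of every leaf that might be visible from it, and everything whose bit is set gets submitted for drawing whether or not it ends up on screen. Below is a paraphrase of Mod_DecompressVis from the GPL'd Quake source (simplified, buffer management omitted):

```c
#include <stdint.h>

/* Quake PVS: one bit per leaf, with runs of zero bytes compressed
   as (0x00, count).  Every leaf whose bit survives decompression
   gets drawn -- a loose PVS means lots of set bits and therefore
   lots of overdraw. */
static void decompress_vis(const uint8_t *in, uint8_t *out, int numleafs)
{
    int row = (numleafs + 7) >> 3;      /* bytes in the bit vector */
    uint8_t *end = out + row;

    while (out < end) {
        if (*in) {                      /* literal byte */
            *out++ = *in++;
        } else {                        /* zero run: next byte is count */
            int count = in[1];
            in += 2;
            while (count-- > 0 && out < end)
                *out++ = 0;
        }
    }
}
```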
Maybe I would prefer to rip out the integer multiplication unit first, before ripping out the FPU.
The DX2s _were_ a significant improvement over the 486DX, but I'll admit I might just be remembering the excitement of getting to play Quake at all! The framerate may have been 15-20 fps and I just dealt with it.
The minimum requirements on the box were apparently a Pentium 75 MHz with 8 MB RAM (DOS) or 16 MB RAM (Win95).
Q1 is playable, though not by any modern understanding of that word, on an Am486DX4/100 with 16 MB RAM and an S3 Trio64V+. You can disable sound effects for a couple more FPS.
Mostly you would be fine, because the level design in Q1 heavily tends toward closed spaces and corridors, with only glimpses of the outside and the occasional hall or cavern. That said, the Necropolis becomes a test of strategic thinking as a turn-based FPS, and Ziggurat Vertigo is unplayable.
The diagonal orientation of the DDR3 chip and the corresponding diagonal traces is, I suspect, a choice the author made to ease the layout process: it's more likely it was hand laid out to get traces of somewhat similar length with a minimum of fuss, followed by a length-matching tool. A non-standard orientation can cause issues with pick-and-place machinery, which usually handles 90 degrees fine, _often_ 45 degrees fine, but (AFAIK) _rarely_ anything else; that's not a problem for the author, though, because he's assembling the board himself. A diagonal IC also usually results in wasted space, which you can see in the empty areas of the resulting board. A 90-degree orientation might have allowed for a few more decoupling capacitor banks, but since his board works, who am I to sit here and judge?
I didn't use an autorouter: I haven't found any reasonably working KiCad plugin for it, and didn't want to buy any commercial software for a hobby project.
(I am the author)
This sounds like nonsense. Pick-and-place machines don't pick up components perfectly deterministically. There is always a tilt and an offset when picking a part up, which is why a computer-vision system has to determine the part's orientation and center. The machine then compensates for the error by moving and rotating the part accordingly.
That way they all have the same delay properties.
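Rough numbers, for anyone curious why "somewhat similar length" is good enough here. This assumes FR-4 stripline and the textbook rule of thumb, not a field solver:

```c
#include <math.h>
#include <stdio.h>

/* Stripline rule of thumb: t_pd = sqrt(Er) / c.  For FR-4
   (Er ~ 4.3) that is about 6.9 ps/mm, so even a few mm of
   mismatch is small next to, say, a DDR3-800 bit time of
   1250 ps. */
int main(void)
{
    const double c_mm_per_ps = 0.2998;  /* speed of light, mm/ps */
    const double er = 4.3;              /* FR-4 dielectric, assumed */
    double ps_per_mm = sqrt(er) / c_mm_per_ps;

    printf("delay: %.2f ps/mm\n", ps_per_mm);              /* ~6.9 */
    printf("skew for 5 mm mismatch: %.1f ps\n", 5.0 * ps_per_mm);
    return 0;
}
```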
Used Midnight Commander within an SSH client… on my iPhone lol
Being able to click in Midnight Commander makes this surprisingly usable for quick jobs!
Ah, those localhost admins? Sure, if I only had one or two machines then I'd probably have some fancy-schmancy 3D TUI object navigator with direct rendering to VRAM (supported on a whole lot of video adapters, from a list of three). But I have many, many machines to do things on, and all these things are boring, like looking at the logs and mangling the supremely formatted configuration files. And typing `cd /var/log/shitservice` gets boring extremely fast.
Lots of Quake II samples.
But I'm lazy, and just used the Zynq 7020 (dual ARM Cortex-A9 + FPGA) MYIR Z-turn Board V2 for hobbies.
Best regards, =3