The online games have depressingly (to me) small communities. But they’re still kicking.
The amazing work the Wine team and Valve have done can't be overstated.
Something funny about this statement considering what Proton is.
A lot of actual work went into Proton and into making games work therein.
MS is a slow, lumbering monoculture that has lacked innovation and creativity for a very long time. I don't see freezing APIs or keeping old APIs around (mostly through versioned DLL hell) as some grand accomplishment.
The page was blank when MS wrote upon it.
My guess: an impossibly high effort that would benefit Quake, but perhaps not other games of that era.
Microsoft is the reason Quake works on Proton still today.
Why do we need Windows 11 to support old software when we can use an older version of Windows, in an emulator at that? Playing Quake doesn't require a secure, patched box, and if a secure environment is the point of extreme backwards compat, then endless backwards compatibility seems like a poor way to achieve that goal (sandboxing an old, emulated OS, for example, comes to mind as more reasonable).
Letting Microsoft play this backwards compatibility card feels unhealthy for the evolution of software and the diversification of the industry.
Regularly breaking compatibility basically forces developers into a limited-term license, subscription, or SaaS model in order to pay for the upgrade churn required by the platform.
And a lot of it is just churn. Not evolution, not better, just... different.
It doesn't cull it; you can still run Windows 3.11 or 98SE under emulation just as well as on contemporary original hardware.
If anything, breaking backwards compatibility forces you to run your old software in an "authentic" environment, versus, say, on some hardware/software combination tens of generations removed. Like, why would you want to run SkiFree in Windows 11? It feels like an abomination to me, almost disrespectful to the game. I don't want to see my old programs in Windows 11...
That's mostly how the backwards compatibility works anyway, just under the hood. The OS uses all sorts of compatibility layers to make older software sit on top of, and work with, newer OS versions. It mostly works flawlessly, so you don't think about it unless it doesn't work automatically and forces you to go into the properties and tinker with which compatibility layer to apply manually.
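To make "manually applying a layer" concrete, here's a minimal Win32 sketch in C (a simplification, not the shim engine's actual internals): the same layers the Compatibility tab offers can be forced onto a child process through the __COMPAT_LAYER environment variable, which the launched program inherits. "oldgame.exe" and the chosen layer names are just placeholders.

    /* Minimal sketch: launch a hypothetical legacy program with the
     * Windows XP SP3 compatibility layer plus 256-colour emulation.
     * The shim engine reads __COMPAT_LAYER from the child's inherited
     * environment; this is not how the engine itself is implemented. */
    #include <windows.h>

    int main(void)
    {
        char cmd[] = "oldgame.exe";        /* hypothetical legacy program */
        STARTUPINFOA si;
        PROCESS_INFORMATION pi;
        ZeroMemory(&si, sizeof(si));
        si.cb = sizeof(si);
        ZeroMemory(&pi, sizeof(pi));

        /* Layer names are space-separated, same values the UI uses. */
        SetEnvironmentVariableA("__COMPAT_LAYER", "WINXPSP3 256COLOR");

        if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0,
                           NULL, NULL, &si, &pi)) {
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        return 0;
    }

The Properties dialog stores the same kind of layer list in the registry per executable instead of the environment, but the effect is the same.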
Not good for evolution, but fantastic for diversification. Being able to write a program that solves a problem and be "done" with it is fantastic, but having the platform walk out from under you requires ongoing work. That ongoing work often demands payment...so platforms that constantly change tend to be highly commercialized.
Open source on Android suffers from this. So many "done" apps are no longer compatible.
And the changes to the underlying platform may not be benevolent. Android, for example, deprecated its API for filesystem access and introduced a scoped replacement that was two orders of magnitude slower. They then banned Syncthing, a file-sharing tool, from the Play Store because it doesn't use the latest APIs (the new APIs are so slow that Syncthing would be unusable with them; the bug opened about this hasn't been addressed in the intervening years).
The lesson is that any platform that is a moving target presents a risk to both the developer and the user, as that movement concentrates power with the platform owner in a way that a more slowly moving (or static) platform does not.
All that said, I use Linux 100x as much as I use Windows, because it gives me other kinds of control.
Also, the barrier to entry you're suggesting with a separate install/emulator is pretty high for an average user. It also breaks integration with everything else (e.g., a simple alt-tab will show the VM instead of the two apps running inside it).
Also, a lot of progress is regression, so having an old way to opt back into is nice.
Drawing a few examples from an old Raymond Chen blog post[1], integrations required for seamless operation include:
• Host files must be accessible in guest applications using host paths and vice versa. Obviously this can't apply to all files, but users will at least expect their document files to be accessible, including documents located on (possibly drive-letter-mapped) network shares.
• Cut-and-paste and drag-and-drop need to work between host and guest applications.
• Taskbar notification icons created by guest applications must appear on the host's taskbar.
• Keyboard layout changes must be synchronized between host and guest.
These are, at least to a useful degree, possible. Integrations that are effectively impossible in the general case:
• Using local IPC mechanisms between host and guest applications. Chen's examples are OLE, DDE, and SendMessage, but this extends to other mechanisms like named pipes, TCP/IP via the loopback adapter, and shared memory (see the sketch after this list for why these stay machine-local).
• Using plug-ins running in the guest OS in host applications and vice versa. At best, these could be implemented through some sort of shim mechanism on a case-by-case basis, assuming the plug-in mechanism isn't too heavily sandboxed, and that the shim mechanism doesn't introduce unacceptable overhead (e.g., latency in real-time A/V applications).
Finally, implementing these integrations without complicated (to implement and configure) safeguards would effectively eliminate most of the security benefits of virtualization.
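To make the named-pipe bullet concrete, here's a minimal client sketch in C (the pipe name "demo_pipe" is made up): the "." in the pipe path means "this machine", so an unmodified client like this only ever reaches a server running on the same OS instance, which is exactly what breaks across a host/guest boundary.

    /* Minimal named-pipe client sketch; "demo_pipe" is hypothetical.
     * "\\.\pipe\..." always names the local machine, so this cannot
     * connect to a pipe server running inside (or outside) a VM. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE pipe = CreateFileA("\\\\.\\pipe\\demo_pipe",
                                  GENERIC_READ | GENERIC_WRITE,
                                  0, NULL, OPEN_EXISTING, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE) {
            /* Typically ERROR_FILE_NOT_FOUND: no such pipe on THIS machine. */
            printf("connect failed, error %lu\n", GetLastError());
            return 1;
        }

        const char msg[] = "hello";
        DWORD written = 0;
        WriteFile(pipe, msg, sizeof(msg), &written, NULL);
        CloseHandle(pipe);
        return 0;
    }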
[1] https://web.archive.org/web/20051223213509/http://blogs.msdn...
What about emulation?
But what I really don't get is why we need backwards compat when computers can run computers, and old operating systems hardly demand any resources on a modern computer.
As a concrete example, the source to Quake is available, and that is what has allowed Quake to run on so many platforms; Windows' infamous backwards compatibility has little effect in keeping Quake running. Windows could have broken backwards compatibility and Quake would still run on it.
The amazing part is that you don't need to do this on Windows, whether you have the source or not. I am a Linux user, but for all their faults, Microsoft got their backwards compatibility stuff right; something the OSS world, on average, still needs to be convinced is a desirable thing.
i.e. much less than 1% of all existing games.
At least in order to be playable under Linux. That said, 99% of the games from that era will run perfectly fine with osspd -> PipeWire (install osspd, then just run the game) and 32-bit SDL1 libraries.
Quite a lot of game source is lost entirely even by the original authors.
Not to mention that even if you do have the source, changing the use of an API can be a really expensive software modification project. Even Microsoft hasn't been entirely systematic; you can easily find WinForms control panel dialogs in Win11.
Some embedded Windows apps exist in this space as well: oscilloscopes and other expensive scientific instruments that run Windows XP.
The idea is that it builds on 64-bit Linux with a very simple Makefile and SDL2, so you can start from there as your ground truth, and then have fun. It also removes a lot of cruft, like all the DOS and Windows 95 stuff mentioned in the article.
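For anyone curious what that kind of ground truth looks like, here's a minimal sketch in C with SDL2 (not taken from the port, just the general shape of a software-rendered main loop): keep your own pixel buffer, write into it on the CPU, and let SDL blit it to the window each frame.

    /* Minimal software-rendering loop sketch.
     * Build (assuming SDL2 is installed): cc soft.c $(sdl2-config --cflags --libs) */
    #include <SDL.h>
    #include <stdint.h>

    #define W 640
    #define H 400

    int main(int argc, char **argv)
    {
        (void)argc; (void)argv;
        SDL_Init(SDL_INIT_VIDEO);

        SDL_Window *win = SDL_CreateWindow("softrender", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, W, H, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
        SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                             SDL_TEXTUREACCESS_STREAMING, W, H);
        static uint32_t pixels[W * H];      /* the software framebuffer */

        int running = 1, frame = 0;
        while (running) {
            SDL_Event ev;
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT) running = 0;

            /* "Render" in software: all per-pixel work happens on the CPU. */
            for (int y = 0; y < H; y++)
                for (int x = 0; x < W; x++)
                    pixels[y * W + x] = 0xFF000000u
                                      | (((x + frame) & 0xFF) << 16)
                                      | ((y + frame) & 0xFF);
            frame++;

            /* Hand the finished frame to SDL to put on screen. */
            SDL_UpdateTexture(tex, NULL, pixels, W * sizeof(uint32_t));
            SDL_RenderClear(ren);
            SDL_RenderCopy(ren, tex, NULL, NULL);
            SDL_RenderPresent(ren);
        }

        SDL_DestroyTexture(tex);
        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }

From there, the fun is entirely in what you put inside the per-pixel loop.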
Software renderers are so much fun.
Why?
I tried running Quake 2 on Windows NT 4 before 2000 came out, so around '98/'99, but had an issue because NT lacked DirectX. My memory has faded and I don't remember if the installer failed or the game failed to run. I think it was the former, as I have a recollection of something complaining about missing DirectX.
I do know that multi-processing was implemented in Quake 3 and I specifically ran Windows 2000 for that.
As a result, I was able to play some windowed games without 3D acceleration.