Over Christmas I tried to actually build a usable computer from the 32-bit era. Eventually I discovered that the problem isn't really the power of the computer. Computers have been powerful enough for productivity tasks for 20 years, excepting browser-based software.
The two main problems I ran into were 1) software support at the application layer, and 2) video driver support. There is a herculean effort on the part of package maintainers to build software for distros, and no one has been building 32-bit versions of software for years, even if it is possible to build from source. There is only a very limited set of software you can use, even CLI software, because so many things are built with 64-bit dependencies. Secondly, old video card drivers are being dropped from the kernel. This means all you have is basic VGA "safe-mode" level support, which isn't even fast enough to play MPEG-2 video. My final try was to install Debian 5, which was period-correct and had support for my hardware, but the live CDs of the time were not hybrid, so the ISO could not boot from USB. I didn't have a burner, so I finally gave up.
So I think these types of projects are fun for a proof of concept, but unfortunately are never going to give life to old computers.
> Computers have been powerful enough for productivity tasks for 20 years
It baffles me how usable Office 97 still is. I was playing with it recently in a VM to see if it worked as well as I remembered, and it was amazing how packed with features it is considering it's nearing thirty. There's no accounting for taste, but I prefer the old Office UI to the ribbon, there's a boatload of formatting options for Word, there's 3D WordArt that hits me right in the nostalgia, and Excel 97 is still very powerful and supports pretty much every feature I use regularly. It's obviously snappy on modern hardware, but I think it was snappy even in 1998.
I'm sure people can enumerate here the newer features that have come in later editions, and I certainly do not want to diminish your experience if you find all the new stuff useful, but I was just remarkably impressed by how much cool stuff was packed into the software.
I think MS Word was basically feature-complete with v4.0 which ran on a 1MB 68000 Macintosh. Obviously they have added lots of UI and geegaws, but the core word processing functionality hasn't really changed at all.
(edit to say I'm obviously ignoring i18n etc.)
My dad used to run a whole commercial bank on MS Office 4.0 and a 386. (A small one, but still!)
Small, medium and large colleges in the UK ran on Novell servers and 386 client machines with windows for workgroups and whatever Office they came with. I think the universities were using unixy minicomputers then though. Late 80s early 90s. Those 386 machines were built like tanks and survived the tender ministrations of hundreds of students (not to mention some of the staff).
I love this story where a C64 in Poland runs an auto repair shop: https://www.popularmechanics.com/technology/gadgets/a23139/c...
I still use Office 2010 to this day and feel like absolutely nothing is missing that I truly need. The only issues are that Alt-Tab and multiple monitors have bugs. But functionality? 100%.
[deleted]
It's wild to remember that I basically grew up with this type of software. I was there when the MDI/SDI (Multi-Document Interface / Single-Document Interface) discussion was ongoing, and I remember how much backlash the "Ribbon" interface received. It also shows that writing documents hasn't really changed in the past 30 years. I wonder if that's a good or bad development.
With memory prices skyrocketing, I wonder if we will see a freeze in computer hardware requirements for software. Maybe it's time to optimize again.
Consumer laptops have been frozen on 8GB of RAM for a while already.
Yeah, you can get machines which are higher specced easily enough, but they're usually at the upper end of the average consumer's budget.
Sadly Electron developers will be fired, and C++ and even Rust ones will be highly praised. QT5/6 will be king for tons of desktop software.
One can dream.
Ram shortages are not dreams.
They're not permanent either.
[deleted]
> I was playing with it recently in a VM
With the small caveat that I only use Word, it runs perfectly in WINE and has done for over a decade. I use it on 64-bit Ubuntu, and it runs very well: it's also possible to install the 3 service releases that MS put out, and the app runs very quickly even on hardware that is 15+ years old.
The service packs are a good idea. They improve stability, and make export to legacy formats work.
WINE works better than a VM: it takes less memory, there's no VM startup/shutdown time, and host integration is better: e.g. host filesystem access and bidirectional cut and paste.
I had trouble getting the Office 97 installer working with Wine. Not claiming it’s impossible but I figured just to play with it I could spin up Qemu.
I have MS Office 4.0 installed on my 386DX-40 with 4 MB of RAM and 210 MB HDD, running Windows 3.1, and it is good. Most of the common features are there, it's a perfectly working office setup. The major thing missing is font anti-aliasing. Office 95 and 97 are absolutely awesome.
I do remember running Word on an Am386DX-40 and later an i486DX2-66 and there was an issue that wouldn't be a problem with faster hardware; the widow/orphan control happened live so if you made an edit, then hit print, there was a race condition where you could end up with a duplicated line or missing line across page boundaries. Since later drafts tended to have fewer edits, I once turned in a final draft of a school paper with such an error.
Then again, if you'd also run it at low res on an old CRT, it might barely benefit from anti-aliasing anyway, if at all.
Oh, right! 800x600 was pretty sharp on a 14", and 1024x768 on 15", and when ClearType came out it actually was blurring things on CRTs.
Totally agree!
I'd definitely pay $300 (lifetime license) for a productivity suite like Office 95 with the Windows 95 design, with no bloatware and no ads. Just pure speed and productivity.
Last true step change in computer performance for general home computing tasks was SSD.
I'd add multicore processors as well, which make multiprocess computing viable. And as a major improvement, Apple's desktop CPUs, which are fast, energy efficient and cool - my laptop fan never turns on. At one point I was like "do they even work?", so I ran a website that uses CPU and GPU to the max, and... still nothing, stuff went up to 90 degrees but no fan action yet. I installed a fan control app to demonstrate that my system does in fact have fans.
Meanwhile my home PC starts blowing whenever I fire up a video game.
In 20 years? That is nothing.
It's crazy to realise how much of the multi-application interop vision was realized in Office 97 too. Visual Basic for Applications had rich hooks into all the apps, you could make macros and scripts and embed them into documents, and you could embed documents into each other.
It's really astonishing how full-featured it all was, and it was running on those Pentium machines that had a "turbo" button to switch between 33 and 66 MHz and just a few MBs of RAM.
> but I think it was snappy even in 1998.
It definitely was snappy. I used it on school computers that were Pentium (1?) with about as much RAM as my current L2 cache (16MB). Dirty rectangles and win32 primitives. Very responsive. It also came with VB6 where you could write your own interpreted code very easily to do all kinds of stuff.
My crappy old 2018 Chromebook is still just about usable with 2GB, but it has gone from a snappy system to a lethargic snail, and it's getting slower with every update. Yay for progress!
eMMC Chromebooks are notorious for storage-related slowdowns. If it's an option, booting a ChromeOS variant or similar distro off a high-speed microSD, over USB, or (least likely with a Chromebook) via PXE might confirm whether the eMMC is the bottleneck.
Maybe with the price of memory going up, we'll start seeing a more conservative use of resources in consumer software.
A fella can dream, anyways.
“Powerful enough for productivity tasks” is very variable depending on what you need to be productive in. Office sure. 3D modelling? CAD? Video editing? Ehhhhh not so sure.
I hate to tell you this, but people were doing CAD and CNC work on PCs back when a 33MHz 80386 with 8MB of RAM was an expensive computer.
And they did video editing on Amigas with an add-on peripheral called a Video Toaster.
I don’t know enough about CAD to comment but video editing is considerably more expensive now for a bunch of reasons and I don’t think an Amiga could handle it now.
Video compression is a lot more computationally complex now than it was in the 90s, and it is unlikely that an Amiga with a 68k or old PowerPC would be able to handle 4k video with H265 or ProRes. Even if you had specialized hardware to decode it, I’m not 100% sure that an Amiga has enough memory to hold a single decompressed frame to edit against.
Don’t get me wrong, Video Toaster is super awesome, but I don’t think it’s up to modern tasks.
This! I have the 14-core M4 Macbook Pro with 48GB of RAM, and Word for Mac (Version 16 at this time) runs like absolute molasses on large documents, and pegs a single core between 70 and 90% for most of the time, even when I'm not typing.
I am now starting to wonder how much of it has to do with network access to Sharepoint and telemetry data that most likely didn't exist in the Office 97 dial-up era.
Features-wise - I doubt there is a single feature I use (deliberately) today in Excel or Word that wasn't available in Office 97.
I'd happily suffer Clippy over Co-Pilot.
> I'd happily suffer Clippy
It's an optional install. You can just click Custom, untick "Office Assistant" and other horrid bits of bloat like "Find Fast" and "Word Mail in Outlook" and get rid of that stuff.
The curse-ed ribbon was a huge productivity regression. I still use very old versions of Word and Excel (the latter at least until the odd spreadsheet exceeds size limits) because they're simply better than the newer drivel. Efficient UI, proper keyboard shortcuts with unintrusive habit-reinforcing hints, better performance, not trying to siphon all my files up to their retarded cloud. There is almost nothing I miss in terms of newer features from later versions.
The ribbon thing was a taste of things to come in the degradation of UI standards. Take something that works great and looks ok, replace it with something flashy that gives marketing people something to say. Break the workflow of existing users. Repeat every 10 years.
Except for Internet surfing, a plain Amiga 500 would be good enough for what many folks do at home, between gaming, writing letters, basic accounting and the occasional flyers for party invitations.
Total nostalgia talk. Those machines were just glacially slow at launching apps and really everything - spell check? Go get a coffee. I could immediately tell the difference between a 25MHz Mac IIci and a 25MHz Mac IIci with a 32KB cache card. That's how slow they were.
Some of us do actually use such machines every now and then.
The point being made was that for many people whose lives don't revolve around computers, their computing needs have not changed since the early 1990s, other than doing stuff on the Internet nowadays.
For those people, using a digital typewriter hardly requires more features than Final Writer, and for what they do with numbers in tables and a couple of automatically updated cells, something like Superplan would also be enough.
Yeah, I just posted that a lot of that software was amazing and pretty 'feature-complete', all while running on very limited old personal computers.
Just please don't gaslight us with some alternate Amiga bullshit history. All that shit was super slow, you were begging for +5Mhz or +25KB of cache. If Amiga had any success outside of teenage gamers, that stuff would have all been historical, just like it was on the Mac.
The Amiga had huge success outside of "teenage gamers", even if in niche markets. Amigas were extremely important in TV and video production throughout the 1990s. I remember a local Amiga repair shop in South Florida that stayed in business until about 2007, mainly by servicing Amigas still in service in the local broadcast industry -- all of the local cable providers in particular had loads of them, since they were used for the old Prevue Guide listings, along with lots of other stuff.
Goes both ways: the Mac was hardly something to write home about outside the US, and it was only by sheer luck that they did not follow in Commodore's footsteps into bankruptcy.
The Mac was just an expensive toy for people working on different media. No one used it at home, even less at school. Ever.
Amiga was big in Europe. No doubt they were slow though; most computers of the time were.
The Mac didn't exist in Europe except for expensive A/V production machines and the printing world (books, artists, movie posters, covers and the like).
If you were from the humanities and worked on layout design for a newspaper, you would use a Mac at work. That's it.
> The Mac didn't exist in Europe
That is absolutely not a valid generalisation.
I worked on Macs from the start of my career in 1988. They were the standard computer for state schools in education here in the Isle of Man in the late 1980s and early 1990s.
The Isle of Man's national travel company ran on a Mac database, Omnis, and later moved to Windows to keep using Omnis. It's still around: https://www.omnis.net/
I supported dozens of Mac-using clients in London through the 1990s and they were the standard platform in some businesses. Windows NT Server had good MacOS support from the very first version, 3.1, and Macs could access Windows NT Server shares over the built-in Appleshare client, and store Mac files complete with their Resource Forks on NTFS volumes. From 1993 onwards this made mixed Mac/PC networks much easier.
I did subcontracted Mac support for a couple of friends of mine's consultancy businesses because they were Windows guys and didn't "speak Mac".
Yes, they were very strong in print, graphics, design, photography, etc. but not only in those markets. Richer types used them as home computers. I also worked on Macs in the music and dance businesses and other places.
Macs were always there.
Maybe you didn't notice but they always were. Knowing PC/Mac integration was a key career skill for me, and the rise of OS X made the classic MacOS knowledge segue into more general Unix/Windows integration work.
Some power users defected to Windows NT between 1993 and 2001 but then it reversed and grew much faster: from around 2001, PowerMacs started to become a credible desktop workstation for power users because of OS X. From 2006, Macintel boxes became more viable in general business use because the Intel chips meant you could run Windows in a VM at full speed for one or two essential Windows apps. They ran IE natively and WINE started to make OS X feasible for some apps with no need for a Windows licence.
In other words, the rise of OS X coincided with the rise of Linux as a viable server and GUI workstation.
Those machines could be pretty darn fast - if you got one and ran the earliest software that still worked on it. DOS-based apps would fly on a 486, even as Windows 95 would be barely usable.
Or controlling the heating and AC systems at 19 schools under its jurisdiction, using a system that sends out commands over short-wave radio frequencies: https://www.popularmechanics.com/technology/infrastructure/a...
Try Plop Boot Manager: https://www.plop.at/en/bootmanagers.html - it can boot from a floppy or from a CD drive, and it lets you chainload into a live USB even on old computers.
I used it (loaded from a floppy) to boot a CD on an old Pentium MMX and it worked great (although slowly, of course).
> Eventually I discovered that the problem isn't really the power of the computer.
Nope, that’s a modern problem. That’s what happens when the js-inmates run the asylum. We get shitty bloated software and 8300 copies of a browser running garage applications written by garbage developers.
I can’t wait to see what LLMs do with that being the bulk of their training.
Exciting!
Not gonna disagree with you, but, as a solo developer who needs to reach audiences of all sorts, from mobile to powerful servers, the most reasonable choice today is Javascript. JS, with its "running environments" (Chrome, Node, etc.), has done what Java was supposed to do in the 90s. It's a pity that Java didn't hold its promises, but the blame lies entirely with the companies that ran the show back then (and are running the show now).
Javascript is not the problem at all.
Rookie developers who use hundreds of node modules or huge CSS frameworks are ruining performance and hurting the environment with bloated software that consumes energy and people's time.
I used to run a CS 1.6 server on an 800MHz AMD with 256MB of RAM in the 2000s. I'm looking these days to get a Mac mini, and while thinking that 16GB will not be enough I remembered that server. It was a NAT gateway too, and had a webserver with hit stats for the CS server. And it was a popular 16v16 type of server too. What happened? How did we get to a 16GB minimum, where only 32GB will keep you from being sad?
> There is a herculean effort on the part of package maintainers to build software for distros, and no one has been building 32-bit versions of software for years, even if it is possible to build from source. There is only a very limited set of software you can use, even CLI software, because so many things are built with 64-bit dependencies
That seems odd? Debian 12 Bookworm (oldstable) has a fully supported i386 port. I would expect it to run reasonably well on late 32-bit era systems (Pentium 4/Athlon XP).
AFAIU the Debian i386 port has effectively required i686-level CPUs for quite a long time (CMOV etc.)? So if he has an older CPU like the Pentium it might not work?
But otherwise, yes, Debian 12 should work fine as you say. Not so long ago I installed it on an old Pentium M laptop I had lying around. Did take some tweaking, turned out that the wifi card didn't support WPA2/3 mixed mode which I had configured on my AP, so I had to downgrade security for the experiment. But video was hopeless, it couldn't even play 144p videos on youtube without stuttering. Maybe the video card (some Intel thing, used the i915 driver) didn't have HW decoding for whatever video encoder youtube uses nowadays (AV1?), or whatever.
You can force YouTube to use H264 instead (via extensions like H264ify), that should reduce the processing load.
Were there actually Pentium M chipsets that could decode anything but MPEG2?
The CPU will be struggling with most modern video formats including h.264.
We were decoding 480x320 MP4 on PalmOS 5 devices in the early 2000s. Those were single-core, in-order, 200MHz ARM devices with no accelerators at all. A Pentium M outperforms those easily and thus can do it too.
MP4 is the container. H.264 is the video codec.
Good point. Though too late in this particular case, since the battery was also busted, I ended up e-wasting the machine.
NetBSD is probably what would make most sense to run on that old hardware.
Alternatively, you may have accidentally built a great machine for installing FreeDOS to run old DOS games/applications. It does install from USB, but needs a BIOS, so you can't run it on modern PC hardware.
NetBSD is the only modern Unix still running like a charm on 32-bit hardware. OpenBSD is second, with great wifi support.
I was on Linux as my main driver in the early 2000s and we did watch movies back then, even DVDs. Of course, the formats were not HD; it was DivX or DVD ISOs.
I remember running Gentoo and optimizing build flags for mplayer to get it working, at a time when I had a 500MHz Pentium III, later 850MHz. And I also remember having to tweak the mplayer output driver params to get good and smooth playback, but it was possible (mplayer -vo xv for Xvideo support). IIRC I got DVD .iso playback to run even on the framebuffer without X running at all (mplayer -vo fb). Also the "-framedrop" flag came in handy (you can do away with a bit less than 25fps when under load). Also, you would definitely need compile-time support for SSE/SSE2 in the CPU. I am not even sure I ever had a GPU that had video decoding support.
mpv and yt-dlp will fix that today.
I had an original 7" Eee PC from 2007, running archlinux-32 from ~2017, with Xfce and all that, and a few months ago I updated it. It took me almost a day, going through various rabbit holes, like 1-2 statically built pacmans and python, and manually picking and combining various versions. The result was okay but somehow took more space than before (it has a 4GB SSD, of which I used to have 2GB free, now only 1.5GB). But maybe that machine is not old enough.
> Computers have been powerful enough for productivity tasks for 20 years
Little-known fact: before 2006 all we did was play Pong and make beep-boop noises on our computers.
My 32-bit laptop is a Thinkpad T42 from 2005 which has a functioning CDROM, and which can run a stable Slackware 15 32-bit install OKish, so I haven't tried any of this but:
My first thought: how about using a current computer to run QEMU, then mounting the Lenny ISO as an image and installing to a QEMU hard drive? Then dd the hard drive image to your 32-bit target. (That might need access to a hard drive caddy depending on how you can boot the 32-bit target machine, so a 'hardware regress' I suppose.)
My second thought: if the target machine is bootable from a more recent live Linux, try a debootstrap install of a minimal Lenny with networking (assuming you can connect the target machine to a network, I'm guessing with a cable rather than wifi). Reboot and install more software as required.
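For anyone who wants to try either route, a rough sketch of both (the ISO filename and the debootstrap details are from memory, so treat them as assumptions and double-check):

# First thought: install Lenny inside QEMU, then copy the disk image to the real drive
qemu-img create -f raw lenny.img 8G
qemu-system-i386 -m 512 -hda lenny.img -cdrom debian-5-i386.iso -boot d   # hypothetical ISO filename
# after the install, write the image out (triple-check the target device first!)
dd if=lenny.img of=/dev/sdX bs=4M status=progress

# Second thought: debootstrap a minimal Lenny into a mounted target partition
# (needs a debootstrap that still ships a lenny script; old releases live on
#  archive.debian.org, and their signatures have expired, hence --no-check-gpg)
debootstrap --arch=i386 --no-check-gpg lenny /mnt http://archive.debian.org/debian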
I have OpenBSD running on my old 2004 Centrino notebook (I might be lagging 2-3 versions behind, I don't really use it, just play around with it) and it's fine until you start playing YouTube videos, that is kinda hard on the CPU.
Yes, NetBSD and OpenBSD work fine on the 2005 T42 but as you say video performance is low. Recent OpenBSD versions have had to reduce the range of binary packages (i.e. outside of the base and installed with pkg_add) on i386 because of the difficulty of compiling them (e.g. Firefox, Seamonkey needing dependencies that are hard to compile on i386, a point the poster up thread made).
My ~/yt-dlp.conf:
#start of file
--format=bestvideo[height<=?480][fps<=?30]+bestaudio/best
#end of file
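If I'm remembering the config lookup correctly, yt-dlp picks up ~/yt-dlp.conf automatically, so a plain invocation uses it; mpv can be handed the same cap directly ($URL is just a placeholder):

yt-dlp "$URL"
mpv --ytdl-format='bestvideo[height<=?480][fps<=?30]+bestaudio/best' "$URL"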
I have a P166 under my desk and once in a blue moon I try to run something on it.
My biggest obstacles are that it doesn't have an ethernet port and that it doesn't have BIOS USB support (although it does have a card with two USB ports).
I've managed to run some small Linux distros on it (I'll definitely try this one), but, you're right, I haven't really found anything useful to run on it.
It seems that both OpenBSD [1] and NetBSD [2] still support i386, for example here [3] you can find the image for a USB stick.
I expect at least the base system (including X) to work without big issues (if your hardware is supported), for extra packages you may need a bit of luck.
Reminds me of my first Linux distro, Damn Small Linux. I think this was used as a first attempt to port Linux to the GameCube, but the main team driving the effort ended up going with Gentoo instead.
From the main page:
As with most things in the GNU/Linux community, this project continues to stand on the shoulders of giants. I am just one guy without a CS degree, so for now, this project is based on antiX 23 i386. AntiX is a fantastic distribution that I think shares much of the same spirit as the original DSL project. AntiX shares pedigree with MEPIS and also leans heavily on the geniuses at Debian. So, this project stands on the shoulders of giants. In other words, DSL 2024 is a humble little project!
Though it may seem comparably ridiculous that 700MB is small in 2024 when DSL was 50MB in 2002, I’ve done a lot of hunting to find small footprint applications, and I had to do some tricks to get a workable desktop into the 700MB limit. To get the size down the ISO currently reduced full language support for German, English, French, Spanish, Portuguese and Brazilian Portuguese (de_DE, en_AU, en_GB, en_US, es_ES, fr_FR, es_ES, pt_PT, & pt_BR ). I had to strip the source codes, many man pages, and documentation out. I do provide a download script that will restore all the missing files, and so far, it seems to be working well.
I was just reacquainting myself with Puppy Linux, DSL, and TinyCoreLinux a couple weeks ago to sandbox an LLM agent in a VM. Good stuff.
For those who are curious, Alpine was the recommended distro as I went through various reviews. I don't know how reliable that advice is.
Alpine is great, especially for anything single purposed and headless (be it physical, VM, or container) so long as that thing isn't too tied to glibc. Been around a long time with a stable community (who are mostly using it for containers). It also defaults to a typical versioned release scheme but has the ability to switch to rolling just by changing the repo if you know you need the latest versions.
I once tried to use it as a GUI daily driver on my work laptop (since I was already using it for containers and VMs at work) and found that stretched it a bit too far out of its speciality. It definitely had the necessary packages, just with a lot of rough edges and an increased rate of problems (separate from glibc, systemd, or other expected compatibility angles). Plus the focus on having things be statically linked means that really wide (lots of packages) installs negate any space-efficiency gains it had.
Have you tested this on an actual 486?
Sadly, it does not seem to boot on my 486 DX2, I even stuffed 32M of RAM into the machine (8*4M, maximum the mainboard supports), more than the recommended 20M.
I have copied the floppy image from the site. It churns for about a minute and a half, loading kernel and initrd, then says "Booting kernel failed: Invalid Argument" and drops into SYSLINUX prompt.
EDIT: I tried a few more floppies to rule that out as the cause of the problem.
EDIT 2: I cloned SYSLINUX, checked out the specific commit and did some prodding around.
The function `bios_boot_linux` in `com32/lib/syslinux/load_linux.c` initializes errno to EINVAL. Besides sanity checking the header of the kernel image, there are a few other error paths that also `goto bail;` without changing errno.
Those other error paths all seem to be related to handling the memory map. I know that the BIOS in my machine does not support the E820h routine. I have a hunch that this might be the reason why it fails.
Most of those machines seem to be newer systems which probably support E820h, except for another 486 DX2 with a similar vintage as mine, that also failed to boot.
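(Probably no help here since the box never gets that far, but on a machine that does boot, the kernel log shows whether the BIOS handed over an E820 map or the kernel fell back to the older E801/88 interfaces:)

dmesg | grep -iE 'BIOS-provided|e820|e801'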
[deleted]
The persistence strategy described here (mount -t msdos -o rw /dev/fd0 /mnt) combined with a bind mount to home is a clever touch for saving space.
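A minimal sketch of that strategy as I understand it from the article (the mount point and paths here are just placeholders):

mount -t msdos -o rw /dev/fd0 /mnt   # the boot floppy's FAT12 filesystem
mkdir -p /mnt/home
mount --bind /mnt/home /home         # writes under /home end up on the floppy
sync                                 # flush before powering off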
I don't know if it's as good for data integrity on physical magnetic media, though. FAT12 is not a journaling filesystem. On a modern drive, a crash during a write is, at best, annoying, while on a 3.5" floppy with a 33MHz CPU, a write operation blocks for a perceptible amount of time. If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone. The article mentions sync, but sync on a floppy drive is an agonizingly slow operation that users might interrupt.
Given the 253KiB free space constraint, I wonder if a better approach would be treating the free space as a raw block device or a tiny appended partition using a log-structured filesystem designed for slow media (like a stripped down JFFS2 or something), though that might require too many kernel modules.
Has anyone out there experimented with appending a tar archive to the end of the initramfs image in place for persistence, rather than mounting the raw FAT filesystem? It might be safer to serialize writes only on shutdown; I would love more thoughts on this.
Controversial position: journaling is not as beneficial as commonly believed. I have been using FAT for decades and never encountered much in the way of data corruption. It's probably found in far more embedded devices than PCs these days.
If you make structural changes to your filesystem without a journal, and you fail mid way, there is a 100% chance your filesystem is not in a known state, and a very good chance it is in a non-self-consistent state that will lead to some interesting surprises down the line.
No, it is very well known what will happen: you can get lost cluster chains, which are easily cleaned up. As long as the order of writes is known, there is no problem.
Better hope you didn't have a rename in progress with the old name removed without the new name in place. Or a directory entry written pointing to a FAT chain not yet committed to the FAT.
Yes, soft updates style write ordering can help with some of the issues, but the Linux driver doesn't do that. And some of the issues are essentially unavoidable, requiring a full fsck on each unclean shutdown.
I don't know how Linux driver updates FAT, but if it doesn't do it the way DOS did, then it's a bug that puts data at risk.
1) Allocate space in FAT#2, 2) Write data in file, 3) Allocate space in FAT#1, 4) Update directory entry (file size), 5) Update free space count.
Rename in FAT is an atomic operation. Overwrite old name with new name in the directory entry, which is just 1 sector write (or 2 if it has a long file name too).
No, the VFAT driver doesn't do anything even slightly resembling that.
In general "what DOS did" doesn't cut for a modern system with page and dentry caches and multiple tasks accessing the filesystem without completely horrible performance. I would be really surprised if Windows handled all those cases right with disk caching enabled.
While rename can be atomic in some cases, it cannot be in the case of cross directory renames or when the new filename doesn't fit in the existing directory sector.
FAT has two allocation tables, the main one and a backup. So if you shut it off while manipulating the first one you have the backup. You are expected to run a filesystem check after a power failure.
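On Linux that check would be something along the lines of (dosfstools assumed, with the floppy unmounted):

fsck.vfat -a /dev/fd0   # -a: automatically repair, e.g. reclaim lost cluster chains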
FAT can be made tolerant from the driver side, just like a journaled FS:
1) mark blocks allocated in first FAT
If a crash occurs here, then data written is incomplete, so write FAT1 with data from FAT2 discarding all changes.
2) write data in sectors
If a crash occurs here, same as before, keep old file size.
3) update file size in the directory
This step is atomic - it's just one sector to update. If a crash occurs here (file size matches FAT1), copy FAT1 to FAT2 and keep the new file size.
4) mark blocks allocated in the second FAT
If a crash occurs here, write is complete, just calculate and update free space.
5) update free space
Is this something the FAT driver in Linux can do?
No. There are proprietary implementations which can, though not in 100% of the cases.
OpenWrt on some devices such as Turris Omnia writes the squashfs (mounted as RO root fs) in the "root" partition and then, immediately after, in the same partition, it writes a jffs2 (mounted as RW overlayfs). So it can be done.
PS: In the good old days there was no initrd or other RAM disk stuff - you read the entire system straight from the disk. Slackware 8 was like that for sure, and NetBSD (even the newest one) is still doing it by default.
> If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone.
Makes sense, great point. I would rather use a second drive for the writable disk space, if possible (I know how rare it is now to have two floppy drives, but still).
> If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone.
This isn't true, I commented lower in the thread, but FAT keeps a backup table, and you can use that to restore the disk.
I remember the QNX Demo on a 1.44 MB floppy disk. It booted straight into a full blown window manager and had a basic web browser. That was 1999 and I never saw anything like that afterwards.
> That was 1999 and I never saw anything like that afterwards.
Now you have ;-)
Wow! I never heard of them. KolibriOS looks promising.
The first time I booted menuet OS (2005? high school?) I was absolutely floored at how capable (and decent looking) an OS that lives entirely on a 1.44mb floppy could be.
I got the same feeling when I saw it the first time. How they fit the GUI, drivers, and all that stuff in 1.44MB.
Would that even fit the unicode tables today?
> There is 264KB of space left for your newly created files.
This could be increased noticeably by using one of the common extended floppy formats. The 21-sectors-per-track format used by MS¹ for Windows 95's floppy distribution was widely supported enough by drives (and found to be reliable enough on standard disks) that they considered it safe for mass use, and gave 1680KB instead of the 1440KB offered by the standard 18-sector layout. The standard floppy formatting tools for Linux support creating such layouts.
--------
[1] There was some suggestion² that MS invented the extended floppy format (they were sometimes called "windows format"), but it³ had been used elsewhere for some time before MS used it for Windows and Office.
[2] I'm not sure if this came from MS themselves, or was invented by the tech press.
[3] and even further extended formats, including 1720KByte by squeezing in two extra tracks as well as more data per track which IIRC was used for OS/2 install floppies.
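For example, with the usual util-linux/fdutils/dosfstools combination something like this should give a 1680KB disk (from memory, so double-check the geometry parameters before trusting data to it):

fdformat /dev/fd0u1680                # low-level format at 21 sectors/track, 80 cylinders
mkfs.fat /dev/fd0u1680                # then put a FAT filesystem on it
# fdutils' superformat can do the same job and more exotic layouts, e.g.:
superformat /dev/fd0 sect=21 cyl=80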
I miss the floppy disk sound and the anticipation then joy of finally loading into the OS.
The omnipresent coil whine in almost every laptop I've gotten in the past 15 years gives me at least that nostalgic noise that says "the computer is working".
Wish coil whine was configurable :)
At a very low level, it is. I know the individual that made a "diagnostic" for the floppy drive while working as a tech on the Apple I and Apple II designs which caused the drive to whine in patterns that were distinctly ... orgasmic.
> After 5 minutes I got freshly burned floppy.
oh god
That is an indication of someone who grew up in the CD-R/RW era.
I wonder if formatting the floppy is necessary. Could syslinux or maybe lilo load the kernel directly from raw floppy sectors, with the initrd appended to it and the command line baked directly into the kernel via CONFIG_CMDLINE? I know u-boot can do it, but that's 8+ MB.
As an alternative, isn't ext2 smaller by having no FAT tables?
Did I misremember downloading Slackware to 12 floppies in 1997?
Before then, a local clone store had an 'insane deal' on floppy disks, and they came with Slackware. I had a Mac, and the floppies weren't very good so.
MuLinux was also a floppy-based “live” distro, with optional floppy disks for X11, programming languages, etc.
There was also "Tom's Root Boot" distro that fitted on a floppy:
That had to use a non-standard image size though. Mind-blowing that this now fits into a regular 1.44MB.
12‽ I'd swear the Slackware I downloaded was closer to 30+. On dialup. Via a VAX. Using FTP to go from internet to the VAX box, then Kermit from the VAX to the DOS PC using Procomm Plus. Write it all, start the install sequence, find out that the 18th disk was bad. Reboot. Rinse. Repeat.
The X disks were X11. There were also the A, B, C, etc. disk sets.
Then there was the Coherent install, with massive manual on ultra thin paper with the shell on the front.
Probably not. Pretty sure it was Puppy Linux (among I'm sure others) that could be run on just two floppies. I used to have this old 933MHz Coppermine system that I took when a medical office was going to throw it out, some time in the early 00s.
The HDD was borked but it had a 3.5" bay that worked, so I got a floppy-based distro running on it. I later replaced the drive and then made the mistake of attempting to compile X11 on it. Results were... mixed.
There’s something really lovely about this project - especially as they’re using the last kernel from May 2025 before i486 support was removed. It feels like somebody lovingly mending their car for one last time or something similar. (I’m tired but you can probably find a cuter metaphor)
It’s amazing to me that the floppy is still a relevant target unit. Just large enough to be useful, small enough to be a real challenge to use well. I don’t see the same passion for 700MB CDROM distributions, probably because the challenge just isn’t there.
My search continues for a Linux that will run on my 386SX 25MHz with 8MB RAM. So far I’ve only been able to use ELKS, which technically isn’t a Linux.
In theory, a 2.4-vintage mainline kernel should be buildable for your system. Try it.
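If anyone tries, the 2.4-era build dance went roughly like this (you would also need a period toolchain, gcc 2.95/3.x, since modern compilers won't build those trees):

make menuconfig            # Processor family -> 386, trim everything you can
make dep                   # 2.4.x still needed the explicit dependency pass
make bzImage modules
make modules_install install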
I should've been more clear. Sure, I started my Linux days on 2.0.36, which booted by floppy, on a Pentium 2. But what I want is some semblance of a distro, with tools and a way to do things, not just rolling my own technically-bootable kernel.
to set up a small router on a 486 with 12 MB of RAM, and it ran flawlessly. Later I got a Linksys WRT54GL and decommissioned that machine.
Me too, was CoyoteLinux.
This brings back memories. I used CoyoteLinux to surreptitiously share my ADSL connection with my SO. This was against my provider's ToS at the time.
We used to call the 1.44MB (3.5inch) disk stiffies, since they are rigid, while the physically bigger disks we used to refer to as floppies.
And they used to fail all the time, especially when you had something that spanned more than a single disk.
> We used to call the 1.44MB (3.5inch) disk stiffies
Are you from South Africa? I understand it was the standard slang name there -- and nowhere else, because of the double entendre.
I am indeed. Very specific knowledge of South Africa you have there :)
Is that name used with an eyebrow raised, or did that particular double entendre not make it out of the UK?
My level of English was very basic during the age of stiffies, so that double entendre never occurred to me at the time
So, about twenty people still have hardware to run this? I respect the work regardless.
The original software for the ISS (space station) was stored on a single floppy disk. Not sure about density but one of the engineers told me.
[deleted]
Since it’s a 1.44M image I assume they use 3.5” diskettes. The terms floppy and diskette are used as synonyms today, but the different names make sense since floppies are flexible and “floppy”. Diskettinux?
I was making routers out of old PCs (486s or early Pentiums) with 2 network cards (3com or NE2000) back in 2000, with floppies and CoyoteLinux. Installed tens of them in students' houses.
I was hoping someone would mention CoyoteLinux. It was my residential router for several years in the early 2000s. My 'disaster recovery plan' consisted of a second floppy disk (which fortunately I never had to use).
I thought Linux dropped driver support for real floppy drives. Did that not happen, or am I missing something?
(That mail also mentions the floppy driver is "basically orphaned" though. But evidently it's still there and builds.)
Maybe you're thinking of the floppy tape (ftape) driver, which was removed back in the 2.6.20 kernel. Though there's a project keeping an out-of-tree version of it working with recent kernels at https://github.com/dbrant/ftape
Don't think so? Linux should still support almost all builtin motherboard floppy controllers, for the platforms it still runs on. ISA floppy controller support is probably not as comprehensive, but not because anything has been dropped.
Huh, yeah looks like I misremembered.
No but I find this line interesting:
The Linux kernel drops i486 support in 6.15 (released May 2025), so 6.14 (released March 2025) is the latest version with full compatibility.
Any chance of backporting changes to be able to run on older hardware?
https://kernel.org/ says 6.12 is still a supported LTS, so you could just run that.
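Since 6.12 predates the i486 removal, a 32-bit build with the old processor family should still be possible. A hedged sketch, assuming a host gcc that can produce 32-bit code:

make ARCH=i386 tinyconfig            # start from the smallest possible config
make ARCH=i386 menuconfig            # Processor family -> 486, enable the drivers you need
make ARCH=i386 -j"$(nproc)" bzImage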
I remember the days when Linux came on 50 floppies.
If I recall correctly, when you wanted all of the bells and whistles that Slackware had to offer, it required 72 floppy disks.
MGR on Sun hardware probably could have come close.
Ok now I need to buy a floppy drive and floppy disks in 2026? Hmm where should I go to buy those?
It's basically what people used before USB sticks. But it was also the storage medium that software was sold on, before CD-ROMs became widespread.
Floppy is a race of robotic jackalopes, known for their floppy ears. A "Single Floppy" is a rare subset of that species where only one ear flops down due to a random mutation of their hardware.
Embedding Linux has the characteristic of making the single floppy highly territorial and aggressive.
It's a real-life 3D save icon.
Bring back Slackware?
It never went anywhere.
Ok, impressive, but - why?
No current computer has a floppy disk drive anymore.
The Web Page claims building such a disk is a learning exercise, but the knowledge offered is pretty arcane, even for regular Linux users.
Is this pure nostalgia?
If you have to ask why this is not for you. Why climb a mountain that’s already been climbed hundreds of times? For the challenge.
It’s also proof that Microsoft hasn’t done much with Office in decades… except add bloat, tracking, spyware…
> old Office UI to the ribbon
Truly, I do not miss the swamp of toolbar icons without any labels. I don't weep for the old interface.
Try Plop Boot Manager: https://www.plop.at/en/bootmanagers.html
It can boot from a floppy or from a CD drive, and it lets you chainload into a live usb even on old computers.
I used it to boot from CD from a floppy in an old Pentium MMX and it worked great (although slow, of course)
> Eventually I discovered that the problem isn't really the power of the computer.
Nope, that’s a modern problem. That’s what happens when the js-inmates run the asylum. We get shitty bloated software and 8300 copies of a browser running garage applications written by garbage developers.
I can’t wait to see what LLMs do with that being the bulk of their training.
Exciting!
Not gonna disagree with you, but, as a solo developer who needs to reach audiences of all sorts, from mobile to powerful servers, the most reasonable choice today is JavaScript. JS, with its "running environments" (Chrome, Node, etc.), has done what Java was supposed to do in the 90s. It's a pity that Java didn't deliver on its promises, but the blame lies entirely with the companies that ran the show back then (and are running it now).
JavaScript is not the problem at all.
Rookie developers who use hundreds of node modules or huge CSS frameworks are ruining performance and hurting the environment with bloated software that wastes energy and people's time.
I used to run a CS 1.6 server on an 800MHz AMD with 256MB of RAM in the 2000s. I'm looking these days to get a Mac mini, and while thinking that 16GB will not be enough I remembered that server. It was a NAT gateway too, and ran a webserver with hit stats for the CS server. And it was a popular 16v16 type of server too. What happened? How did we get to 16GB being the minimum, with 32GB being what keeps you from being sad?
> There is a herculean effort on the part of package maintainers to build software for distros, and no one has been building 32 bit version of software for years, even if it is possible to build from source. There is only a very limited set of software you can use, even CLI software because so many things are built with 64 bit dependencies
That seems odd? Debian 12 Bookworm (oldstable) has a fully supported i386 port. I would expect it to run reasonably well on late 32-bit era systems (Pentium 4/Athlon XP).
AFAIU the Debian i386 port has effectively required i686-level CPUs for quite a long time (CMOV etc.)? So if he has an older CPU, like a plain Pentium, it might not work?
But otherwise, yes, Debian 12 should work fine as you say. Not so long ago I installed it on an old Pentium M laptop I had lying around. It did take some tweaking; it turned out the wifi card didn't support the WPA2/3 mixed mode I had configured on my AP, so I had to downgrade security for the experiment. But video was hopeless: it couldn't even play 144p videos on YouTube without stuttering. Maybe the video card (some Intel thing, using the i915 driver) didn't have HW decoding for whatever video codec YouTube uses nowadays (AV1?), or whatever.
You can force YouTube to use H264 instead (via extensions like H264ify), that should reduce the processing load.
Were there actually Pentium M chipsets that could decode anything but MPEG2?
The CPU will be struggling with most modern video formats including h.264.
We were decoding 480x320 MP4 video on PalmOS 5 devices in the early 2000s. Those were single-core, in-order 200MHz ARM devices with no accelerators at all. A Pentium M outperforms those easily and thus can do it too.
Mp4 is the container. H264 is the video codec.
Good point. Though too late in this particular case, since the battery was also busted, I ended up e-wasting the machine.
NetBSD is probably what would make most sense to run on that old hardware.
Alternatively you may have accidently built a great machine for installing FreeDOS to run old DOS games/applications. It does install from USB, but needs BIOS so can't run it on modern PC hardware.
NetBSD is the only modern Unix still running like a charm on 32-bit hardware. OpenBSD is second, with great wifi support.
I was on Linux as my main driver in the early 2000s and we did watch movies back then, even DVDs. Of course, the formats were not HD; it was DivX or DVD ISOs. I remember running Gentoo and optimizing build flags for mplayer to get it working, at a time when I had a 500MHz Pentium III, later 850MHz. And I also remember having to tweak the mplayer output driver params to get good and smooth playback, but it was possible (mplayer -vo xv for Xvideo support). IIRC I got DVD .iso playback to run even on the framebuffer without X running at all (mplayer -vo fb). Also the "-framedrop" flag came in handy (you can get by with a bit less than 25fps when under load). Also, you would definitely need compile-time support for SSE/SSE2 in the CPU. I am not even sure I ever had a GPU that had video decoding support.
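For reference, the kind of invocations I mean went roughly like this (filenames are placeholders, and exact driver names and flag support depend on the mplayer build):

    # Xvideo output plus frame dropping under load
    mplayer -vo xv -framedrop movie.avi
    # DVD ISO playback straight on the framebuffer, no X running
    # (the framebuffer driver is "fbdev" in the builds I remember)
    mplayer -vo fbdev -framedrop -dvd-device movie.iso dvd://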
mpv and yt-dlp will fix that today.
I had an original 7" EeePC from 2007, running archlinux-32 from ~2017, with Xfce and all that, and a few months ago I updated it. It took me almost a day, going through various rabbit holes, like one or two statically built pacmans and Python, and manually picking and combining various versions. The result was okay but somehow took more space than before (it has a 4GB SSD, of which I used to have 2GB free, now only 1.5GB). But maybe that machine is not old enough..
>Computers have been powerful enough for productivity tasks for 20 years
Little known fact; before 2006 all we did was play Pong and make beep-boop noises on our computers.
My 32 bit laptop is a Thinkpad T42 from 2005 which has a functioning CDROM, and which can run Slackware15 stable 32bit install OKish, so I haven't tried any of this but:
My first thought: How about using a current computer to run qemu then mounting the Lenny iso as an image and installing to a qemu hard drive? Then dd the hard drive image to your 32bit target. (That might need access to a hard drive caddy depending on how you can boot the 32bit target machine, so a 'hardware regress' I suppose).
My second thought: If target machine is bootable from a more recent live linux, try a debootstrap install of a minimal Lenny with networking (assuming you can connect target machine to a network, I'm guessing with a cable rather than wifi). Reboot and install more software as required.
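A rough sketch of the first approach, with placeholder image names, sizes and device names (and triple-check the target device before the dd, it's destructive):

    # on the modern machine: create a disk image and install Lenny into it
    qemu-img create -f raw lenny.img 8G
    qemu-system-i386 -m 256 -hda lenny.img -cdrom debian-lenny-netinst.iso -boot d
    # then write the finished image onto the real drive in a caddy
    dd if=lenny.img of=/dev/sdX bs=4M status=progress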
I have OpenBSD running on my old 2004 Centrino notebook (I might be lagging 2-3 versions behind, I don't really use it, just play around with it) and it's fine until you start playing YouTube videos, that is kinda hard on the CPU.
Yes, NetBSD and OpenBSD work fine on the 2005 T42 but as you say video performance is low. Recent OpenBSD versions have had to reduce the range of binary packages (i.e. outside of the base and installed with pkg_add) on i386 because of the difficulty of compiling them (e.g. Firefox, Seamonkey needing dependencies that are hard to compile on i386, a point the poster up thread made).
My ~/yt-dlp.conf:
My ~/.config/mpv/config:
Usage: mpv $YOUTUBE_URL
Upgrade ASAP.
You might have some luck applying isohybrid(1) to the period-correct .iso image, making it bootable by other means: https://manpages.debian.org/stretch/syslinux-utils/isohybrid...
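Something along these lines (the filename is a placeholder; isohybrid modifies the image in place, so keep a copy, and /dev/sdX is the USB stick):

    # add an MBR so the CD image can also boot from a USB stick
    isohybrid debian-lenny-i386-netinst.iso
    # then write it to the stick (destructive!)
    dd if=debian-lenny-i386-netinst.iso of=/dev/sdX bs=4M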
I have a P166 under my desk and once in a blue moon I try to run something on it.
My biggest obstacles are that it doesn't have an ethernet port and that it doesn't have BIOS USB support (although it does have a card with two USB ports).
I've managed to run some small Linux distros on it (I'll definitely try this one), but, you're right, I haven't really found anything useful to run on it.
It seems that both OpenBSD [1] and NetBSD [2] still support i386, for example here [3] you can find the image for a USB stick.
I expect at least the base system (including X) to work without big issues (if your hardware is supported), for extra packages you may need a bit of luck.
[1] https://www.openbsd.org/plat.html
[2] https://wiki.netbsd.org/ports/
[3] https://wiki.netbsd.org/ports/i386/
You can always run Linux off the DOS partition with the vmlinuz loader. Or the Slackware DOS version (forgot its name).
Don't lose hope. You can boot it one way or other :)
loadlin ?
https://youpibouh.thefreecat.org/loadlin/
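Roughly, you run it from plain DOS (not a Windows DOS box) with the kernel image copied onto the DOS partition; the path and root device below are just examples:

    C:\LINUX> LOADLIN VMLINUZ root=/dev/hda1 ro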
The last release of NetBSD still has drivers.
Reminds me of my first Linux distro, Damn Small Linux. I think this was used in a first attempt to port Linux to the GameCube, but the main team driving the effort ended up going with Gentoo instead.
From the main page:
As with most things in the GNU/Linux community, this project continues to stand on the shoulders of giants. I am just one guy without a CS degree, so for now, this project is based on antiX 23 i386. AntiX is a fantastic distribution that I think shares much of the same spirit as the original DSL project. AntiX shares pedigree with MEPIS and also leans heavily on the geniuses at Debian. So, this project stands on the shoulders of giants. In other words, DSL 2024 is a humble little project!
Though it may seem comparably ridiculous that 700MB is small in 2024 when DSL was 50MB in 2002, I’ve done a lot of hunting to find small footprint applications, and I had to do some tricks to get a workable desktop into the 700MB limit. To get the size down the ISO currently reduced full language support for German, English, French, Spanish, Portuguese and Brazilian Portuguese (de_DE, en_AU, en_GB, en_US, es_ES, fr_FR, es_ES, pt_PT, & pt_BR ). I had to strip the source codes, many man pages, and documentation out. I do provide a download script that will restore all the missing files, and so far, it seems to be working well.
https://www.damnsmalllinux.org/
I was just reacquainting myself with Puppy Linux, DSL, and TinyCoreLinux a couple weeks ago to sandbox an LLM agent in a VM. Good stuff.
For those who are curious, Alpine was the recommended distro as I went through various reviews. I don't know how reliable that advice is.
Alpine is great, especially for anything single purposed and headless (be it physical, VM, or container) so long as that thing isn't too tied to glibc. Been around a long time with a stable community (who are mostly using it for containers). It also defaults to a typical versioned release scheme but has the ability to switch to rolling just by changing the repo if you know you need the latest versions.
I once tried to use it as a GUI daily driver on my work laptop (since I was already using it for containers and VMs at work) and found that stretched it a bit too far out of its speciality. It definitely had the necessary packages, just with a lot of rough edges and an increased rate of problems (separate from glibc, systemd, or other expected compatibility angles). Plus, the focus on having things be statically linked means that really wide (lots of packages) installs negate any space efficiency gains it had.
Have you tested this on an actual 486?
Sadly, it does not seem to boot on my 486 DX2. I even stuffed 32M of RAM into the machine (8*4M, the maximum the mainboard supports), more than the recommended 20M.
I have copied the floppy image from the site. It churns for about a minute and a half, loading kernel and initrd, then says "Booting kernel failed: Invalid Argument" and drops into SYSLINUX prompt.
EDIT: I tried a few more floppies to rule that out as the cause of the problem.
Here are some screenshots: https://imgur.com/a/floppinux-0-3-1-Mdh1c0w
EDIT 2: I cloned SYSLINUX, checked out the specific commit and did some prodding around.
The function `bios_boot_linux` in `com32/lib/syslinux/load_linux.c` initializes errno to EINVAL. Besides sanity checking the header of the kernel image, there are a few other error paths that also `goto bail;` without changing errno.
Those other error paths all seem to be related to handling the memory map. I know that the BIOS in my machine does not support the E820h routine. I have a hunch that this might be the reason why it fails.
The website has an image gallery where people ran it on actual hardware: https://krzysztofjankowski.com/floppinux/floppinux-in-the-wi...
Most of those machines seem to be newer systems which probably support E820h, except for another 486 DX2 with a similar vintage as mine, that also failed to boot.
The persistence strategy described here (mount -t msdos -o rw /dev/fd0 /mnt) combined with a bind mount to home is a nice clever touch for saving space.
I'm not sure it's as nice from a data integrity standpoint on physical magnetic media, though. FAT12 is not a journaling filesystem. On a modern drive, a crash during a write is, at best, annoying, while on a 3.5" floppy with a 33MHz CPU, a write operation blocks for a perceptible amount of time. If the user hits the power switch or the kernel panics while the heads are moving or the FAT is being updated, that disk is gone. The article mentions sync, but sync on a floppy drive is an agonizingly slow operation that users might interrupt.
Given the 253KiB free space constraint, I wonder if a better approach would be treating the free space as a raw block device or a tiny appended partition using a log-structured filesystem designed for slow media (like a stripped down JFFS2 or something), though that might require too many kernel modules.
Has anyone out there experimented with appending a tar archive to the end of the initramfs image in place for persistence, rather than mounting the raw FAT filesystem? It might be safer to serialize writes only on shutdown; would love more thoughts on this.
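For reference, the scheme from the article is roughly the following; the directory names beyond /dev/fd0 are my guesses, not the article's exact paths:

    # mount the FAT12 floppy read-write (the command quoted above)
    mount -t msdos -o rw /dev/fd0 /mnt
    # bind a directory on the floppy over home so writes persist across boots
    mount --bind /mnt/home /home
    # flush pending writes before pulling the disk -- painfully slow on a floppy
    sync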
Controversial position: journaling is not as beneficial as commonly believed. I have been using FAT for decades and never encountered much in the way of data corruption. It's probably found in far more embedded devices than PCs these days.
If you make structural changes to your filesystem without a journal, and you fail mid way, there is a 100% chance your filesystem is not in a known state, and a very good chance it is in a non-self-consistent state that will lead to some interesting surprises down the line.
No, it is very well known what will happen: you can get lost cluster chains, which are easily cleaned up. As long as the order of writes is known, there is no problem.
Better hope you didn't have a rename in progress with the old name removed without the new name in place. Or a directory entry written pointing to a FAT chain not yet committed to the FAT.
Yes, soft-updates-style write ordering can help with some of the issues, but the Linux driver doesn't do that. And some of the issues are essentially unavoidable, requiring a full fsck on each unclean shutdown.
I don't know how Linux driver updates FAT, but if it doesn't do it the way DOS did, then it's a bug that puts data at risk.
1) Allocate space in FAT#2, 2) Write data in file, 3) Allocate space in FAT#1, 4) Update directory entry (file size), 5) Update free space count.
Rename in FAT is an atomic operation. Overwrite old name with new name in the directory entry, which is just 1 sector write (or 2 if it has a long file name too).
No, the VFAT driver doesn't do anything even slightly resembling that.
In general "what DOS did" doesn't cut for a modern system with page and dentry caches and multiple tasks accessing the filesystem without completely horrible performance. I would be really surprised if Windows handled all those cases right with disk caching enabled.
While rename can be atomic in some cases, it cannot be in the case of cross directory renames or when the new filename doesn't fit in the existing directory sector.
FAT has two allocation tables, the main one and a backup. So if you shut it off while manipulating the first one you have the backup. You are expected to run a filesystem check after a power failure.
FAT can be made tolerant from the driver side, just like a journaled FS.
Is this something the FAT driver in Linux can do?
No. There are proprietary implementations which can, though not in 100% of the cases.
OpenWrt on some devices such as Turris Omnia writes the squashfs (mounted as RO root fs) in the "root" partition and then, immediately after, in the same partition, it writes a jffs2 (mounted as RW overlayfs). So it can be done.
PS: In the good old days there was no initrd or other ramdisk stuff; you read the entire system straight from the disk. Slackware 8 was like that for sure, and NetBSD (even the newest one) still does it by default.
> If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone.
Makes sense, great point. I would rather use a second drive for the writable disk space, if possible (I know how rare it is now to have two floppy drives, but still).
> If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone.
This isn't true; as I commented lower in the thread, FAT keeps a backup table, and you can use that to restore the disk.
I remember the QNX Demo on a 1.44 MB floppy disk. It booted straight into a full blown window manager and had a basic web browser. That was 1999 and I never saw anything like that afterwards.
MenuetOS/KolibriOS:
https://news.ycombinator.com/item?id=38059961
https://news.ycombinator.com/item?id=27249075
> That was 1999 and I never saw anything like that afterwards.
Now you have ;-)
Wow! I never heard of them. KolibriOS looks promising.
The first time I booted MenuetOS (2005? high school?) I was absolutely floored at how capable (and decent looking) an OS that lives entirely on a 1.44MB floppy could be.
xwoaf-rebuild matches that
https://web.archive.org/web/20240901115514/https://pupngo.dk...
I got the same feeling when I saw it for the first time. How did they fit the GUI, drivers, and all that stuff into 1.44MB?
Would that even fit the unicode tables today?
> There is 264KB of space left for your newly created files.
This could be increased noticeably by using one of the common extended floppy formats. The 21-sectors-per-track format used by MS [1] for Windows 95's floppy distribution was widely supported enough by drives (and found to be reliable enough on standard disks) that they considered it safe for mass use, and gave 1680KB instead of the 1440KB offered by the standard 18-sector layout. The standard floppy formatting tools for Linux support creating such layouts.
--------
[1] There was some suggestion [2] that MS invented the extended floppy format (they were sometimes called "windows format"), but it [3] had been used elsewhere for some time before MS used it for Windows and Office.
[2] I'm not sure if this came from MS themselves, or was invented by the tech press.
[3] and even further extended formats, including 1720KByte by squeezing in two extra tracks as well as more data per track which IIRC was used for OS/2 install floppies.
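If anyone wants to experiment: on Linux the extended layouts have their own floppy device nodes, something like the below (node names from memory; you may need fdformat from util-linux or superformat from fdutils plus dosfstools, and the extra nodes may not be created automatically on a modern udev system):

    # low-level format at 21 sectors/track, 80 tracks, 2 sides = 1680KB
    fdformat /dev/fd0u1680
    # then put a FAT filesystem on it
    mkfs.fat /dev/fd0u1680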
I miss the floppy disk sound and the anticipation then joy of finally loading into the OS.
The omnipresent coil whine in almost every laptop I've had in the past 15 years at least gives me that nostalgic noise that says "computer is working".
Wish coil whine was configurable :)
At a very low level, it is. I know the individual that made a "diagnostic" for the floppy drive while working as a tech on the Apple I and Apple II designs which caused the drive to whine in patterns that were distinctly ... orgasmic.
> After 5 minutes I got freshly burned floppy.
oh god
That is an indication of someone who grew up in the CD-R/RW era.
I wonder if formatting the floppy is necessary. Could syslinux or maybe lilo load the kernel directly from raw floppy sectors, with the initrd appended to it and the command line compiled directly into the kernel via CONFIG_CMDLINE? I know u-boot can do it, but that's 8+ MB.
As an alternative, isn't ext2 smaller by having no FAT tables?
Did I misremember downloading Slackware to 12 floppies in 1997?
Before then, a local clone store had an 'insane deal' on floppy disks, and they came with Slackware. I had a Mac, and the floppies weren't very good so.
MuLinux was also a floppy-based “live” distro, with optional floppy disks for X11, programming languages, etc.
There was also "Tom's Root Boot" distro that fitted on a floppy:
https://en.wikipedia.org/wiki/Tomsrtbt
That had to use a non-standard image size though. Mind-blowing that this now fits on a regular 1.44MB floppy.
12‽ I'd swear the Slackware I downloaded was closer to 30+. On dialup. Via a VAX. Using FTP to go from internet to the VAX box, then Kermit from the VAX to the DOS PC using Procomm Plus. Write it all, start the install sequence, find out that the 18th disk was bad. Reboot. Rinse. Repeat.
The X disks were X11. There were also the A, B, C, etc. disks.
Then there was the Coherent install, with massive manual on ultra thin paper with the shell on the front.
Probably not. Pretty sure it was Puppy Linux (among I'm sure others) that could be run on just two floppies. I used to have this old 933MHz Coppermine system that I took when a medical office was going to throw it out, some time in the early 00s.
The HDD was borked but it had a 3.5" bay that worked, so I got a floppy-based distro running on it. I later replaced the drive and then made the mistake of attempting to compile X11 on it. Results were... mixed.
There's something really lovely about this project - especially as they're using the last kernel from May 2025 before i486 support was removed. It feels like somebody lovingly mending their car one last time or something similar. (I'm tired but you can probably find a cuter metaphor)
It’s amazing to me that the floppy is still a relevant target unit. Just large enough to be useful, small enough to be a real challenge to use well. I don’t see the same passion for 700MB CDROM distributions, probably because the challenge just isn’t there.
My search continues for a Linux that will run on my 386SX 25MHz with 8MB RAM. So far I’ve only been able to use ELKS, which technically isn’t a Linux.
In theory, a 2.4-vintage mainline kernel should be buildable for your system. Try it.
I should've been more clear. Sure, I started my Linux days on 2.0.36, which booted by floppy, on a Pentium 2. But what I want is some semblance of a distro, with tools and a way to do things, not just rolling my own technically-bootable kernel.
25 years ago I used floppyfw
https://www.zelow.no/floppyfw/
to set up a small router on a 486 with 12 MB of RAM, and it ran flawlessly. Later I got a Linksys WRT54GL and decommissioned that machine.
Me too, except mine was CoyoteLinux.
This brings back memories. I used CoyoteLinux to surreptitiously share my ADSL connection with my SO. This was against my provider's ToS at the time.
We used to call the 1.44MB (3.5 inch) disks stiffies, since they are rigid, while the physically bigger disks we referred to as floppies.
And they used to fail all the time, especially when you had something that spanned more than a single disk.
> We used to call the 1.44MB (3.5inch) disk stiffies
Are you from South Africa? I understand it was the standard slang name there -- and nowhere else, because of the double entendre.
I am indeed. Very specific knowledge of South Africa you have there :)
Is that name used with an eyebrow raised, or did that particular double entendre not make it out of the UK?
My level of English was very basic during the age of stiffies, so that double entendre never occurred to me at the time
So, about twenty people still have hardware to run this? I respect the work regardless.
The original software for the ISS (space station) was stored on a single floppy disk. Not sure about density but one of the engineers told me.
Since it's a 1.44M image I assume they use 3.5" diskettes. The terms floppy and diskette are used as synonyms today, but the different names make sense since floppies are flexible and "floppy". Diskettinux?
I was making routers out of old PCs (486s or early Pentiums) with 2 network cards (3Com or NE2000) back in 2000, with floppies and CoyoteLinux. Installed tens of them in students' houses.
I was hoping someone would mention CoyoteLinux. It was my residential router for several years in the early 2000s. My 'disaster recovery plan' consisted of a second floppy disk (which fortunately I never had to use).
I thought Linux dropped driver support for real floppy drives. Did that not happen, or am I missing something?
Someone was still working on some minor cleanups in August 2025: https://lore.kernel.org/lkml/20250825163545.39303-1-andriy.s...
(That mail also mentions the floppy driver is "basically orphaned" though. But evidently it's still there and builds.)
Maybe you're thinking of the floppy tape (ftape) driver, which was removed back in the 2.6.20 kernel. Though there's a project keeping an out-of-tree version of it working with recent kernels at https://github.com/dbrant/ftape
Don't think so? Linux should still support almost all builtin motherboard floppy controllers, for the platforms it still runs on. ISA floppy controller support is probably not as comprehensive, but not because anything has been dropped.
Huh, yeah looks like I misremembered.
No but I find this line interesting:
The Linux kernel drops i486 support in 6.15 (released May 2025), so 6.14 (released March 2025) is the latest version with full compatibility.
Any chance of backporting changes to be able to run on older hardware?
https://kernel.org/ says 6.12 is still a supported LTS, so you could just run that.
I remember the days when Linux came on 50 floppies.
If I recall correctly, when you wanted all of the bells and whistles that Slackware had to offer, it required 72 floppy disks.
MGR on Sun hardware probably could have come close
Ok now I need to buy a floppy drive and floppy disks in 2026? Hmm where should I go to buy those?
What's a floppy?
https://en.wikipedia.org/wiki/Floppy_disk
It's basically what people used before USB sticks. But it was also the storage medium that software was sold on, before CD-ROMs became widespread.
Floppy is a race of robotic jackalopes, known for their floppy ears. A "Single Floppy" is a rare subset of that species where only one ear flops down due to a random mutation of their hardware.
Embedding Linux has the characteristic of making the single floppy highly territorial and aggressive.
It's a real-life 3D save icon.
Bring back Slackware?
It never went anywhere.
Ok, impressive, but - why? No current computer has a floppy disk drive anymore. The web page claims building such a disk is a learning exercise, but the knowledge offered is pretty arcane, even for regular Linux users. Is this pure nostalgia?
If you have to ask why this is not for you. Why climb a mountain that’s already been climbed hundreds of times? For the challenge.