I've used many of these small Linux distros. I used to have Tiny Core in a VM for different things.
I also like SliTaz: http://slitaz.org/en, and Slax too: https://www.slax.org/
Oh, and Puppy Linux, which I could never get into but was good for live CDs: https://puppylinux-woof-ce.github.io/
And there's Alpine too.
Puppy was the first Linux distro I ever tried since it was such a small download (250ish MB) and I had limited bandwidth. Good memories.
Wondering what's your typical usage for those small distros?
> I also like SliTaz
Thank you for this reminder! I had completely forgotten about SliTaz; looks like I need to check it out again!
Wow, Slax is still around and supports Debian now too? Thanks for sharing.
I used to use it during the netbook era, was great for that.
I love lightweight distros. QNX had a "free as in beer" distro that fit on a floppy, with Xwindows and modem drivers. After years of wrangling with Slackware CDs, it was pretty wild to boot into a fully functional system from a floppy.
I never understood why that QNX desktop didn't take off instantly, it was amazing!
That famous QNX boot disk was the first thing I thought of when reading the title as well.
Me too! And the GUI was only a 40KB distribution and was waaaaaay better than Windows 3.0!
In around 2002, I got my hands on an old 386 which I was planning to use for teaching myself things. I was able to breathe life into it using MicroLinux. Two superformatted 1.44MB floppy disks and the thing booted: basic kernel, 16-colour X display, C compiler, and an editor.
I don't know if there are any options for older machines other than stripped-down Linux distros.
Not to disrespect this, but it used to be entirely normal to have a GUI environment on a machine with 2MB of RAM and a 40MB disk.
Or 128K of RAM and a 400KB disk, for that matter.
A single 1920x1080 framebuffer (which is a low-resolution monitor in 2025, IMO) is over 2MB even at 8 bits per pixel, and about 8MB at the usual 32bpp. Add any compositing into the mix for multi-window displays and it literally doesn't fit in that machine's memory.
It's so much fun working with systems with more pixels than RAM, though. Manually interleaving interrupts. What joy.
I had a 386 PC with 4MB of RAM when I was a kid, and it ran Windows 3.1 with a GUI, but that also had a VGA display at 640x480 and only 16 colors (4 bits per pixel). So 153,600 bytes for the frame buffer.
640 * 480 / 2 = 150KB for a classic 16-color VGA screen.
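To make the comparison concrete, a quick back-of-the-envelope in Python using the modes mentioned in this thread:

    # Back-of-the-envelope framebuffer sizes for the modes in this thread
    modes = [
        ("VGA, 16 colors", 640, 480, 4),        # 4 bits per pixel
        ("1080p, true color", 1920, 1080, 32),  # 32 bits per pixel
    ]
    for name, w, h, bpp in modes:
        size = w * h * bpp // 8                 # bytes
        print(f"{name}: {size:,} bytes (~{size // 1024} KB)")
    # VGA, 16 colors: 153,600 bytes (~150 KB)
    # 1080p, true color: 8,294,400 bytes (~8100 KB)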
Do you really need the framebuffer in RAM? Wouldn't that be entirely in the GPU RAM?
To put it in GPU RAM, you need GPU drivers.
For example, NVIDIA GPU drivers are typically around 800M-1.5G.
That math actually goes wildly in the opposite direction for an optimization argument.
Doesn't the UEFI firmware map a GPU framebuffer into the main address space "for free" so you can poke raw pixels over the bus? Then again the UEFI FB is only single-buffered, so if you rely on that in lieu of full-fat GPU drivers then you'd probably want to layer CPU framebuffers on top anyway.
Well, if you poke framebuffer pixels directly you might as well do scanline racing.
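For anyone who wants to try it, here is a minimal Python sketch of poking raw pixels on Linux. It assumes a /dev/fb0 device (on UEFI machines this is often the efifb the firmware set up) with a hardcoded 1920x1080, 32bpp geometry; real code should query the actual mode with the FBIOGET_VSCREENINFO ioctl instead:

    import mmap, os

    WIDTH, HEIGHT, BYTES_PP = 1920, 1080, 4     # assumed geometry, not queried
    fb = os.open("/dev/fb0", os.O_RDWR)
    buf = mmap.mmap(fb, WIDTH * HEIGHT * BYTES_PP)

    def put_pixel(x, y, bgrx=b"\x00\x00\xff\x00"):  # red, in BGRX byte order
        off = (y * WIDTH + x) * BYTES_PP
        buf[off:off + BYTES_PP] = bgrx

    for x in range(200):                        # short red line near the top
        put_pixel(x, 10)

    buf.close()
    os.close(fb)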
> NVIDIA GPU drivers are typically around 800M-1.5G.
They also pack in a lot of game-specific optimizations for whatever reason. Could likely be a lot smaller without those.
Even the open source drivers without those hacks are massive. On Nvidia, each card family has its own firmware blob of almost 100MB that runs on the card.
The VGA standard supports up to 256KB of video memory.
Aren’t you cheating by having additional RAM dedicated exclusively to GPU use? :)
When I first started using QNX back in 1987/88 it was distributed on a couple of 1.4MB floppy diskettes! And you could install a graphical desktop that was a 40KB distribution!
Yeah, but those platforms were not 64-bit.
64-bit generally adds about 20% to the size of executables compared to 32-bit on x86, so it's not that big of a change.
The Acorn Archimedes had the whole OS on a 512KB ROM.
That said, OSs came with a lot less stuff then.
If what's missing is a lot of things not needed for the specific use case, that is still a big plus.
Those GUIs were defined manually by pixel coordinates; having more flexible GUIs that could autoscale and do other snazzy things made everything really "slow" back then.
Sure, we could go back... Maybe we should. But there's a lot of stuff we take for granted today that wasn't available back then.
I would like to have this again
I prefer to use additional RAM and disk for data, not code.
To think that the entire distro would fit in a reasonable LLC (last-level cache)...
> Or 128K of RAM and a 400KB disk, for that matter.
Or 32K of RAM and a 64KB disk, for that matter.
What's your point? That the industry and what's commonly available gets bigger?
This is cool. My first intro to a practical application of Linux in the early 2000s was using Damn Small Linux to recover files off of cooked Windows machines. I looked up the project the other day while reminiscing and thought it would be interesting if someone took a real shot at reviving the spirit of the project.
I have an older laptop with a 32-bit processor and found that Tiny Core Linux runs well on it. It has its own package manager that was easy to learn. This distro can be handy in these niche situations.
Similar situation here. I have some old 32-bit machines that I'm turning into writer decks. Most Linux distros have left 32-bit behind, so you can't just use Debian or Ubuntu, and a lot of distros that aim to run on lower-end hardware are Ubuntu derivatives.
Another small one is the xwoaf (X Windows On A Floppy) rebuild project 4.0 https://web.archive.org/web/20240901115514/https://pupngo.dk...
Showcase video https://www.youtube.com/watch?v=8or3ehc5YDo
iso https://web.archive.org/web/20240901115514/https://pupngo.dk...
2.1MB, 2.2.26 kernel
>The forth version of xwoaf-rebuild is containing a lot of applications contained in only two binaries: busybox and mcb_xawplus. You get xcalc, xcalendar, xfilemanager, xminesweep, chimera, xed, xsetroot, xcmd, xinit, menu, jwm, desklaunch, rxvt, xtet42, torsmo, djpeg, xban2, text2pdf, Xvesa, xsnap, xmessage, xvl, xtmix, pupslock, xautolock and minimp3 via mcb_xawplus. And you get ash, basename, bunzip2, busybox, bzcat, cat, chgrp, chmod, chown, chroot, clear, cp, cut, date, dd, df, dirname, dmesg, du, echo, env, extlinux, false, fdisk, fgrep, find, free, getty, grep, gunzip, gzip, halt, head, hostname, id, ifconfig, init, insmod, kill, killall, klogd, ln, loadkmap, logger, login, losetup, ls, lsmod, lzmacat, mesg, mkdir, mke2fs, mkfs.ext2, mkfs.ext3, mknod, mkswap, mount, mv, nslookup, openvt, passwd, ping, poweroff, pr, ps, pwd, readlink, reboot, reset, rm, rmdir, rmmod, route, sed, sh, sleep, sort, swapoff, swapon, sync, syslogd, tail, tar, test, top, touch, tr, true, tty, udhcpc, umount, uname, uncompress, unlzma, unzip, uptime, wc, which, whoami, yes, zcat via busybox. On top you get extensive help system, install scripts, mount scripts, configure scripts etc.
The site doesn't have HTTPS and there doesn't seem to be any mention of signatures on the downloads page. Any way to check it hasn't been MITM'd?
https://github.com/tinycorelinux
Not foolproof. Could compute MD5 or SHA256 after downloading.
And compare it against what?
EDIT: nevermind, I see that it has the md5 in a text file here: http://www.tinycorelinux.net/16.x/x86/release/
Which is served from the same insecure domain. If the download is compromised you should assume the hash from here is too.
An integrity check is better than nothing, but yes it says nothing about its authenticity.
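FWIW, checking a local download against that published MD5 is only a few lines. The exact filenames below are an assumption; check the release directory for the real names:

    import hashlib

    iso = "TinyCore-current.iso"                  # assumed local filename
    # md5sum format is "<hash>  <file>", so take the first field
    expected = open(iso + ".md5.txt").read().split()[0]

    h = hashlib.md5()
    with open(iso, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    print("OK" if h.hexdigest() == expected else "MISMATCH")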
You can use this site
https://distro.ibiblio.org/tinycorelinux/downloads.html
And all the files are here
https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/
Under an HTTPS connection. I am not at a terminal to check the cert with OpenSSL.
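In the meantime, here's a rough stdlib-only equivalent of that OpenSSL check: it completes a TLS handshake, validating the chain and hostname against the system trust store, then prints the certificate's subject and expiry:

    import socket, ssl

    host = "distro.ibiblio.org"
    ctx = ssl.create_default_context()            # verifies chain + hostname
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()              # only set once verification passed
            print("subject:", cert["subject"])
            print("expires:", cert["notAfter"])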
I don’t see any way to check the hash OOB
Also this same thing came up a few years ago
https://www.linuxquestions.org/questions/linux-newbie-8/reli...
Is that actually Tiny Core? It's _likely_ it is, but that's not good enough.
> this same thing came up a few years ago
Honestly, that makes this inexcusable. There are numerous providers of free SSL certificates, and if that's antithetical to them, they can use a self-signed certificate and provide an alternative method of verification (e.g. via mailing list). The fact that they don't take this seriously means there is zero chance I would install it!
Honestly, this is a great use for a blockchain…
I usually only install these toy distros on something like a Raspberry Pi or a VM.
An integrity check where both the file you're checking and the hash you're checking it against come from the same place is literally not better than nothing if you're trying to prevent downloading compromised software. It'd flag corrupted downloads at least, so that's cool, but for security purposes the hash for an artifact has to be served OOB.
It is better than nothing if you note it down. You can compare it later, if somebody (or you) gets compromised, to see whether you had the same download as everyone else.
It’s not better than nothing - it’s arguably worse.
There is a secure domain to download from as a mirror. For extra-high security the hash should be delivered OOB, like on a mailing list, but it isn't.
Ideas to decrease the risk of MITM (a rough sketch follows below):
Download from at least one more location (like some AWS/GCP instance) and checksum.
Download from the Internet Archive and checksum:
https://web.archive.org/web/20250000000000*/http://www.tinyc...
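Sketching the multi-location idea in Python; the mirror URLs are the ones from this thread, and the exact ISO filename is an assumption:

    import hashlib, urllib.request

    urls = [
        "http://www.tinycorelinux.net/16.x/x86/release/TinyCore-current.iso",
        "https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/TinyCore-current.iso",
    ]

    def sha256_of(url):
        h = hashlib.sha256()
        with urllib.request.urlopen(url) as resp:
            for chunk in iter(lambda: resp.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    digests = [sha256_of(u) for u in urls]
    for u, d in zip(urls, digests):
        print(d, u)
    print("MATCH" if len(set(digests)) == 1 else "MISMATCH")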
Does it run Docker?
This would be perfect if it had an old Mac OS 7 Platinum-like look and window shading.
Why use a terminal when you can use punch cards?
Because this is now a paperless office, please consider the environment.
Switching away from clay tablets was a step in the wrong direction.
Those clay tablets needed to be fired and had a negative carbon footprint.
Handmade parchment, or leather carvings if you don’t mind.
Those both require cattle farming, which is a major contributor to greenhouse gas emissions (at least 10%, perhaps as high as 19.6%, per https://thebreakthrough.org/issues/food-agriculture-environm...). Stick with papyrus or wax, please.
Why use a computer when you can use a slide rule?
Looks really nice, I like the idea.
But can they please empower a user interface designer to simply improve the margins and padding of their interface? With a bunch of small improvements it would look significantly better. Just fix the spacing between buttons, borders, and other UI elements.
Modern UX trends are a scourge of excessive whitespace and low information density that get in the way of actually accomplishing tasks.
Any project that rejects those trends gets bonus points in my book.
I sympathize, but I feel compelled to point out that the parent didn’t say that the interface had to look like a contemporary desktop.
In my opinion, the Tiny Core Linux GUI could use some more refinement. It seems inspired by 90s interfaces, but compared to the interfaces of the classic Mac OS, Windows 95, OS/2 Warp, and BeOS, there's more work to be done on the fit-and-finish of the UI, judging by the screenshots.
To be fair, I assume this is a hobbyist open source project where the contributors spend time as they see fit. I don’t want to be too harsh. Fit-and-finish is challenging; not even Steve Jobs-era Apple with all of its resources got Aqua right the first time when it unveiled the Mac OS X Public Beta in 2000. Massive changes were made between the beta and Mac OS X 10.0, and Aqua kept getting refined with each successive version, with the most refined version, in my opinion, being Mac OS X 10.4 Tiger, nearly five years after the public beta.
If you look at the screenshots it immediately jumps out that it is unpolished: the spacing is all over the place, and the window maximize/minimize/close buttons have different widths and weird margins.
I thought that would be immediately clear to the HN crowd but I might have overestimated your aesthetic senses.
There is a balance.
Too much information density is also disorienting, if not stressful. The biggest problem is finding that balance across multiple kinds of users, and even individuals.
Look at the screenshots -> wallpaper window. The spacing between elements is all over the place and it simply looks like shit. Seeing this, I have doubts about whether the team that did this is competent at all.
Exactly.
I know that not everybody has spent 10 years fiddling with CSS, so I can understand why a project might have a skill gap with regard to aesthetics. I'm not trying to judge their overall competence; I just wanted to say that there are so many quick wins in the design that it hurts me a bit to see it. And due to the nature of open source projects, I was talking about "empowering" a designer to improve it, because oftentimes you submit a PR for aesthetic improvements and then notice that the project leaders don't care about these things, which is sad.
This just looks like a standard _old_ *nix project. I've used Tiny, a couple of decades ago IIRC, from a magazine cover CD.
Consider the sign-off date of 2008, the lack of very-simple-to-apply mobile CSS, and the absence of HTTPS to secure the downloads (if it had it, it would probably be SSL).
This speaks to me of a project that's 'good enough', or abandoned, for/by those who made it. Left out to pasture as 'community dev submissions accepted'.
I've not bothered to look, but wouldn't surprise me if the UI is hardcoded in assembly and a complete ballache to try and change.
One could argue that visible borders are a feature, not a bug.
If you are trying to maximize for accessibility, that is.
It's not about the damn borders, it's about the spacing between the buttons and other UI elements, as you can see in the screenshot. I don't want them to introduce some shitty modern design, just fix the spacing so it doesn't immediately jump out as odd and unpolished.
Pretty sure it was not about the presence of visible borders, but about missing spacing between borders and buttons. That's on some screenshots, but not others. It's not like this UI has some high-density philosophy; it's just very inconsistent.