Very related but self promotional—I have a hobby business selling restored Mac mini G4s. I clean all of them internally, upgrade them with 128 GB SSDs, max them out at 1 GB of RAM, put in a new clock battery, and pre-install the Mac OS 9 Lives hacked version of Mac OS 9 that runs on them. You can buy one from me here:
https://os9.shop
I don't think I'll start pre-installing System 7 since most of my customers are using Mac OS 9 (and the domain is os9.shop!), but you could certainly get a machine from me with Mac OS 9 and install System 7 yourself if you so desire.
My customers have included a lot of real businesses running legacy software who want the fastest, least intrusive, and least energy intensive Mac OS 9 desktop machine they can buy. I've sold to dentists, veterinarians, museums, and auto repair stores. You'd be amazed how many people are running Classic Mac software in 2025.
> It is also my opinion Mac OS 9.2.2 is the greatest OS, and Mac OS, ever, but not everything that is possible in earlier Mac OS versions is possible in Mac OS 9.2.2.
I had fun with HyperCard on MacOS 9. At work, even. The boss was into rapid prototyping, and I cooked up some damn productive stacks in a hurry.
It runs on the Cube and under OS 9 emulation on the new stuff.
HyperCard scripters did cool things that most users don't do today. And without those monster data centers.
Back when Java was the NextBigLanguage, we built Java development tools at KL Group/Sitraka (now a part of Quest). For version 2 of the suite of tools, we were getting rid of the nerdy configuration text file and planned on shipping a configuration wizard (yes, we called them wizards while fondling the onions we tied to our belts).
I was the Program Manager, and as usual we were very tightly constrained for time, and in the era of golden master DVDs that had to be ready to distribute at JavaOne in the Moscone Centre... Hard decisions had to be made. The team decided to work on more important features, and drop the configuration wizard from 2.0. Then I did what everyone knows is a no good, very bad, terrible thing. And although I got away with it that time, it's still a no good very bad, terrible thing:
I took my work computer home for the weekend and fired up a HyperCard "compiler" called Runtime Revolution (now LiveCode: https://en.wikipedia.org/wiki/LiveCode_(company)) that could make executables for Windows and Unix. Come Monday morning, we had a shippable configuration wizard. Leadership blew its top, because one of their values was, "We're a Java shop, which means we use Java to write Java tools." And after I left the company, they rewrote the configuration wizard in Java Swing.
To this day I consider firing up Electron and a complete React framework for simple tools to be a "Turing Tarpit," a place where absolutely anything you imagine is possible, but nothing of interest (in the domain of simple tools) is easy.
Not only that, everything felt _snappy_. No wasteful animations to add 0.28 ms to every interaction.
Oh, gotta be super snappy on a Mac mini G4!
Yeah, when a coworker and I showed my wife the first OS X preview, she was alarmed at how long it took to shut down (I mean System 7 shut down like you just kicked the cord out). "You'll have to find something else to like about it," was my coworker's response.
And to be sure, there was/is a lot to like about OS X.
But, probably because of the lack of a kernel, etc., System 7 sits somewhere in that nether/middle region on our personal computer journey. Its rich library of functions (the Toolbox) set it apart from machines before it that might have instead had a handful of assembly routines you could "CALL" in BASIC to switch display modes, clear the screen, etc. But, as Amiga owners often reminded the Mac community back in the day, no "true" preemptive multitasking…
I should say too, regarding programming, these days your ability to write safe, threaded code is probably the highest virtue to strive for, and the hardest to perfect — at least for me (so hard to wrap my head around). It seems to separate the hacks (in the negative sense) from the programming gods. I think wistfully of those simpler times when managing memory well and handling errors returned from the system API gracefully were the only hurdles.
"You can’t simply add a lock here, because this function can be called while the lock is already held. Taking the same lock again would cause a deadlock…"
"The way you've implemented semaphores can still allow a potential race condition. No, I have no idea how we can test that scenario, with the unit tests it may still only happen once in a million runs—or only on certain hardware…"
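The first of those two war stories (the re-entrancy deadlock) is easy to demonstrate in a few lines of Python. This is a toy sketch, not code from any project mentioned here, and `try_reacquire` is a made-up helper:

```python
import threading

def try_reacquire(lock):
    """Take the lock, then attempt to take it again, as a nested call might."""
    with lock:
        # A plain Lock is not re-entrant: this second acquire would block
        # forever, so use a timeout to observe the would-be deadlock.
        acquired = lock.acquire(timeout=0.2)
        if acquired:
            lock.release()
        return acquired

print(try_reacquire(threading.Lock()))   # False: the nested acquire times out
print(try_reacquire(threading.RLock()))  # True: an RLock can be re-taken by its owner
```

Swapping in an `RLock` only papers over the problem, of course; the reviewer's real point is that lock acquisition order is part of a function's contract.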
(Since I have retired, I confess my memory of those hair-pulling days is getting fuzzier—thankfully.)
There are plenty of ways to write multi-threaded code these days, from actors to coroutines at the programming-interface level to using green threads directly in Go or Java. There is very little reason to resort to using locks, mutexes, or semaphores outside of frameworks designed to make multi-threading easier, or very specific high-performance code. (Where in the latter case it could be argued that multi-threading probably adds unreasonable latency and context switching.)
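As a concrete sketch of that message-passing style (in Python rather than Go or Java; the "actor" here is a hypothetical toy, not any particular framework): one thread owns the state and reacts only to messages on a queue, so no lock is ever taken.

```python
import queue
import threading

def counter_actor(inbox, results):
    """A minimal 'actor': owns its state, reacts only to queued messages."""
    count = 0
    while True:
        msg = inbox.get()
        if msg is None:            # sentinel: report final state and stop
            results.put(count)
            return
        count += msg               # only this thread ever touches `count`

inbox, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=counter_actor, args=(inbox, results))
worker.start()
for _ in range(100):
    inbox.put(1)                   # any number of producers may send safely
inbox.put(None)
worker.join()
print(results.get())               # 100
```

The thread-safety lives entirely inside `queue.Queue`; the application code has no locks to order, and so nothing to deadlock on.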
I don't understand why it takes 5 seconds for Chrome to open on my MBP while it's near instant on my Linux and Windows PC.
Why is everything so slow on new MacOS?
It’s not everything, it’s just Chrome. Chrome is 1.6GB including all its dependencies. It’s going to be slow to start on any system if those dependencies aren’t preloaded.
Most Mac software I use (I don’t use Chrome) starts quickly because the dependencies (shared libraries) are already loaded. Chrome seems to have its own little universe of dependencies which aren’t shared and so have to be loaded on startup. This is the same reason Office 365 apps are so slow.
I'm running the latest MacOS right now on a modest m4 Mini and it doesn't seem slow to me at all. I use Windows for gaming and Linux for several of my machines as well and I don't "feel" like MacOS is slow.
In any case, Chrome opens quickly on my Mac Mini: under a second when I launch it by clicking its icon in my Dock or from Spotlight (which is my normal way of starting apps). When Chrome is idle with no windows, opening Chrome seems even faster, almost instant.
This made me curious so I tried opening some Apple apps, and they appear to open about the same speed as Chrome.
Gui applications like Chrome or Keynote can be opened from a terminal command line using the open command so I tried timing this:
$ time open /Applications/Google\ Chrome.app
which indicated that open was finished in under 0.05 seconds total. So this wasn't useful because it appears to be timing only part of the time involved with getting the first window up.
Somewhere around 2011, when I switched my MBP to an SSD (back when you could upgrade the drives and memory yourself), Chrome opened in 1-2 bounces of the dock icon instead of 12-14 seconds.
People used to make YouTube videos of their Mac opening 15 different programs in 4-5 seconds.
Now, my Apple Silicon MacBook Air is very, very fast but at times it takes like 8-9 seconds to open a browser again.
I loved the MBPs from that era. That was my first (easy) upgrade as well, in addition to more memory. Those 5400 RPM hard drives were horrible. Another slick upgrade you could do back then was to swap out the SuperDrive for a caddy to get a second SSD/HDD.
It still works fine today, though I had to install Linux on it to keep it up to date.
Do you by chance still run an Intel version of Chrome on an Apple Silicon device?
Our work laptops have antivirus and other verification turned on which impose a 4-16x penalty on IO.
The cpu, memory, and ssd are blazing fast. Unfortunately they are hamstrung by bad software configuration.
It's always been that way. Even when I had a maxed-out current-gen Mac Pro in 2008, it still launched and ran faster in Windows than in MacOS.
I have seen people suggesting that it's because of app signature checks choking on Internet slowness, but 1. those are cached, so the second run should be faster, and in non-networked instances the speed is unchanged, and 2. I don't believe those were even implemented back in 2002 when I got my iMac G4, and it was likewise far quicker in Linux than in OS X.
At the time (2002), I joked that it was because the computer was running two operating systems at once: NeXTSTEP and FreeBSD.
MacOS 9 was awful, a product of a rather unpleasant era for Apple really. I wanna say through 9.2.1 maybe even through to 9.2.2 the OS had a nasty habit of corrupting your disk. Hardware-wise Apple used CMD64x based IDE controllers so when OS9 wasn't screwing with your data the hardware itself would.
There absolutely were animations e.g. when closing a Finder window, but they were much lighter weight. As far as I'm concerned System 7 was probably the zenith.
I'd rather say the zenith was 8.1 which was not very widely used. 8.5 did add some nice gimmicks like the app switcher palette but for some reason it felt way slower than 8.1.
Mac OS 9 was Apple's Windows ME: too many side-ports of new features into the rickety legacy core OS (Win32 / Toolbox Mac OS) and not enough attention paid to detail, since the Next Big Thing was already cooking (XP / OS X).
Mac OS 9 was certainly not rock solid as far as crashes were concerned, but very much better than System 7; that was clear to me. Maybe it is my rose-tinted glasses colouring my memory, but I also remember there being far fewer small bugs, you know, the just-annoying kind, than I have today with macOS 15. There may be fewer hard crashes now, but the number of paper cuts has increased by many orders of magnitude.
I remember it crashing a lot but maybe that's because I came of age around the OS 8/9 era. IIUC OS 9 had no memory protection so it's not exactly a surprise it was fragile.
Well, I got my B&W G3 because MacOS 9 lunched the filesystem as it was prone to doing. SCSI drive so it wasn't that other disk corruption fun (which I went through in PC land). As far as I'm concerned MacOS 9 was mostly a bunch of paper cuts glued together. Lots of stuff that would've demoed in OSX if Apple had the time and patience.
So yeah, Apple had tacked on vestigial multi-user support, an automatic system update mechanism, USB support, etc., etc., but underneath it was still the same old single-user, cooperatively multitasked, no-memory-protection OS as its predecessors. Unlike OSX, MacOS 9 (like 7 and 8 before it) still relied on the Toolbox, which was a mishmash of m68k and ppc code.
To me it’s the opposite: System 7 crashed all the time and MacOS 9 was rock solid. System 7 was a mess until 7.6, at which point it was basically MacOS 8. And the UI was way more pleasing; the System 7 one had an '80s vibe to me.
7.6.x was pretty cool
[deleted]
W95 and W98 weren't much better until W98SE. Linux distros were rough but mega-stable.
Win98 was head and shoulders above System 9, from a stability perspective. It had protected memory, actual preemptive multitasking, a somewhat functional driver system built on top of an actual HAL, functional networking, etc, etc.
To be clear, Win98 was a garbage fire of an OS (when it came to stability); which makes it so much worse that Mac OS 8-9 were so bad.
98's multitasking and memory 'protection' were a joke. On the same mid-to-high-end machine for the era, 2000 and XP were miles ahead of W98 under mid-to-high load.
Maybe not on a Pentium, but once you hit 192MB of RAM and some 500 MHz P3/AMD k7, NT based OSes were tons better.
You only realized that upon opening a dozen IE windows. W98, even SE, would sweat; 2k would fly.
On single tasks, such as near-realtime multimedia (emulators/games or video players with single-threaded decoders), W98 would be better. On multiprocessing/threading, W98 crawled against W2K even on P4s with 256 MB of RAM.
System 6 had menu blinks, zoom animations (with rect XORs no less), and button blinks when you used keyboard completion. Mac was the original "wasteful animation" OS.
That XOR effect existed under FVWM too, for moving and resizing windows, and doing an XOR wireframe was MUCH faster than a full repaint.
If you had no X11 acceleration (xvesa, for instance), that mode was orders of magnitude faster than watching your whole browser window repaint on a resize, which could last more than 3 seconds on a Pentium.
HyperCard is one of my all time favourite memories of Mac OS.
I like System 6: the most complete version of the “real” classic Mac OS before System 7 started to be more “modern.” Dead simple, not a lot of new abstractions and metaphors layered on.
I kind of wish there was a version of System 6 without MultiFinder. Classic Mac OS clearly wasn’t built with multi-tasking in mind.
You could turn off Multifinder in System 6, no problem. It wasn't until System 7 that it was fully baked-in.
FYI, Trello (or one of the many clones of it) can be used for similar purposes.
I enjoyed how quick it was on my G4 iMac (Mac OS X 10.1/10.2 was a total dog) but it was never stable enough for my liking. Forced to choose between fast and unstable (OS9) or slow and steady (OS X), I chose to install Yellow Dog Linux instead (reject the premise).
> In my case, first I tried using the latest Python 3.13.9 both from Windows 7 (bad idea due to resource fork loss) and macOS 10.14.6 Mojave, but neither worked: it seems like that version of Python was just too new. I then retried with Python 3.8.10 instead (which I chose thinking it might be more period-appropriate for the script's age) on Mojave, which worked flawlessly.
Ah, classic Python. Removing features [0] and breaking perfectly working software just because the feature is old, ugly, and not widely used.
Max frustrating. If I were writing tbxi again it would be in Go.
Why not C89? Try to make it as portable as possible. The software is intended for preservation of old computers and their software. Would make sense for the software to be as portable as possible.
Who knows, maybe someone would want to run it on vintage Mac hardware?
It already exists. But if anything, Free Pascal with Lazarus for classic Mac PPC would be ideal.
Make that MacSSL port available under FPC and you could compete with the rest.
Misread as “Mac mini M4” and was going to be _very_ impressed.
Honestly this is still pretty insane.
[dead]
StarMax series (and the 4400) seemed to be about as close to CHRP as we got. My off-brand StarMax clone (PowerCity) had a PS/2 and an ISA port. Ran BeOS well, and had a quirk that I could hear a tight loop on the speaker.
AFAIK most StarMax systems that were released (a prototype exists of a CHRP StarMax model) are based on the Tanzania / LPX-40 design, which is mostly a traditional PCI PowerMac[1], albeit with oddities like support for PC style floppy drives. PS/2 is handled by the CudaLite microcontroller which presents it to the OS as ADB devices for example. I've not heard of a version with ISA slots, although I assume you could just have a PCI to ISA bridge chip, even if MacOS presumably wouldn't do anything with it.
Right, I think those were the closest we got to the CHRP standard, as they moved the platform toward PC-style floppies, PS/2, ATX PSU and even more generic "platform" stuff than most clones. I'm fairly sure I had an ISA slot, I do remember trying to get a bargain bin NE2K card working in mine under linux (it didn't work). Definitely did nothing under OS 8/9.
The powercity models were interesting, because they came out after Apple revoked Motorola's clone license. A German company, ComJet, bought up the boards and sold unlicensed clones cheap. Case was slightly different, but otherwise they corresponded to StarMax models (fairly certain they were identical but may have been last revision boards).
Kinda sorta. The systems that the "MacOS on CHRP" thing ran on had a very strange-looking device tree, with some bizarre combination of PC and Mac peripherals.
Refer to the "Macintosh Technology in the Common Hardware Reference Platform" book for more information, if you're curious about the Mac IO pieces.
The Motorola Yellowknife board seems remarkably similar to this system, as well as the IBM Long Trail system (albeit with Long Trail using a VLSI Golden Gate versus a MPC106 memory controller). Both of them use W83C553 southbridges and PC87307 Super I/O controllers.
The architecture is kind of weird, but the schematics on NXP's website can probably elucidate a bit more on the system's design.
A fun "do-it-yourself" question for people who've always wanted to learn about the baroque architecture of the PowerPC Mac and the classic Mac OS: where is hardware support for specific models implemented?
In concentrically encrusted layers
I have an iMac G4 1.25 GHz. Originally, it was a 1GHz, but I swapped out the motherboard for a later model. For a while I've been wondering if I would have been better off with an earlier motherboard capable of booting OS 9 natively. Compared with using OS X's classic mode, this would omit the overhead of running a whole other OS and leave me with more resources to run OS 9 apps and games. I don't get a whole lot of use out of the earlier OS X software that I have on there...
Maybe in the future I won't have to make that choice! I'd much rather dual boot OS 9 off a different partition, but that hasn't been supported on the 1-1.25GHz models (Thanks Steve...) and no one has gotten it working properly. Maybe now it will be possible! A man can dream...
9 has been possible on that board for years now. No internal speaker but the headphone jack works.
This is really cool, the kind of content great to see here.
That's impressive, but early MacOS was pretty awful UX; I think the UI thread was everything.
I remember clicking and waiting.
I remember that yes, expensive operations could take a while, but the interface was much faster than my M1 Max Studio's, for the sole reason that you actually do not have to wait for animations.
And not just because animations were sparse; they also never blocked input. For example, if you could see where a new element would appear, you could click there DURING the animation and start e.g. typing, and no input would be lost. That meant apps you used every day and became accustomed to would just zip past at light speed, because there was no do-wait, do-wait pipeline.
The animations were there, but they were frame-based with the number of frames carefully calculated to show UI state changes that were relevant. For example, when you would open a folder, there would be an animation showing a window rect animating from the folder icon into the window shape, but it would be very subtle - I remember it being 1 or 2 intermediate frames at most. It was enough to show how you get from "there" to "here" but not dizzyingly egregious the way it became in Aqua.
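That frame-based zoom can be sketched as plain rectangle interpolation. This is a guess at the idea, not the actual QuickDraw code; the rect format, the example coordinates, and the two-frame count are all assumptions.

```python
def zoom_rects(src, dst, frames=2):
    """Yield `frames` intermediate (left, top, right, bottom) rects
    between the source rect (e.g. a folder icon) and the destination
    rect (the opened window)."""
    for i in range(1, frames + 1):
        t = i / (frames + 1)
        yield tuple(round(a + (b - a) * t) for a, b in zip(src, dst))

icon = (40, 40, 72, 72)       # hypothetical folder icon bounds
window = (100, 60, 500, 400)  # hypothetical opened window bounds
for rect in zoom_rects(icon, window):
    print(rect)               # two in-between outlines to draw and erase
```

Drawing each outline, erasing it, and moving on costs almost nothing compared to compositing real window contents at every step, which is presumably why it felt subtle rather than slow.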
Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie or other former NeXT seniors post-acquisition) noticed that with dynamic loading, hard drive speeds, and the ubiquitous dynamic dispatch of ObjC, OSX would just be extremely, extremely slow. So they probably conjured a scheme to show fancy animations and woo everyone with visual effects to conceal that a bit. Looney-town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.
There were also quite a few tricks used all the way from the dithering/blitting optimizations on the early Macs. For example, if you can blit a dotted rect for a window being dragged instead of buffering the entire window, everything underneath, the shadow mask - and then doing the shadow compositing and the window compositing on every redraw - you can save a ton of cycles.
You could very well have do-wait-do-wait loops when custom text compositing or layout was involved and not thoroughly optimized - like in early versions of InDesign, for instance - but it was the exception rather than the rule.
> Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie or other former NeXT seniors post-acquisition) noticed that with dynamic loading, hard drive speeds, and the ubiquitous dynamic dispatch of ObjC, OSX would just be extremely, extremely slow. So they probably conjured a scheme to show fancy animations and woo everyone with visual effects to conceal that a bit. Looney-town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.
Done exactly this myself to conceal ugly inconsistent lags - I don’t think it is that uncommon an idea.
My most durable memory is all the reboots due to programs crashing. Didn't help that a null pointer deref required a system reboot - or that teenage me was behind the keyboard on that front.
More the fault of RAM measured in MB and HDDs being quite slow, to be honest.
> I think the UI thread was everything.
How would you have done it?
Preemption is a very nice OS feature, it turns out (particularly once multi-core rolled around). Still, I recall OS 8 and 9 being generally snappier than Windows 98 (and a lot snappier than early builds of OSX).
How does preemption work on a processor that barely has interrupts and has no way to recover state after a page fault, in an OS that has to fit into a couple dozen kilobytes of ROM?
The Amiga had preemptive multithreading with multiple task priorities on the original MC68000. Preemption is distinct from memory protection or paging.
There were plenty of preemptive multitasking systems for the original 68000, and regardless page fault recovery was fixed from the 010 onwards.
And certainly was very not a problem on PowerPC which TFA is about.
Also, not sure how you can say the 68000 "barely has interrupts"; I don't even know what you're on about.
MacOS was broken because Jobs forced it to be that way after he was kicked off the Lisa team, which had a preemptive multitasking operating system on the 68000.
I’ve been waiting for this post.
I run OS 9 on my lamp iMac G4 but now I want to try 7.6.1!
Yes, multiple Macs within arm's reach right now!
++ BBEdit
One of my early Macs was a Performa 638CD with no dedicated FPU. I had upgraded to a Performa 6400 (which felt like an absolute dog despite its size) but finally had an opportunity to move to the PowerComputing PowerTower Pro 225. What a beast! I hate to say it, but it was probably my favorite Mac I'd ever owned before the first iMac.
The Megahertz wars in the 1990s made it really difficult to understand relative performance across even the same ISA like this, and I think computers with the 603 CPU were a bit of a wrench in people's perception of the Mac.
The 180 or 200MHz 603e with 16k L1 cache in that Performa 6400 wasn't slow by any stretch, but it probably didn't have L2 cache. Coupled with the gradual transition to PPC native code of the OS and apps, these machines were often a little mismatched to expectations and realities of the code.
Meanwhile that PowerTower had a 604e with 32/32k L1 and 1MB L2 cache. That was a fast flier with a superscalar and out of order pipeline more comparable to the Pentium Pro and PII.
Oh believe me. I owned it. It felt slow even at the time.
Yup. Recall the far better cycle efficiency of the 100 MHz hyperSPARC.
Consumers didn't grok cycle efficiency, pipeline depth, or branch prediction miss pipeline stall latency.
I have a PowerCenter Pro 210 in my basement right now! It's not quite as nice as the newer architecture in the PowerTower Pro machines, but it runs MacOS 7.6.1 wonderfully. It is more than enough for classic Mac games of that era - and a joy to use.
The later PowerCenter Pros could run with a 60 MHz FSB, whereas the PowerTower Pros were usually 45-50 MHz FSB. There are a variety of tasks where my PowerCenter Pro 240 outruns my PowerTower Pro 250 for precisely that reason.
As a European: classic Macs (and current ones) were just for arts/writing people. If you knew what CMYK was in order to print a newspaper, you were a Mac user.
I emulated Mac OS 7 back in the XP days, and I was impressed that you could get far faster speeds emulating the m68k (and partially the PPC) than you could emulating x86 on itself without a hardware-assist (Intel VT) or kernel modules trapping x86 instructions to run them at native speed.
I mean, PPC and m68k chips were much easier to emulate than x86 itself.
On software: classic Mac users can just resort to IRC and Gopher clients and visit the public https://bitlbee.org IRC servers to connect 'modern' accounts and be proxied through to a Mac IRC client. And for Gopher, you have gopher://hngopher.com, gopher://magical.fish, and the like.
Sadly you don't have an easy TLS library like Amiga users have (AmiSSL), with which even the modern web can work (and IRC over TLS, Gemini...).
Although... if Amiga m68k emulators run fast under the Rosetta-like tech for PPC... you could just fire up Workbench and then AmiSSL. Crude, but it would work. If not, here in the Apple subdir you can get, maybe, some TLS-enabled browsers.
Usenet will work fine without any TLS, and there's tons of content out there.
And scientists.
For some reason european science was full of old school Mac users.
It was because of QuarkXPress and Photoshop. In the same way WordPerfect and Lotus 1-2-3 were dominant for business computers.
I wish someone would try to create a native classic MacOS on x86 hardware.
There are so many Unix- or Linux-ABI-compatible kernels, like the recent Moss written in Rust.
Ardi Executor. There's a recent fork on GitHub. You can run m68k binaries seamlessly. You don't need proprietary MacOS parts, just the software.
But if you are a software preservationist, having a libre option to run legacy media is always good for historical reasons. I am a daily libre-software user, but I emulate ancient machines with proprietary stuff just out of curiosity. As it is not my personal computing device, I find it fine. It's just a historical toy. And, well, if you want to create libre engines for old Mac games (ScummVM, SDL ports...), you surely need to at least emulate the old OSes and run the proprietary game in order to compare the output and correctness.
Also, a "Mac" for x86 already existed: Rhapsody DR2. It could run Classic Mac software and NeXT software too; it was like a blend of the two. OSX is like the NeXTSTEP concept 2.0, with few traces of Mac Classic.
Qemu will run it fine.
Rhapsody DR2 is not a solution for classic Mac OS on x86. Lunduke writes:
"Unfortunately [the Blue Box] was only available on PowerPC versions of Rhapsody"
Another option is Advanced Mac Substitute. It doesn't run everything, but what it does run it runs really well. One of my goals is that you can use a 68K Mac application (e.g. MacPaint) as part of your personal computing workflow, if you wish.
Very related but self promotional—I have a hobby business selling restored Mac mini G4s. I clean all of them internally. upgrade them with 128 GB SSDs, max them out at 1 GB of RAM, put a new clock battery in, and pre-install the Mac OS 9 Lives hacked version of Mac OS 9 that runs on them. You can buy one from me here: https://os9.shop
I don't think I'll start pre-installing System 7 since most of my customers are using Mac OS 9 (and the domain is os9.shop!), but you could certainly get a machine from me with Mac OS 9 and install System 7 yourself if you so desire.
My customers have included a lot of real businesses running legacy software who want the fastest, least intrusive, and least energy intensive Mac OS 9 desktop machine they can buy. I've sold to dentists, veterinarians, museums, and auto repair stores. You'd be amazed how many people are running Classic Mac software in 2025.
> It is also my opinion Mac OS 9.2.2 is the greatest OS, and Mac OS, ever, but not everything that is possible in earlier Mac OS versions is possible in Mac OS 9.2.2.
I had fun with hypercard on MacOS 9. At work, even. The boss was into rapid prototyping, and I cooked up some damn productive stacks in a hurry.
It runs on the Cube and under OS 9 emulation on the new stuff.
Hypercard scripters did cool things that most users don't do today. And without those monster data centers.
Back when Java was the NextBigLanguage, we built Java development tools at KL Group/Sitraka (now a part of Quest). For version 2 of the suite of tools, we were getting rid of the nerdy configuration text file and planned on shipping a configuration wizard (yes, we called them wizards while fondling the onions we tied to our belts).
I was the Program Manager, and as usual we were very tightly constrained for time, and in the era of golden master DVDs that had to be ready to distribute at JavaOne in the Moscone Centre... Hard decisions had to be made. The team decided to work on more important features, and drop the configuration wizard from 2.0. Then I did what everyone knows is a no good, very bad, terrible thing. And although I got away with it that time, it's still a no good very bad, terrible thing:
I took my work computer home for the weekend and fired up a HyperCard "compiler" called Runtime Revolution that could make executables for Windows and Unix. Come Monday morning, we had a shippable configuration wizard. Leadership blew its top, because one of their values was, "We're a Java shop, which means we use Java to write Java tools." And after I left the company, they rewrote the configuration wizard in Java Swing.
https://en.wikipedia.org/wiki/LiveCode_(company)
To this day I consider firing up Electron and a complete React framework for simple tools to be a "Turing Tarpit," a place where absolutely anything you imagine is possible, but nothing of interest (in the domain of simple tools) is easy.
Not only that, everything felt _snappy_. No wasteful animations to add 0.28 ms to every interaction.
Oh, gotta be super snappy on a Mac mini G4!
Yeah, when a coworker and I showed my wife the first OS X preview, she was alarmed at how long it took to shut down (I mean System 7 shut down like you just kicked the cord out). "You'll have to find something else to like about it," was my coworker's response.
And to be sure, there was/is a lot to like about OS X.
But, probably because of the lack of a kernel, etc., System 7 sits somewhere in that nether/middle region on our personal computer journey. It's rich library of functions (the Toolbox) set it apart from machines before it that might have instead had a handful of ASSM routines you could "CALL" in BASIC to switch display modes, clear the screen, etc. But, as Amiga owners often reminded the Mac community in the day, no "true" preemptive multitasking…
I should say too, regarding programming, these days your ability to write safe, threaded code is probably the highest virtue to strive for, hardest to perfect — at least for me (so hard to wrap my head around). It seems to separate the hacks (in the negative sense) from the programming gods. I think wistfully of those simpler times when managing memory well, handling error returned from the system API gracefully were the only hurdles.
"You can’t simply add a lock here, because this function can be called while the lock is already held. Taking the same lock again would cause a deadlock…"
"The way you've implemented semaphores can still allow a potential race condition. No, I have no idea how we can test that scenario, with the unit tests it may still only happen once in a million runs—or only on certain hardware…"
(Since I have retired I confess my memory of those hair-pulling days are getting fuzzier—thankfully.)
There are plenty of ways to multi threaded code these days. From actors to coroutines on the programmatic interface level to using green threads directly in go or Java. There is very little reason to resort using locks, mutexes, or semaphores outside of frameworks designed to make multi threading easier or very specific high performance code. (Where in the latter case it could be argued that multi threaded probably adds unreasonable latency and context switching.)
I don't understand why it takes 5 seconds for Chrome to open on my MBP while it's near instant on my Linux and Windows PC.
Why is everything so slow on new MacOS?
It’s not everything, it’s just Chrome. Chrome is 1.6GB including all its dependencies. It’s going to be slow to start on any system if those dependencies aren’t preloaded.
Most Mac software I use (I don’t use Chrome) starts quickly because the dependencies (shared libraries) are already loaded. Chrome seems to have its own little universe of dependencies which aren’t shared and so have to be loaded on startup. This is the same reason Office 365 apps are so slow.
I'm running the latest MacOS right now on a modest m4 Mini and it doesn't seem slow to me at all. I use Windows for gaming and Linux for several of my machines as well and I don't "feel" like MacOS is slow.
In any case, Chrome opens quickly on my Mac Mini, under a second when I launch it from clicking its icon in my task bar or from spotlight (which is my normal way of starting apps). When Chrome is idle with no windows, opening chrome seems even faster, almost instant.
This made me curious so I tried opening some Apple apps, and they appear to open about the same speed as Chrome.
GUI applications like Chrome or Keynote can be opened from a terminal command line using the open command, so I tried timing this:
which indicated that open finished in under 0.05 seconds total. So this wasn't useful, because it appears to time only part of the work involved in getting the first window up.

Somewhere around 2011, when I switched my MBP to an SSD (back when you could upgrade the drives, and memory, yourself), Chrome opened in 1-2 bounces of the dock icon instead of 12-14 seconds.
People used to make YouTube videos of their Mac opening 15 different programs in 4-5 seconds.
Now, my Apple Silicon MacBook Air is very, very fast but at times it takes like 8-9 seconds to open a browser again.
I loved the MBPs from that era. That was my first (easy) upgrade as well, in addition to more memory. Those 5400 RPM hard drives were horrible. Another slick upgrade you could do back then was to swap out the SuperDrive for a caddy holding a second SSD/HDD.
It still works fine today, though I had to install Linux on it to keep it up to date.
Do you by chance still run an intel version of chrome on an apple silicon device?
Our work laptops have antivirus and other verification turned on which impose a 4-16x penalty on IO.
The cpu, memory, and ssd are blazing fast. Unfortunately they are hamstrung by bad software configuration.
It's always been that way. Even when I had a maxed out current-gen Mac Pro in 2008, it still launched and ran faster in Windows than MacOS.
I have seen people suggesting that it's because of app signature checks choking on Internet slowness, but 1. those are cached, so the second run should be faster, and in non-networked instances the speed is unchanged, and 2. I don't believe those were even implemented back in 2002 when I got my iMac G4, and it was likewise far quicker in Linux than in OS X.
At the time (2002), I joked that it was because the computer was running two operating systems at once: NeXTSTEP and FreeBSD.
MacOS 9 was awful, a product of a rather unpleasant era for Apple really. I wanna say through 9.2.1 maybe even through to 9.2.2 the OS had a nasty habit of corrupting your disk. Hardware-wise Apple used CMD64x based IDE controllers so when OS9 wasn't screwing with your data the hardware itself would.
There absolutely were animations e.g. when closing a Finder window, but they were much lighter weight. As far as I'm concerned System 7 was probably the zenith.
I'd rather say the zenith was 8.1 which was not very widely used. 8.5 did add some nice gimmicks like the app switcher palette but for some reason it felt way slower than 8.1.
Mac OS 9 was Apple Windows ME; too many side ports of new features into the rickety legacy core OS (Win32 / Toolbox Mac OS) and not enough attention paid to detail since the Next Big Thing was already cooking (XP / OS X).
Mac OS 9 was certainly not rock solid as far as crashes were concerned, but very much better than System 7, that was clear to me. Maybe it is my rose-tinted glasses colouring my memory, but I also remember that there were far fewer small bugs, you know, the just-annoying kind, than I have today with macOS 15. There may be fewer hard crashes now, but the number of paper cuts has increased by many orders of magnitude.
I remember it crashing a lot but maybe that's because I came of age around the OS 8/9 era. IIUC OS 9 had no memory protection so it's not exactly a surprise it was fragile.
Well, I got my B&W G3 because MacOS 9 lunched the filesystem as it was prone to doing. SCSI drive so it wasn't that other disk corruption fun (which I went through in PC land). As far as I'm concerned MacOS 9 was mostly a bunch of paper cuts glued together. Lots of stuff that would've demoed in OSX if Apple had the time and patience.
So yeah Apple had tacked on vestigial multi-user support, an automatic system update mechanism, USB support, etc., etc. but underneath it was still the same old single user, cooperative multitasked, no memory protection OS as its predecessors. Unlike OSX, MacOS 9 (like 7 and 8 before it) still relied on the Toolbox which was a mishmash of m68k and ppc code.
To me it’s the opposite: System 7 crashed all the time and MacOS 9 was rock solid. System 7 was a mess until 7.6, at which point it was basically MacOS 8. And the UI was way more pleasing; the System 7 one had an 80s vibe to me.
7.6.x was pretty cool
W95 and W98 weren't much better until W98SE. Linux distros were rough but mega-stable.
Win98 was head and shoulders above System 9, from a stability perspective. It had protected memory, actual preemptive multitasking, a somewhat functional driver system built on top of an actual HAL, functional networking, etc, etc.
To be clear, Win98 was a garbage fire of an OS (when it came to stability); which makes it so much worse that Mac OS 8-9 were so bad.
98's multitasking and memory 'protection' were a joke. On the same mid-to-high-end machine of the era, 2k and XP were miles ahead of W98 under mid-to-high load.
Maybe not on a Pentium, but once you hit 192MB of RAM and some 500 MHz P3/AMD k7, NT based OSes were tons better.
You only realized that upon opening a dozen IE windows: W98, even SE, would sweat; 2k would fly.
On single tasks, such as near-realtime multimedia ones (emulators/games or video players with single-threaded decoders), W98 would be better. On multiprocessing/threading, W98 crawled against W2K, even on P4s with 256MB of RAM.
System 6 had menu blinks, zoom animations (with rect XORs no less), and button blinks when you used keyboard completion. Mac was the original "wasteful animation" OS.
That xor effect was under FVWM too for moving and resizing windows and doing an xor wireframe was MUCH faster than a full repaint.
If you had no X11 acceleration (xvesa for instance), that mode was magnitudes faster than watching your whole browser window repaint on a resize lasting more than 3 seconds on a Pentium.
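The reason XOR wireframes were so cheap is the self-inverse property of XOR: drawing the same pattern twice restores the original pixels, so no backing store and no full repaint are needed. A toy sketch with a 1-D byte "framebuffer" (a hypothetical helper, not actual QuickDraw or X11 code):

```python
def xor_blit(framebuffer: bytearray, pattern: bytes, offset: int) -> None:
    """XOR `pattern` into the framebuffer at `offset` (a toy 1-D 'screen')."""
    for i, b in enumerate(pattern):
        framebuffer[offset + i] ^= b

screen = bytearray(range(16))              # pretend screen contents
original = bytes(screen)
outline = bytes([0xFF, 0x81, 0x81, 0xFF])  # a tiny wireframe "rect"

xor_blit(screen, outline, 4)               # draw the drag/resize outline
assert bytes(screen) != original           # something visibly changed
xor_blit(screen, outline, 4)               # drawing it again erases it
assert bytes(screen) == original           # pixels restored, no saved copy needed
```

That is why dragging a dotted XOR rect beat compositing the whole window on every mouse move.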
HyperCard is one of my all time favourite memories of Mac OS.
I like System 6: the most complete version of the “real” classic Mac OS before System 7 started to be more “modern.” Dead simple, not a lot of new abstractions and metaphors layered on.
I kind of wish there was a version of System 6 without MultiFinder. Classic Mac OS clearly wasn’t built with multi-tasking in mind.
You could turn off Multifinder in System 6, no problem. It wasn't until System 7 that it was fully baked-in.
You might enjoy decker:
https://internet-janitor.itch.io/decker
FYI, Trello (or one of the many clones of it) can be used for similar purposes.
I enjoyed how quick it was on my G4 iMac (Mac OS X 10.1/10.2 was a total dog) but it was never stable enough for my liking. Forced to choose between fast and unstable (OS9) or slow and steady (OS X), I chose to install Yellow Dog Linux instead (reject the premise).
> In my case, first I tried using the latest Python 3.13.9 both from Windows 7 (bad idea due to resource fork loss) and macOS 10.14.6 Mojave, but neither worked: it seems like that version of Python was just too new. I then retried with Python 3.8.10 instead (which I chose thinking it might be more period-appropriate for the script's age) on Mojave, which worked flawlessly.
Ah, classic Python. Removing features [0] and breaking perfectly working software just because the feature is old, ugly, and not widely used.
[0] https://github.com/elliotnunn/tbxi/issues/1
Max frustrating. If I were writing tbxi again it would be in Go.
Why not C89? Try to make it as portable as possible. The software is intended for preservation of old computers and their software. Would make sense for the software to be as portable as possible.
Who knows, maybe someone would want to run it on vintage Mac hardware?
It already exists. But, if anything, Free Pascal with Lazarus for classic Mac PPC would be ideal. Get that MacSSL port available under FPC and now you can compete with the rest.
Misread as “Mac mini M4” and was going to be _very_ impressed.
Honestly this is still pretty insane.
StarMax series (and the 4400) seemed to be about as close to CHRP as we got. My off-brand StarMax clone (PowerCity) had a PS/2 and an ISA port. Ran BeOS well, and had a quirk that I could hear a tight loop on the speaker.
AFAIK most StarMax systems that were released (a prototype exists of a CHRP StarMax model) are based on the Tanzania / LPX-40 design, which is mostly a traditional PCI PowerMac[1], albeit with oddities like support for PC style floppy drives. PS/2 is handled by the CudaLite microcontroller which presents it to the OS as ADB devices for example. I've not heard of a version with ISA slots, although I assume you could just have a PCI to ISA bridge chip, even if MacOS presumably wouldn't do anything with it.
[1] https://cdn.preterhuman.net/texts/computing/apple_hardware_d...
Right, I think those were the closest we got to the CHRP standard, as they moved the platform toward PC-style floppies, PS/2, ATX PSU and even more generic "platform" stuff than most clones. I'm fairly sure I had an ISA slot, I do remember trying to get a bargain bin NE2K card working in mine under linux (it didn't work). Definitely did nothing under OS 8/9.
The powercity models were interesting, because they came out after Apple revoked Motorola's clone license. A German company, ComJet, bought up the boards and sold unlicensed clones cheap. Case was slightly different, but otherwise they corresponded to StarMax models (fairly certain they were identical but may have been last revision boards).
Kinda sorta. The systems that the "MacOS on CHRP" thing ran on had a very strange-looking device tree, with some bizarre combination of PC and Mac peripherals.
Refer to the "Macintosh Technology in the Common Hardware Reference Platform" book for more information, if you're curious about the Mac IO pieces.

The Motorola Yellowknife board seems remarkably similar to this system, as well as the IBM Long Trail system (albeit with Long Trail using a VLSI Golden Gate versus an MPC106 memory controller). Both of them use W83C553 southbridges and PC87307 Super I/O controllers.
The architecture is kind of weird, but the schematics on NXP's website can probably elucidate a bit more on the system's design.
A fun "do-it-yourself" question for people who've always wanted to learn about the baroque architecture of the PowerPC Mac and the classic Mac OS: where is hardware support for specific models implemented?
In concentrically encrusted layers
I have an iMac G4 1.25 GHz. Originally it was a 1 GHz, but I swapped out the motherboard for a later model. For a while I've been wondering if I would have been better off with an earlier motherboard capable of booting OS 9 natively. Compared with using OS X's Classic mode, this would omit the overhead of running a whole other OS and leave me with more resources to run OS 9 apps and games. I don't get a whole lot of use out of the earlier OS X software that I have on there...
Maybe in the future I won't have to make that choice! I'd much rather dual boot OS 9 off a different partition, but that hasn't been supported on the 1-1.25GHz models (Thanks Steve...) and no one has gotten it working properly. Maybe now it will be possible! A man can dream...
9 has been possible on that board for years now. No internal speaker but the headphone jack works.
This is really cool, the kind of content great to see here.
That's impressive, but early Mac OS was pretty awful UX-wise; I think the UI thread was everything.
I remember clicking and waiting.
I remember that yes, expensive operations could take a while, but the interface was much faster than my M1 Max Studio for the sole reason that you actually do not have to wait for animations.
And not just because animations were sparse: they also never blocked input. If you could see where a new element would appear, you could click there DURING the animation and start e.g. typing, and no input would be lost. That meant apps you used every day and became accustomed to would just zip past at light speed, because there was no do-wait, do-wait pipeline.
The animations were there, but they were frame-based, with the number of frames carefully chosen to show the UI state changes that were relevant. For example, when you would open a folder, there would be an animation showing a window rect animating from the folder icon into the window shape, but it would be very subtle - I remember it being 1 or 2 intermediate frames at most. It was enough to show how you get from "there" to "here" but not dizzyingly egregious the way it became in Aqua.
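That kind of frame-based zoom is just rect interpolation. A hypothetical sketch (not the actual QuickDraw zoom code) computing the one or two intermediate frames between the folder icon's rect and the window's final rect:

```python
def zoom_rects(src, dst, frames=2):
    """Interpolate from rect `src` to rect `dst` (left, top, right, bottom),
    returning `frames` intermediate rects, like a frame-based zoom."""
    return [
        tuple(round(s + (d - s) * (i + 1) / (frames + 1)) for s, d in zip(src, dst))
        for i in range(frames)
    ]

# Folder icon rect -> opened window rect, with two in-between frames:
print(zoom_rects((100, 100, 132, 132), (40, 60, 440, 360)))
# [(80, 87, 235, 208), (60, 73, 337, 284)]
```

Drawing each intermediate rect as a quick outline and erasing it is a couple of blits per frame, which is why this stayed cheap even on 1980s hardware.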
Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie or other former NeXT seniors post-acquisition) noticed that with dynamic loading, hard drive speeds, and ubiquitous ObjC dynamic dispatch, OS X would just be extremely, extremely slow. So they probably conjured a scheme to show people fancy animations and woo everyone with visual effects to conceal that a bit. Looney-town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.
There were also quite a few tricks used all the way from the dithering/blitting optimizations on the early Macs. For example, if you can blit a dotted rect for a window being dragged instead of buffering the entire window, everything underneath, the shadow mask - and then doing the shadow compositing and the window compositing on every redraw - you can save a ton of cycles.
You could very well have do-wait-do-wait loops when custom text compositing or layout was involved and not thoroughly optimized - like in early versions of InDesign, for instance - but it was the exception rather than the rule.
> Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie or other former NeXT seniors post-acquisition) noticed that with dynamic loading, hard drive speeds, and ubiquitous ObjC dynamic dispatch, OS X would just be extremely, extremely slow. So they probably conjured a scheme to show people fancy animations and woo everyone with visual effects to conceal that a bit. Looney-town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.
Done exactly this myself to conceal ugly inconsistent lags - I don’t think it is that uncommon an idea.
My most durable memory is all the reboots due to programs crashing. Didn't help that a null pointer deref required a system reboot - or that teenage me was behind the keyboard on that front.
More the fault of RAM measured in MB and HDDs being quite slow, to be honest.
> I think the UI thread was everything.
How would you have done it?
Preemption is a very nice OS feature it turns out (particularly once multi-core rolled around). Still, I recall os 8 and 9 being generally snappier than windows 98 (and a lot snappier than early builds of OSX)
How does preemption work on a processor that barely has interrupts and has no way to recover state after a page fault, in an OS that has to fit into a couple dozen kilobytes of ROM?
The Amiga had preemptive multithreading with multiple task priorities on the original MC68000. Preemption is distinct from memory protection or paging.
There were plenty of preemptive multitasking systems for the original 68000, and regardless page fault recovery was fixed from the 010 onwards.
And certainly was very not a problem on PowerPC which TFA is about.
Also, not sure how you can say the 68000 "barely has interrupts"; I don't even know what you're on about.
MacOS was broken because Jobs forced it to be that way after he was kicked off the Lisa team. Which had a preemptive multitasking operating system on the 68000.
I’ve been waiting for this post.
I run OS 9 on my lamp iMac G4 but now I want to try 7.6.1!
yes, multiple Macs within arms reach right now!
++ BBEdit
One of my early Macs was a Performa 638CD with no dedicated FPU. I had upgraded to a Performa 6400 (which felt like an absolute dog despite its size) but finally had an opportunity to move to the PowerComputing PowerTower Pro 225. What a beast! I hate to say it, but it was probably my favorite Mac I'd ever owned before the first iMac.
The Megahertz wars in the 1990s made it really difficult to understand relative performance across even the same ISA like this, and I think computers with the 603 CPU were a bit of a wrench in people's perception of the Mac.
The 180 or 200MHz 603e with 16k L1 cache in that Performa 6400 wasn't slow by any stretch, but it probably didn't have L2 cache. Coupled with the gradual transition to PPC native code of the OS and apps, these machines were often a little mismatched to expectations and realities of the code.
Meanwhile that PowerTower had a 604e with 32/32k L1 and 1MB L2 cache. That was a fast flier with a superscalar and out of order pipeline more comparable to the Pentium Pro and PII.
Oh believe me. I owned it. It felt slow even at the time.
Yup. Recall the far better cycle efficiency of the 100 MHz hyperSPARC.
Consumers didn't grok cycle efficiency, pipeline depth, or branch prediction miss pipeline stall latency.
I have a PowerCenter Pro 210 in my basement right now! It's not quite as nice as the newer architecture in the PowerTower Pro machines, but it runs MacOS 7.6.1 wonderfully. It is more than enough for classic Mac games of that era - and a joy to use.
The later PowerCenter Pros could run with a 60 MHz FSB, whereas the PowerTower Pros were usually 45-50 MHz FSB. There are a variety of tasks where my PowerCenter Pro 240 outruns my PowerTower Pro 250 for precisely that reason.
As a European, Classic Macs (and current ones) were just for arts/writing people. If you knew what CMYK was in order to print a newspaper, you were a Mac user.
I emulated Mac OS 7 back in the XP days, and I was impressed that you could get far faster speeds emulating the m68k (and partially the PPC) than x86-on-x86 emulation achieved without any hardware virtualization (Intel VT) or kernel modules trapping x86 instructions to run them at native speed. I mean, PPC and m68k chips were much easier to emulate than x86 itself.
On the software side, Classic Mac users can just resort to IRC and Gopher clients and visit the public https://bitlbee.org IRC servers to connect 'modern' accounts and be proxied through a Mac IRC client. And for Gopher, you have gopher://hngopher.com, gopher://magical.fish and the like. Sadly you don't have an easy TLS library like Amiga users do (AmiSSL), with which even the modern web can work (and IRC over TLS, Gemini...).
Although... if Amiga m68k emulators run fast with the Rosetta-like tech for PPC... you could just fire up Workbench and then AmiSSL. Crude, but it would work. If not, here in the Apple subdir you can maybe get some TLS-enabled browsers:
gopher://bitreich.org/1/lawn
and
gopher://happymacs.ddns.net/1Vintage-Mac-Software-Archive
MacSSL:
https://github.com/demoniccode12/MacSSL
Usenet will work fine without any TLS, and there's tons of content out there.
And scientists.
For some reason european science was full of old school Mac users.
It was because of QuarkXPress and Photoshop. In the same way WordPerfect and Lotus 1-2-3 were dominant for business computers.
Wish someone would try to create native classic MacOS on x86 hardware.
There are so many Unix- or Linux-ABI-compatible kernels, like the recent Moss written in Rust.
Ardi Executor. There's a recent fork on GitHub. You can run m68k binaries seamlessly. You don't need proprietary MacOS parts, just the software.
But if you are a software preservationist, having a libre option to run legacy media is always good for historical reasons. I am a daily libre software user, but I emulate ancient machines with proprietary stuff just out of curiosity. As it's not my personal computing device, I find it fine; it's just a historical toy. And, well, if you want to create libre engines for old Mac games (ScummVM, SDL ports...), you certainly need to at least emulate the old OSes and run the proprietary game in order to compare the output for correctness.
Also, a "Mac" for x86 already exists: Rhapsody DR2, which could run Classic Mac software and NeXT software too. It was like a blend of the two. OS X is like NeXTSTEP concept 2.0, with few traces of Classic Mac. QEMU will run it fine.
https://lunduke.substack.com/p/hands-on-with-1998s-rhapsody-...
Rhapsody DR2 is not a solution for classic Mac OS on x86. Lunduke writes:
"Unfortunately [the Blue Box] was only available on PowerPC versions of Rhapsody"
Another option is Advanced Mac Substitute. It doesn't run everything, but what it does run it runs really well. One of my goals is that you can use a 68K Mac application (e.g. MacPaint) as part of your personal computing workflow, if you wish.
https://www.v68k.org/ams/
Adding Executor does that for free as in freedom.
Edit, ah, both are similar.
It would be great if somebody tried to create an opensource version of Rhapsody DR2 that ran on X86 baremetal.
Would not even need to be binary compatible. Source compatible API would be enough.
Rhapsody DR2 is more like Classic Mac than any current MacOS.
A source-compatible API is what GNUstep has provided since the '90s.
At least the NeXTStep part; not the Mac GUI (Carbon?) one.
I will have to see if this is yet able to run Macromedia Freehand/MX --- if it is, I no longer need to have a Windows machine for that....
Now, if I can just get a nice portable with:
- largish OLED
- current gen Wacom EMR digitizer support
- decent battery life
running Linux, I can get off the Windows update treadmill....