
The lost cause of the Lisp machines

As someone who used Franz LISP on Sun workstations while someone else nearby used a Symbolics 3600 refrigerator-sized machine, I was never all that impressed with the LISP machine. The performance wasn't all that great. Initially garbage collection took 45 minutes, as it tried to garbage-collect paged-out code. Eventually that was fixed.

The hardware was not very good. Too much wire wrap and slow, arrogant maintenance.

I once had a discussion with the developers of Franz LISP. The way it worked was that it compiled LISP source files and produced .obj files. But instead of linking them into an executable, you had to load them into a run-time environment. So I asked, "Could you put the run-time environment in another .obj file, so you just link the entire program and get a standalone executable?" "Why would you want to do that?" "So we could ship a product." This was an alien concept to them.

So was managing LISP files with source control, like everything else. LISP gurus were supposed to hack.

And, in the end, 1980s "AI" technology didn't do enough to justify that hardware.

15 hours agoAnimats

I worked on Franz Lisp at UCB. A couple of points:

The ".obj" file was a binary file that contained machine instructions and data. It was "fast loaded" (the file format was called "fasl") and it worked well.

Building an application wasn't an issue because we had "dumplisp", which took the image in memory and wrote it to disk. The resulting image could be executed to start a new instance of the program in the state it was in when dumplisp was run. Emacs called this "unexec" and it did approximately the same thing.
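
(For readers who haven't met this idea: the closest modern Common Lisp analogue is something like SBCL's save-lisp-and-die, which likewise writes the running image out as an executable. A minimal sketch, assuming SBCL rather than the original Franz dumplisp:)

  ;; Build everything in the running image first, then dump it.
  (defun main ()
    (format t "Hello from a dumped image~%"))

  ;; Writes ./myapp to disk; running that file calls MAIN instead of a REPL.
  (sb-ext:save-lisp-and-die "myapp"
                            :toplevel #'main
                            :executable t)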

Maybe your discussions with my group predated me and predated some of the above features, I don't know. I was in Fateman's group from '81 to '84.

I assume your source control comments were about the Lisp Machine and not Franz Lisp. RCS and SCCS were a thing in the early 80's, but they didn't really gain steam until after I arrived at UCB. I was the one (I think... it was a long time ago) that put Franz Lisp under RCS control.

6 hours agoe40

I was doing this in 1980-1983. Here's some code.[1] It's been partly converted to Common LISP, but I was unable to get some of the macros to work.

This is the original Oppen-Nelson simplifier, the first SAT solver. It was modified by them under contract for the Pascal-F Verifier, a very early program verifier.

We kept all the code under SCCS and built with make, because the LISP part was only part of the whole system.

[1] https://github.com/John-Nagle/pasv/tree/master/src/CPC4

3 hours agoAnimats

Do you remember who you discussed it with? It had to be either Sklower or Foderaro, unless you talked with Fateman.

Were the macros originally from another dialect of Lisp?

an hour agoe40

Franz LISP had, I think, MacLISP macros, while Common LISP has a different system.

I talked to Fateman at some point. Too long ago to remember about what.

37 minutes agoAnimats

Franz Lisp could create standalone executables from very early in the project; the compiler itself is one.

8 hours agorjsw

Correct. To continue the puns, it was called Liszt.

6 hours agoe40

Lisp Machines had versioning file systems IIRC. Kinda like on VMS. Was SCCS really that far ahead?

14 hours agovarjag

Yes, because on VMS (and presumably Genera) 20 versions of a file took 20× as much disk space as one version, so you wouldn't keep unlimited versions. In SCCS the lines that didn't change are only stored once, so 20 versions might be 2× or 1.1× or 1.01× the original file size.

3 hours agokragen

Ummmm... yes. The problem with versioning file systems is that they only kept the last few versions; for files under active development, it was usually difficult to recover state older than a week or two.

(SCCS handled collaborative development and merges a lot worse than anything current, but... versioning file systems were worse there, too; one war story I heard involved an overenthusiastic developer "revising" someone else's file with enough new versions that by the time the original author came back to it, their last version of the code was unrecoverable.)

5 hours agorst

> "So we could ship a product." This was an alien concept to them.

This mentality seems to have carried over to (most) modern FP stacks

14 hours agoraverbashing

Nah, it carried over to scripting languages.

Most of them still require a very specific, very special, very fragile environment to run, plus multiple tools and carefully run steps, just to do the same thing you can do with a compiled executable linked against the OS.

They weren't made for having libraries, or being packaged to run in multiple machines, or being distributed to customers to run in their own computers. Perhaps JS was the exception but only to the last part.

Sure, it mostly works today, but a lot of people put in a lot of effort so we can keep shoving square pegs into round holes.

14 hours agowhstl

TCL has good solutions for this, but that hasn't made it a success.

Where I see Python used is in places where you do not need it packaged as executables:

1. Linux - where the package manager solves the problem. I use multiple GUI apps written in python

2. On servers - e.g. Django web apps, where the environment is set up per application

3. Code written for specific environments - even for specific hardware

4. One-off installs - again, you have a specified target environment.

In none of the above cases do I find the environment to be fragile. On the other hand, if you are trying to distribute a Windows app to a large number of users I would expect it to be problematic.

9 hours agograemep

You don't find the environment to be fragile because millions of human hours have been spent fixing those problems or working around them.

Which is significantly more than was needed for different technologies to achieve similar results.

8 hours agowhstl

But people start by hacking away with one-off installs written for their specific environments, get it to the point where it’s useful to others, and then expect others to install all the tools and dependencies needed to install it.

Quick start guide: works on my machine.

8 hours agomr_toad

Don't get me started. I tried to use a very simple Python program the other day, to talk to a bluetooth module in a device I'm building. In the end I gave up and wrote the whole thing in another language, but not before fighting the Python package system for a couple of hours, thinking the solution was right around the corner if only I could get rid of one more little conflict. Python is funny that way: it infantilized programming but then required you to become an expert at resolving package manager conflicts.

For a while Conda seemed to have cracked this, but there too I now get unresolvable conflicts. It really boggles the mind how you could get this so incredibly wrong and still have the kind of adoption that Python has.

13 hours agojacquesm

You and me both. These days I don't even try, just docker pull python and docker run -v .:/app python /app/foo.py

8 hours agofoobarian

I thought that was just me - I really rather liked Python the language but was completely confused at how the package system seemed to work.... Mind you this was 12 years ago or so but it was enough to put me off using it ever again.

8 hours agoarethuza

Yeah, it's still shitty. So often I have to jump through weird hoops just to run the tests for a project with commits made last week. I can't even ask Claude to explain something about a given repo; it naively tries to run the tests, only to hit a wall. There's the number of different linters and checkers we have to run on CI just to make sure things are in a good state, yet every time I clone something and try to get it running, there's almost always some kind of bullcrap. Why the fuck we even keep trying to write things in Python, I just don't get it.

6 hours agoiLemming

Or, you know, it might just be that you're not very good at computers.

Instead of jamming in thing after thing after thing blindly hoping it's going to work, try reading the error messages and making sense of why it's doing what it's doing.

This is such Gen Z behaviour - it doesn't work first time so throw a strop and fling stuff.

13 hours agoErroneousBosh

> you're not very good at computers.

Yup, I guess I am not. Been coding for over 20 years, went through over a dozen different PLs and only Python - the best fucking friend who doesn't sugar coat it, tells you without stuttering - "you suck at this, buddy"

     # PEP 9001: Constructive Computing Feedback

     ## Abstract

     This PEP proposes a standardized error message for situations where Python interpreters shall inform the user of insufficient computational competence.

     ## Specification

     When a user attempts to execute Python code that supposedly should work but it doesn't, the interpreter shall emit:

         You suck at computers
6 hours agoiLemming

> Yup, I guess I am not. Been coding for over 20 years

Wow, nearly as long ago as my first kernel module.

Then you ought to be able to figure out how to install one thing then, eh?

Just out of interest, what was it that was causing such a hassle? I expect it's not really relevant now, but it would be interesting to see how hard it would be to fix.

2 hours agoErroneousBosh

It's not hard. It's just annoying to deal with this shit on a constant basis. Like just the other day, the tests wouldn't pass locally while they were passing on CI. I was scratching my head for some time; turns out there was a breaking change in csv.QUOTE_STRINGS or something between Python 3.12 and 3.13. How the fuck did they manage to fix/improve fucking csv logic by introducing a breaking change?

2 hours agoiLemming

This is such a hilarious comment.

Thank you for making my day.

13 hours agojacquesm

Hey Gen Z, as long as I have you on the line, could you please explain 67 to me?

I've heard of "68 and I'll owe you one", so is 67 about owing you two?

12 hours agoDonHopkins

I'm having a hard time coping with my social media addiction while doing some fairly hardcore development on an STM32 based platform so sorry :)

Incidentally, when will you (multiple) come and visit?

It's been too long.

12 hours agojacquesm

I owe you at least one or two! Maybe we can test your drones out on that Russian guy with the GoFundMe campaign, then I'll owe you three! ;)

12 hours agoDonHopkins

thats a gen alpha thing sorry unc

7 hours agoux266478

Oh yeah? Well the jerk store called, and they’re running out of you!

11 hours agos0sa

You are correct unfortunately

13 hours agoraverbashing

Yeah, anytime I see a useful tool, and then find out it's written in Python, I want to kms — ofc, unless it happens to work with UV, but they don't always

13 hours agologicprog

Not the ones I've used. Haskell compiles to executables, F# compiles to the same bytecode that C# does and can be shipped the same way (including compiling to executables if you need to deploy to environments where you don't expect the .NET runtime to be already set up), Clojure compiles to .jar files and deploys just like other Java code, and so on.

I'll grant that there are plenty of languages that seemed designed for research and playing around with cool concepts rather than for shipping code, but the FP languages that I see getting the most buzz are all ones that can ship working code to users, so the end users can just run a standard .exe without needing to know how to set up a runtime.

13 hours agormunn

True, but some still want me to understand what a monofunctor is, or something else that sounds like a disease, just to do things like print to screen or get a random number

I feel that is the biggest barrier to their adoption nowadays (and also silly things like requiring ;; at the end of the line)

Pure functions are a good theoretical exercise but they can't exist in practice.

13 hours agoraverbashing

> Pure functions are a good theoretical exercise but they can't exist in practice.

Well, they can. But not all the way up to the top level of your program. The longer you can hold off on your functions having side effects, the more predictable and stable your codebase will be, with fewer bugs and less chance of runtime issues as an added benefit.
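
(A minimal Common Lisp sketch of that shape, with made-up names: the pure core can be tested and reasoned about on its own, while the only side effect, printing, lives in the thin top-level wrapper:)

  ;; Pure core: no I/O, no mutation; the same input always gives the same result.
  (defun order-total (prices)
    (reduce #'+ prices :initial-value 0))

  ;; Impure shell: all the side effects live at the top level.
  (defun main ()
    (format t "Total: ~a~%" (order-total '(10 25 7))))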

13 hours agojacquesm

Yes, but they're "Hello world!" hostile, so traditional programming language pedagogy doesn't work well.

Q: How many Prolog programmers does it take to change a lightbulb?

A: Yes.

12 hours agoDonHopkins

I imagine LLMs have already thrown traditional programming language pedagogy out the window.

11 hours agomchaver

Yes I agree, pure functions are good building blocks (for the most part), but I don't think the current abstractions and ways of bridging the FP and Procedural world are good enough

Also have you managed to eliminate the side effect of your IP register changing when your program is running? ;)

11 hours agoraverbashing

> but I don't think the current abstractions and ways of bridging the FP and Procedural world are good enough

I find that both Python and Javascript allow you to use functional code when appropriate, without forcing you to use it when it isn’t.

8 hours agomr_toad

I love FP but at the end of the day registers are global variables. Half of modern compiler theory consists of workarounds for this sad truth.

8 hours agodreamcompiler

A functional program is a self-contained expression -- an isolated system following its own rules. The foremost example we have of such a thing is the universe itself, but the universe is not a good example in this discussion, because we have plenty of reasons to think that the universe contains pure (not pseudo-) randomness. Beyond that, isolation, when it matters, is not an easily proven proposition, and is a deplorable fantasy when assumed in many of the other science and engineering disciplines.

2 hours agolucas_membrane

In most FP languages it is simple to print to screen and get a random number.

Pure functions often exist in practice and are useful for preventing many bugs. Sure, they may not be suitable for some situations but they can prevent a lot of foot guns.

Here's a Haskell example with all of the above:

  import System.Random (randomRIO)

  main :: IO ()
  main = do
    num <- randomRIO (1, 100)
    print $ pureFunction num

  pureFunction :: Int -> Int
  pureFunction x = x * x + 2 * x + 1
13 hours agororyc89

There's 'FP stacks' and there's "FP stacks", and they aren't all that similar. Volumes of money/data get handled by FP stacks - Jane Street famously uses OCaml; Cisco runs their entire cybersec backend on Clojure; Nubank covers all of Latin America and is about to spread into the US, running on Clojure and Elixir; Apple has their payment system, Walmart their billing, Netflix their analytics on Clojure; Funding Circle in Europe and Splash in the US; etc. etc. There are tons of actual working products built on FP stacks. Just because your object-oriented brain can't pattern match the reality, it doesn't mean it's not happening.

7 hours agoiLemming

Wouldn't the whole system be the product then? There's tradeoffs, but that's just integration.

13 hours agodbtc

> The hardware was not very good.

The hardware was never very interesting to me. It was the "lisp all the way down" that I found interesting, and the tight integration with editing-as-you-use. There's nothing preventing that from working on modern risc hardware (or intel, though please shoot me if I'm ever forced back onto it).

7 hours agoMangoToupe

Time to dig up a classic story about Tom Knight, who designed the first prototype of the Lisp Machine at MIT in the mid-70's. It's in the form of a classic Zen koan. This copy comes from https://jargondb.org/some_ai_koans but I've seen plenty of variations floating around.

A novice was trying to fix a broken Lisp machine by turning the power off and on.

Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”

Knight turned the machine off and on.

The machine worked.

16 hours agormunn

That's one of the funniest and most enlightening classic AI Koans, originally from the ITS file "AI:HUMOR;AI KOANS".

Here's another Moon story from the humor directory:

https://github.com/PDP-10/its/blob/master/doc/humor/moon's.g...

Moon's I.T.S. CRASH PROCEDURE document from his home directory, which goes into much more detail than just turning it off and on:

https://github.com/PDP-10/its/blob/master/doc/moon/klproc.11

And some cool Emacs lore:

https://github.com/PDP-10/its/blob/master/doc/eak/emacs.lore

Reposting this from the 2014 HN discussion of "Ergonomics of the Symbolics Lisp Machine":

https://news.ycombinator.com/item?id=7878679

http://lispm.de/symbolics-lisp-machine-ergonomics

https://news.ycombinator.com/item?id=7879364

eudox on June 11, 2014

Related: A huge collections of images showing Symbolics UI and the software written for it:

http://lispm.de/symbolics-ui-examples/symbolics-ui-examples

agumonkey on June 11, 2014

Nice, but I wouldn't confuse static images with the underlying semantic graph of live objects that's not visible in pictures.

DonHopkins on June 14, 2014

Precisely! When Lisp Machine programmers look at a screen dump, they see a lot more going on behind the scenes than meets the eye.

I'll attempt to explain the deep implications of what the article said about "Everything on the screen is an object, mouse-sensitive and reusable":

There's a legendary story about Gyro hacking away on a Lisp Machine, when he accidentally trashed the function cell of an important primitive like AREF (or something like that -- I can't remember the details -- do you, Scott? Or does Devon just make this stuff up? ;), and that totally crashed the operating system.

It dumped him into a "cold load stream" where he could poke around at the memory image, so he clambered around the display list, a graph of live objects (currently in suspended animation) behind the windows on the screen, and found an instance where the original value of the function pointer had been printed out in hex (which of course was a numeric object that let you click up a menu to change its presentation, etc).

He grabbed the value of the function pointer out of that numeric object, poked it back into the function cell where it belonged, pressed the "Please proceed, Governor" button, and was immediately back up and running where he left off before the crash, like nothing had ever happened!
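
(For anyone unfamiliar with the jargon: the "function cell" is just the slot where a symbol's function lives, and in Common Lisp you can read and write it directly, which is essentially the repair being described. A tiny hypothetical sketch, using a made-up MY-AREF rather than the real AREF, which implementations usually lock against redefinition:)

  ;; Save the current function object out of the symbol's function cell.
  (defun my-aref (vector index) (aref vector index))
  (defvar *saved-fn* (fdefinition 'my-aref))

  ;; "Trash" the function cell, breaking every caller of MY-AREF.
  (setf (fdefinition 'my-aref)
        (lambda (&rest args) (declare (ignore args)) (error "clobbered")))

  ;; Poke the saved value back in; callers work again as if nothing happened.
  (setf (fdefinition 'my-aref) *saved-fn*)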

Here's another example of someone pulling themselves back up by their bootstraps without actually cold rebooting, thanks to the real time help of the networked Lisp Machine user community:

ftp://ftp.ai.sri.com/pub/mailing-lists/slug/900531/msg00339.html


12 hours agoDonHopkins

Everybody knows, you have to wait at least 5 tau.

16 hours agof1shy

This puts the koan in a completely different light. Thank you.

4 hours agokragen

>> ‘It was the development environment’

> No, it wasn’t.

I kind of think it was. The best argument, I think, is embodied in Kent Pitman's comments in this usenet thread [1], where he argues that for the Lisp Machine romantics (at least the subset that includes him) what they are really referring to is the total integration of the software, and he gives some pretty good examples of the benefits it brings. He freely admits there's no reason why the experience could not be reproduced on other systems; the problem is that it hasn't been.

I found his two specific examples particularly interesting. Search for

    * Tags Multiple Query Replace From Buffer
and

    * Source Compare
which are how he introduced them. He also describes "One of the most common ways to get a foothold in Genera for debugging", which I find pretty appealing, and which is still not available in any modern system.

[1] https://groups.google.com/g/comp.lang.lisp/c/XpvUwF2xKbk/m/X...

8 hours agoEdwardCoffin

A few years ago I was learning Lisp and I mentioned it to my uncle, who had been an inspiration to me getting into programming. It turns out he wrote a TCP/IP stack for the Symbolics Lisp machine when he worked at Xerox. They had some sort of government contract that had to be done in Lisp on the Symbolics, and deep in a very long contract it said that the interface had to be TCP/IP, which the Symbolics didn't support out of the box. He said to me his boss came to him one day and the conversation went something like this:

Boss: Hey there, you like learning new things right?

Him (sensing a trap): Errr, yes.

Boss: But you don’t program in lisp do you?

Him (relieved, thinking he’s getting out of something): No.

Boss: Good thing they sent these (gesturing at a literal bookshelf full of manuals that came with the symbolics).

So he had to write a TCP stack. He said it was really cool because it had time travel debugging: the ability to hit a breakpoint, walk the execution backwards, change variables, resume, etc. This was in the 1980s. Way ahead of its time.

2 hours agoseanhunter

I liked the article, but I found the random remark about RISC vs CISC to be very similar to what the author is complaining about. The difference between the Apple M series and AMD's Zen series is NOT a RISC vs CISC issue. In fact, many would argue it's fair to say that ARM is not RISC and x86-64 is not CISC. These terms were used to refer to machines vastly different from what we have today, and the RISC vs CISC debate, like the LISP machine debate, really only lasted about 5 years. The fact is, we are all using out-of-order superscalar hardware where the decoders of the CPU are not even close to being the main thing consuming power and area on these chips. Under the hood they are all doing pretty much the same thing. But because it has a name and a marketable "war", and because people can easily understand the difference between fixed-width and variable-width encodings, people overestimate the significance of the one part they understand compared to the internal engineering choices and process node choices that actually matter, which people don't know about or understand. Unfortunately a lot of people hear the RISC vs CISC bedtime story and think there's no microcode on their M series chips.

You can go read about the real differences on sites like Chips and Cheese, but those aren't pop-sciencey and fun! It's mostly boring engineering details like the size of reorder buffers and the TSMC process node, and it takes more than 5 minutes to learn. You can't just pick it up one day like a children's story with a clear conclusion and moral. Just stop. If I can acquire all of your CPU microarchitecture knowledge from a Linus Tech Tips video, you shouldn't have an opinion on it.

If you look at the finished product and you prefer the M series, that's great. But that doesn't mean you understand why it's different from the Zen series.

12 hours agoValidark

There seem to be very real differences between x86 and ARM not only in the designs they make easy, but also in the difficulty of making higher-performance designs.

It's telling that ARM, Apple, and Qualcomm have all shipped designs that are physically smaller, faster, and consume way less power than AMD's and Intel's. Even ARM's medium cores have had higher IPC than same-generation x86 big cores since at least the A78. SiFive's latest RISC-V cores are looking to match or exceed x86 IPC too. x86 is quickly becoming dead last, which shouldn't be possible if the ISA didn't matter at all, given AMD and Intel's budgets (AMD, for example, spends more on R&D than ARM's entire gross revenue).

ISA matters.

x86 is quite constrained by its decoders, with Intel's 6- and 8-wide cores being massive and sucking an unbelievable amount of power, and AMD choosing a hyper-complex 2x4 decoder implementation with a performance bottleneck in serial throughput. Meanwhile, we see 6-wide and wider decoders in ARM designs without any of these contortions.

32-bit ARM is a lot simpler than x86, but ARM still claimed a massive 75% reduction in decoder size, while increasing throughput, when switching to 64-bit-only in the A715. Things like the uop cache aren't free. They take die area and power. Even worse, somebody has to spend a bunch of time designing and verifying these workarounds, which balloons costs and increases time to market.

Another way the ISA matters is memory models. ARM uses barriers/fences which are only added where needed. x86 uses a much tighter memory model that implies a lot of things the developers and compiler didn't actually need or want, and that impacts performance. The solution (I'm not sure if x86 actually does this) is doing deep analysis of which implicit barriers can be provably ignored and speculating on the rest. Once again though, wiring all these various proofs into the CPU is complicated and error-prone, which slows things down while bloating circuitry, using extra die area/power, and sucking up time/money that could be spent in more meaningful ways.

While the theoretical performance mountain is the same, taking the stairs with ARM or RISC-V is going to be much easier/faster than trying to climb up the cliff faces.

7 hours agohajile

How does that square with the fact that there is no dramatic performance loss for x86 emulation on ARM?

4 hours agolayer8

> In fact, many would argue it's fair to say that ARM is not RISC

It isn't now... ;-)

It's interesting to look at how close old ARM2/ARM3 code was to 6502 machine code. It's not totally unfair to think of the original ARM chip as a 32-bit 6502 with scads of registers.

And, for fairly obvious reasons!

12 hours agoErroneousBosh

But even ARM1 had some concessions to pragmatics, like push/pop many registers (with a pretty clever microcoded implementation!), shifted registers/rotated immediates as operands, and auto-incrementing/decrementing address registers for loads/stores.

Stephen Furber has extended discussion of the trade-offs involved in those decisions in his "VLSI RISC Architecture and Organization" (and also pretty much admits that having PC as a GPR is a bad idea: hardware is noticeably complicated for rather small gains on the software side).

9 hours agoJoker_vD

This document

https://userpages.umbc.edu/%7Evijay/mashey.on.risc.html

explains a lot of "what happened in the 1980s?" particularly why VAX and 68k were abandoned by their manufacturers. The last table shows how processors that had really baroque addressing modes, particularly involving indirection, did not survive. The old 360 architecture was by no means RISC but it had simple addressing modes and that helped it survive.

A Lisp-optimized processor would be likely to have indirection and generally complex ways in which instructions can fail, which gets in the way of efficient pipelined implementations. People like to talk about "separation of specification and implementation", but Common Lisp was designed with one eye on the problem of running it efficiently on the "32-bit" architectures of the 1980s. It did OK on the 68k, which was big then, and also with the various RISC architectures and x86, which is simple enough that it is practical to rewrite the instruction stream into microinstructions that can be easily executed.

6 hours agoPaulHoule

I'm a lisp machine romantic, but only for the software side. The hardware was neat, but nowadays I just want a more stable, graphically capable emacs that extends down through and out across more of userspace.

13 hours agologicprog

> emacs that extends down through and out across more of userspace

Making something like that has turned into a lifetime project for me. Implemented a freestanding lisp on top of Linux's stable system call interface. It's gotten to the point it has delimited continuations.

12 hours agomatheusmoreira

Oh my god, that's so cool? Could I see by any chance? (Edit: found links on your profile, will read more)

10 hours agologicprog

I call it the lone programming language.

https://github.com/lone-lang/lone/

It's a lisp interpreter with zero dependencies targeting Linux exclusively.

I've written about a few of its development milestones:

https://www.matheusmoreira.com/articles/self-contained-lone-...

https://www.matheusmoreira.com/articles/delimited-continuati...

I'm particularly proud of my ELF hack to allow the interpreter to introspect into a lisp code section at runtime without any /proc/self/exe shenanigans. Wish other languages would adopt it.

Top comment and its replies talk about linking the lisp code into a self-contained, easily distributable application:

https://news.ycombinator.com/item?id=45989721

I think I addressed that problem adequately. I can create applications by copying the interpreter and patching in some special ELF segments containing lisp modules. The mold linker even added features to make it easy and optimal.

Since there is no libc nonsense, Linux compatibility depends only on the system calls used. Theoretically, applications could target kernels from the 90s.

My Linux system call philosophy:

https://www.matheusmoreira.com/articles/linux-system-calls

At some point I even tried adding a linux_system_call builtin to GCC itself but unfortunately that effort didn't pan out.

an hour agomatheusmoreira

> I just want a more stable

Emacs is incredibly stable. Most problems happen in custom-made packages. I don't even remember Emacs ever segfaulting for me on Linux. On Mac it can happen, but very rarely. I don't ever remember losing my data in Emacs - even when I deliberately kill the process, it recovers the unsaved changes.

6 hours agoiLemming

Symbolics’ big fumble was thinking their CPU was their special sauce for way too long.

They showed signs that some people there understood that their development environment was it, but it obviously never fully got through to decision-makers: They had CLOE, a 386 PC deployment story in partnership with Gold Hill, but they’d have been far better served by acquiring Gold Hill and porting Genera to the 386 PC architecture.

16 hours agoeschaton

Xerox/Venue tried porting Interlisp (the Lisp machine environment developed at Xerox PARC) to both Unix workstations and commodity PC hardware, but it doesn't seem like that was a commercial success. Venue remained a tiny company providing support to existing Interlisp customers until its head developer died in the late 2000s and they wrapped up operations. The Unix/PC ports seem to have mostly been used as a way to run legacy Interlisp software on newer hardware rather than attracting anyone new to the Lisp machine world. I don't see why Symbolics doing the same thing as Xerox would have produced any different results. The real problem was that investment in expert systems/Lisp dried up as a whole. I don't know whether any of the Lisp vendors could have done anything to combat those market forces.

5 hours agondiddy

The environment lasted a long time as the basis for other Xerox products, such as their office automation system and as a front end for their printing systems. However, it wasn’t so much ported as the virtual machine was. (Just like Symbolics did with OpenGenera on Alpha.)

What I’m suggesting is that they could have done a full port to the hardware; OpenGenera is still an Ivory CPU emulator. In 1986-7 you could get an AT-compatible 80386 system running at 16-25MHz that supported 8-32MB of RAM for 10-20% the price of a Symbolics workstation, and while it might not run Lisp quite as fast as a 3600 series system, it would still be fast enough for both deployment and development—and the next generation would run Lisp at comparable performance.

5 minutes agoeschaton

I don't really understand why Lisp was so intrinsically tied to expert systems and AI. It seems to me that Scheme (and, to an extent, Common Lisp or other Lisps) are pretty good platforms for experimenting with software ideas, and were long before Jupyter notebooks existed.

2 hours agobigfishrunning

To be fair to Symbolics: a lot of companies back then thought their CPU was the secret sauce. Some still do...

13 hours agojacquesm

I think it would have been easier to port the MIT/LMI/TI environment to standard hardware as it was still 32-bit.

5 hours agorjsw

For those unaware, Symbolics eventually "pivoted" to DEC Alpha, a supposedly "open" architecture, which is how Genera became Open Genera, like OpenVMS. (And still, like OpenVMS, heavily proprietary.)

16 hours agoanonnon

Wasn’t the “open” at the time meaning “open system” as a system that is open for external connections (aka networking) and not so much open as in “open source”?

16 hours agof1shy

> Wasn’t the “open” at the time meaning “open system” as a system that is open for external connections (aka networking) and not so much open as in “open source”?

Networking was the initial impetus, but the phrase came to include programming interfaces, which is why POSIX was considered such a big deal. The idea was to promote interoperability and portability, as opposed to manufacturer-specific islands like those from IBM and DEC.

14 hours agoinejge

No, it meant industry standards, instead of proprietary ones, that is why POSIX, Motif, and others are under The Open Group.

13 hours agopjmlp

It was both Alpha being quasi-open itself, like OpenPOWER today and like the earlier PDP minis had been (whereas VAX had been pretty locked down), and OpenVMS getting POSIX compatibility (admittedly probably more the latter than the former, but DEC was big on branding things "open" at the time, partly because they were losing ground):

https://www.digiater.nl/openvms/decus/vmslt05a/vu/alpha_hist...

> Although Alpha was declared an "open architecture" right from the start, there was no consortium to develop it. All R&D actions were handled by DEC itself, and sometimes in cooperation with Mitsubishi. In fact, though the architecture was free de jure, most important hardware designs of it were pretty much closed de facto, and had to be paid-licensed (if possible at all). So, it wasn't that thing helping to promote the architecture. To mention, soon after introduction of EV4, DEC's high management offered to license manufacturing rights to Intel, Motorola, NEC, and Texas Instruments. But all these companies were involved in different projects and were of very little to no interest in EV4, so they refused. Perhaps, the conditions could be also unacceptable, or something else. Mistake #5.

15 hours agoanonnon

Yes, but also: OpenGenera was ported to x86 some time ago.

15 hours agolarsbrinkhoff

Lisp is alive as ever in Emacs and Common Lisp, and Clojure and Racket

15 hours agoGalaxyNova

And Tcl lives on in GDB, sure, just as M4 is forever alive with ubiquitous use of autotools.

9 hours agoJoker_vD

Not quite the same thing. All the software I mentioned above have communities that constantly push to improve the ecosystem.

5 hours agoGalaxyNova

Lisp - historically - did not work well with others. Did not share spaces, did not coexist with other systems particularly well. Or if it did, it would wrap them very carefully in "unsafe" and keep as much to the boundaries as possible.

It's not like it's the only system that suffers this, but "working well with others" is a big key to success in almost every field.

I'm absolutely fascinated by what worked and was possible in that venue, just like I find rust code fascinating. These days lisp is much more workable, as they slowly get over the "must coexist with other software". There are still things that are really hard to put in other computer languages.

8 hours agoteunispeters

> slowly get over the "must coexist with other software"

I dunno, as a Lisper I don't even have to think very hard - virtually any platform available to me, I can write almost anything in Lisp - for JVM and .Net - with Clojure; for Lua with Fennel; for Flutter with ClojureDart; Python - libpython-clj; C/C++ - Jade, CL, Carp and Jank; BEAM - Clojerl and LFE; Shell-scripting - babashka; For targeting js there are multiple options - clojurescript, nbb, squint.

Knowing some Lisp today is as practical as it gets. I really feel like a true polyglot coder - switching between different Lisps, even for drastically dissimilar platforms incurs virtually zero overhead while jumping even between JS and TS is always a headache.

7 hours agoiLemming

These days Lisp works pretty well with C because C has a defined ABI. That was historically not the case for C++, so to call C++ functions you needed to first wrap them in C.

C++ might be easier now; I don't know.
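
(A minimal sketch of what that C interop looks like today, assuming the CFFI library is loaded; the Lisp-side name here is made up:)

  ;; Bind a plain C function through the stable C ABI via CFFI.
  ;; strlen comes from the C library already linked into the image.
  (cffi:defcfun ("strlen" c-strlen) :unsigned-long
    (s :string))

  (c-strlen "hello world")   ; => 11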

8 hours agodreamcompiler

"sort of". C++ is a lot more stable ABI these days, but linking still means looking into name mangling and data types. At least it no longer seems to be changing between compiler patches, as with - say - earlier GCC. (gcc 2 through 4 were not fun for this)

From looking at it a little, it seems Rust has this pretty reliably - probably helped by sharing link environments with LLVM.

(I've only explored this a little from time to time). Mostly my work is all C and a bit of C++.

7 hours agoteunispeters

I do a lot of work in Gambit, which integrates very well with C, C++, and Objective-C. But that's because it transpiles to C source. Gambit does a lot of other stuff these days, including x86 and even JavaScript compilation, but its roots as a scheme-to-C compiler are still in evidence.

20 minutes agobitwize

This just reminds me of the people who whine about Betamax (or CCS) being better.

FWIW: Technology Connections did a teardown of why Betamax wasn't better than VHS: https://www.youtube.com/watch?v=_oJs8-I9WtA&list=PLv0jwu7G_D...

And the whole series if you actually enjoy watching these things: https://www.youtube.com/playlist?list=PLv0jwu7G_DFUrcyMYAkUP...

5 hours agogwbas1c

I liked betamax better, sorry. The tapes were more compact and used less storage space. Can't argue with that. I also liked that you could use betamax with a Sony PCM F1 processor to record digital audio before the advent of the DAT format (digital audio tape). Can't argue with that. But when was the last time I even thought about betamax? Much more front of mind are the vagaries of blu-ray formats; and I rarely think about them either.

5 hours agowaffletower

Are you joking? (Or otherwise trying to prove my point?)

> The tapes were more compact and used less storage space.

That is generally considered the "death knell" of the format. People generally chose VHS because they could record 6-9 hours on a single tape (while on vacation), but the smaller size of the Betamax cassette limited it to shorter recordings.

It also impacted the quality of feature-length movies: they used the fastest tape speed on VHS, but had to use a slower tape speed on Betamax, negating the supposed quality improvement of Betamax.

> I also liked that you could use betamax with a Sony PCM F1 processor to record digital audio before the advent of the DAT format (digital audio tape)

Most people (consumers) never used their VCRs to record and play back digital audio, they used CDs and cassettes.

The PCM F1 was a professional / prosumer device, not a consumer device like a CD player. I assume that people who were using it were going to have a separate VCR for studio use than their living room (VHS), and weren't going to decide between VHS vs Betamax for pairing with their PCM F1.

3 hours agogwbas1c

funny no mention about the texas instruments explorer: https://en.wikipedia.org/wiki/Texas_Instruments_Explorer

i barely got to play with one for a few hours during an "ai" course, so i didn't really figure much of it out but ... oh yeah, it was "cool"! also way-way-way over my budget. i then kept an eye for a while on the atari transputer workstation but no luck, it never really took off.

anyway, i find this article quite out of place. what hordes of romantically spoiled lisp machine nostalgia fanatics harassed this poor guy to the extreme that he had to go on this (pretty pointless) disparaging spree?

6 hours agoznort_

The author has owned Lisp Machines himself, maybe still does.

5 hours agorjsw

To me, it was never about the hardware. It was not even about LISP. It is about "clean design" and what a great computing environment was capable of, and still would be, had its potential not been shredded by the advent of cheap addicting hardware combined with an "operating system" so "simple and elegant" that even today a program simply segfaults, leaving you with nothing (instead of showing at least an inspectable stack trace). So "simple and elegant" that the only data formats end users deal with are "copy & paste text", "files", and "screenshots". An operating system so "pure" that every program lives in its own uninteroperable walled garden, understanding nothing about the environment and data loaded around it. We lost a whole computing world and it might still take ages getting it back.

8 hours agolgrapenthin

I don’t want an Open Genera machine in a portable case with a battery, though. I want Apple’s software to match the quality of their hardware.

5 hours agoemchammer

Fun fact: NeXT's Interface Builder was originally built in Lisp. So Apple software was really good at one point, in part because someone wanted to bring the Lisp machine to the NeXT environment.

15 minutes agobitwize

If you ship debug symbols with your binary, you do get a core dump with an inspectable stacktrace…

8 hours agodieortin

In many ways, GRiSP (https://www.grisp.org/) feels like Lisp machines' spiritual successor.

6 hours agoArubis

A lot of this could be said about specialized machines in general. I remember visiting the local university last century where a guy was demonstrating a US-made Word Processor machine they had bought, and around the same time a local company was developing something similar. And they looked very cool indeed. But in both cases I thought.. "eh, won't that be total overkill now when we can see standard word processing software on standard computers already arriving? Even if a normal PC doesn't look that cool?" And, as predicted (and I most certainly couldn't be the only one predicting that), the US company as well as the local one folded. At least the company I worked for got to hire some good people from there when the inevitable happened.

It's hard to find where to draw the line when it comes to specialized hardware, and the line moves forth and back all the time. From personal experience it went from something like "multiple input boards, but handle the real time Very Fast interrupts on the minicomputer". And spend six months shaving off half a millisecond so that it worked (we're in the eighties here). Next step - shift those boards into a dedicated box, let it handle the interrupts and DMA and all that, and just do the data demuxing on the computer. Next step (and I wasn't involved in that): Do all the demuxing in the box, let the computer sit back and just shove all of that to disk. And that's the step which went too far, the box got slow. Next step: Make the box simpler again, do all of the heavy demuxing and assembling on the computer, computers are fast after all..

And so on and so forth.

14 hours agoTor3

The Lisp environments are definitely around, in LispWorks and Allegro Common Lisp.

16 hours agopjmlp

And Emacs. Sure, Elisp isn't the best Lisp around (personally I would give that title to Common Lisp), but Emacs is a good Lisp environment.

13 hours agomichaelanckaert

I'd say elisp is a competitive lisp in its own right, at least at the language level. There's even a promising gradual typing system: https://github.com/emacs-elsa/Elsa

4 hours agoquotemstr

Which one is most similar to the entirety of Genera and also has a reasonable price for hobbyist use?

6 hours agoSomeHacker44

And the Lem editor (an Emacs-like in Common Lisp).

9 hours agovindarel

Rather just "an Emacs" - it was a family of editors, after all, with GNU Emacs being the only living offspring.

9 hours agolycopodiopsida

Also Maxima.

12 hours ago_rpxpx

Okay, they're dead, but I think the interesting thing here is the relationship between hardware and the way mathematicians (potentially) think about problem solving. The established practices massively constrain the solutions we find, but I do wonder what a Turing Machine would look like if FPGAs had been around in 1930. FPGAs keep getting used to implement processors, but using one to make a C interpreter and then using it to run a vision library is probably not the best way to use FPGAs to recognise tanks with a drone. Which is, presumably, what a Zala Lancet is doing with its FPGA.

14 hours agoPeteragain

Some things have been tried; some things continue to be tried.

- Naylor and Runciman (2007) ”The Reduceron: Widening the von Neumann Bottleneck for Graph Reduction using an FPGA”: https://mn416.github.io/reduceron-project/reduceron.pdf

- Burrows (2009) “A combinator processor”: https://q4.github.io/dissertations/eb379.pdf

- Ramsay and Stewart (2023) “Heron: Modern Hardware Graph Reduction”: https://dl.acm.org/doi/10.1145/3652561.3652564

- Nicklisch-Franken and Feizerakhmanov (2024) “Massimult: A Novel Parallel CPU Architecture Based on Combinator Reduction”: https://arxiv.org/abs/2412.02765v1

- Xie, Ramsay, Stewart, and Loidl (2025) “From Haskell to a New Structured Combinator Processor” (KappaMutor): https://link.springer.com/chapter/10.1007/978-3-031-99751-8_...

More: https://haflang.github.io/history.html

11 hours agomietek

Wow! Thanks! It was half a thought, but that interaction is right up there with "the big red button" and makes the last 20 years of enshittification all worthwhile!

6 hours agoPeteragain

> Okay they're dead

jesus christ dont say that around here, youll be swamped by fanatical emacs users describing various bits of lisp theyve written over the years and what they each do. it will send you insane

9 hours agoskeezyjefferson

“ I am just really bored by Lisp Machine romantics at this point: they should go away. I expect they never will.”

What? They’re awesome. They present a vision of the future that never happened. And I don’t think anyone serious expects lisp machines to come back btw.

16 hours agokarlgkk

For me what's inspiring about lisp machines is not any particular implementation detail, but the very idea that working with a computer can be this immersive, holistic experience where everything is accessible and workable through a single, but multidimensional, human affordance: language. That the APIs and code within a computer system can be the ergonomic inward counterpart to rich, accessible user interfaces, just as a man works with his hands and looks with his eyes, but easily turns inward to think and imagine. This is what I felt when I got that leaked Genera image going in a Linux VM several years ago.

It's fair enough to say that lisp machines had this or that hardware limitation, or that they weren't really compatible with market needs, but to criticize 'lisp machine romantics' like this article does is to fail to understand what really motivates that romanticism. Maybe you have to be a romantic to really get it. Romanticism is abstract; it's about chasing feelings and inspirations that you don't really understand yet. It's about unrealized promises more than it's about the actual concrete thing that inspires them.

(I'm also an Amiga romantic, and I think what inspires me about that machine is equally abstract and equally points to a human attitude towards making and using software that seems sadly in decline today)

9 hours agorichardjdare

>They present a vision of the future that never happened

Hauntology strikes again

15 hours agoviccis

See also:

Amiga romantics.

8-bit romantics.

PDP-10 romantics.

Let them stay. Let them romanticize. <glasses tint="rose">

15 hours agolarsbrinkhoff

Without those people lots of history (and hence knowledge) would be lost. I’m happy they are around.

15 hours agof1shy

As an Amiga romantic, I’d say we have no illusions about a late-80s Amiga being a good idea if it existed today. But it captured my imagination (and at just the right age) like nothing else.

14 hours agokhazhoux

You're using an 8-bit machine right now.

12 hours agoErroneousBosh

Smalltalk romantics

10 hours agojdougan

NeWS romantics.

NeXTSTEP romantics.

ITS romantics.

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

>Unfortunately, NeWS and NeXTSTEP were political failures because they suffer from the same two problems: oBNoXiOuS capitalization, and Amiga Persecution Attitude(TM).

It's classic ESR that he shoplifted the ostensibly trademarked term "Amiga Persecution Attitude(TM)" from my Unix-Haters handbook X-Windows Disaster chapter without giving credit, and put it into his corrupted version of the New Hackers Dictionary, incorrectly mutated to "Amiga Persecution Complex", then he rudely redefined it negatively and inaccurately as "The disorder suffered by a particularly egregious variety of bigot".

There's absolutely nothing wrong with feeling persecuted when you really are, and Amiga users had a great thing going but got shit on all the time. Ironic that ESR would name-call perfectly respectable Amiga users "disordered egregious bigots", when they were totally justified to feel persecuted, while ESR's own actual bigotry runs so deep and wide and mean that it's the main thing he's known for now, so much that Thomas Ptacek actually raised $100,000 for charity to stop himself from tweeting vile ESR quotes.

https://github.com/PDP-10/its/blob/master/doc/humor/jargon.6...

Besides ESR's notorious politics and racism and sexism and homophobia (and noxious body odor), he hijacked the original Jargon File from the MIT-AI Lab, and polluted it with his vile ideology and made-up terms that hackers never used, and even tried to redefine what a hacker is, and himself as a hacker, which he definitely is not.

https://news.ycombinator.com/item?id=43139165

>It doesn't help that his infamous "Sex Tips for Geeks" essay comes off like it was written by Pepé Le Pew.

https://news.ycombinator.com/item?id=20382529

>And he hijacked the Hacker's Dictionary and proceeded to inject a bunch of his ideological words and definitions in, that in no way represent the hacker community around the early ARPANET and MIT-AI lab from which it came.

https://medium.com/@donhopkins/mit-ai-lab-tourist-policy-f73...

>And as if hijacking the Hacker's Dictionary wasn't enough, he tried to hijack and corrupt the very meaning of the word "hacker" itself, by presuming to define what a hacker is, by preaching to people about how to become a hacker, by blatantly misrepresenting himself as a great hacker and respected leader of the open source community, even though his programming chops are lackluster and outdated, his social skills are deceptive and manipulative, and his hacker spirit is mean, vindictive, and envious of RMS.

>And then there's his death threat to Bruce Perens that he tried to excuse by explaining that he was only trying to "defame" him.

>In 1999, Debian developer Bruce Perens published an "email threat" that he allegedly received from Raymond. Raymond then "clarified" that he only meant to defame Perens. From this we can assume that he is batshit insane and will fucking kill and or write to anyone that says anything about him or his software. If you are lucky you might get an O'Rielly book about you.

https://news.ycombinator.com/item?id=12320606

>Except that he's made a career of trying to pose as a hacker, define what a hacker is, tell people how to become a hacker, viciously attack the person, the philosophy and the life's work of one of the world's leading and most respected, influential and successful (if not quirky) hackers, and even hijack and distort the definition of the very words in the hacker's dictionary to reflect his own extremist political ideology, which most certainly does not align with the non-bigoted ideology in the Hacker's Manifesto that we're discussing.

https://news.ycombinator.com/item?id=36919981

https://www.dourish.com/goodies/jargon.html

>"This file, jargon.txt, was maintained on MIT-AI for many years, before being published by Guy Steele and others as the Hacker's Dictionary. Many years after the original book went out of print, Eric Raymond picked it up, updated it and republished it as the New Hacker's Dictionary. Unfortunately, in the process, he essentially destroyed what held it together, in various ways: first, by changing its emphasis from Lisp-based to UNIX-based (blithely ignoring the distinctly anti-UNIX aspects of the LISP culture celebrated in the original); second, by watering down what was otherwise the fairly undiluted record of a single cultural group through this kind of mixing; and third, by adding in all sorts of terms which are "jargon" only in the sense that they're technical. This page, however, is pretty much the original, snarfed from MIT-AI around 1988."

7 hours agoDonHopkins

I'm honestly surprised nobody tried to capitalize on the early 2000s Java hype by making some kind of Java box (there were a few things labeled as a Java OS or a Java workstation but none of these were really a "Java Machine")

16 hours agomghackerlady

Sun JavaStation: https://en.wikipedia.org/wiki/JavaStation

16 hours agolukego

I was aware of these; it's kinda what I meant by "none of these were really Java Machines". They were just shitty SPARC machines that had Java OS in flash. They didn't have any kind of Java co-processor and still relied on a JVM. Java OS was pretty neat, but I wouldn't really consider it a "Java OS" since it was basically just a microkernel that bootstrapped a JVM, from what I've read. An actual Java machine IMO would have to at least have some kind of Java co-processor and not rely on a software-based JVM

15 hours agomghackerlady

In theory you could say that SIM cards were (are?) tiny Java-on-a-chip machines.

15 hours agocalgoo

There were attempts to create a "Java Machine". Apart from JOP that is mentioned in another comment there were other systems that had a whole OS written in Java, a stub interrupt handler written in assembler would call into Java for any event.

7 hours agorjsw

Azul Systems was making Java machines a while ago.

12 hours agomepian

I'm sure the Lisp machines were very impressive compared to a DOS or Unix prompt, but today I can run like ten Amber or Newspeak environments on a constantly networked many-core system I carry around in my pocket. I'm not sure whether the CL folks have created similar web interfaces to the running image but I wouldn't be surprised if they have.

I feel it would be cool to sometime run code on a radiation hardened Forth chip, or some obscure Lisp hardware, but would it be life changing? I doubt it.

13 hours agocess11

IIRC the Open Genera folks said the Alpha RISC code to interpret the Symbolics instruction set, which fit in the Alpha's cache, ran about as fast as you'd expect microcode to run. So in a sense we're all writing microcode now?

It's probably worth reading this Alan Kay comment, which I excerpted from https://www.quora.com/Papers-about-the-Smalltalk-history-ref... on Quora before it started always blocking me as a robot:

> The idea of microcode was invented by Maurice Wilkes, a great pioneer who arguably made the earliest programmable computer — the EDSAC (pace Manchester Baby). The idea depends partly on the existence of a “large enough” memory that is much faster (3–10 times) than the 1st level RAM of the computer.

> A milestone happened when the fast memory for microcoding was made reloadable. Now programmable functions that worked as quickly as wired functions could be supplied to make a "parametric" meta-machine. This technique was used in all of the Parc computers, both mainframes and personal computers.

> Typical ratios of speed of microcode memory to RAM were about 5x or more, and e.g the first Altos had 4kbytes (1k microinstructions) that could be loaded on the fly. The Alto also had 16 program counters into the microcode and a shared set of registers for doing work. While running, conditions on the Alto — like a disk sector passing, or horizontal retrace pulse on the CRT — were tied to the program counters and these were concurrently scanned to determine the program counter that would be used for the next microinstruction. (We didn’t like or use “interrupts” … )

> This provided “zero-overhead tasking” at the lowest level of the machine, and allowed the Alto to emulate almost everything that used to be the province of wired hardware.

> This made the machine affordable enough that we were able to build almost 2000 of them, and fast enough to do the functionality of 10–15 years in the future.

> Key uses of the microcode were in making suitable “language machines” for the VHLLs we invented and used at Parc (including Smalltalk, Mesa, etc.), doing real time high quality graphical and auditory “animations/synthesis”, and to provide important systems functions (e.g. certain kinds of memory management) as they were invented.

> It’s worth looking at what could have been done with the early 16 bit VLSI CPUs such as the Intel 8086 or the Motorola 68K. These were CISC architectures and were fast enough internally to allow a kind of microcoding to support higher level language processing. This is particularly important to separate what is a kind of interpreter from having its code fetched from the same RAM it is trying to emulate in.

> The 68K in fact, used a kind of “nano-coding”, which could have been directed to reloadability and language processing.

> The big problem back then was that neither Intel nor Motorola knew anything about software, and they didn’t want to learn (and they didn’t).

> The nature of microcode is that architectures which can do it resemble (and anticipated) the RISC architectures. And some of the early supercomputers — like the CDC 6600 — were essentially RISC architectures as well. So there was quite a bit of experience with this way of thinking.

> In the 80s, the ratio between RAM and CPU cycles was closing, and Moore’s Law was starting to allow more transistors per chip. Accessing a faster memory off CPU chip started to pay off less (because going off chip costs in various ways, including speed).

> Meanwhile, it was well known that caching could help most kinds of architectures (a landmark study by Gordon Bell helped this understanding greatly), and that — if you are going to cache — you should have separate caches for instructions and for data.

> Up to a point, an instruction cache can act like a microcode memory for emulating VHLLs. The keys are for it (a) to be large enough to hold the inner loops of the interpreter, (b) to not be flushed spuriously, and (c) for the machine instructions to execute quickly compared to the cache memory cycle.

> Just to point the finger at Intel again, they did a terrible job with their cached architectures, in part because they didn’t understand what could be gained with VHLLs.

> A really interesting design was the first ARM — which was a pretty clean RISC and tidy in size. It could have been used as an emulator by wrapping it with fast instruction memory, but wasn’t. I think this was a “point of view” disconnect. It was a very good design for the purpose of its designers, and there wasn’t enough of a VHLL culture to see how it could be used at levels much higher than C.

> If we cut to today, and look at the systems that could be much better done, we find that the general architectures are still much too much single level ones, that ultimately think that it is good to have the lowest levels in a kind of old style machine code programmed in a language like C.

> A very different way to look at it might be to say: well, we really want zillions of concurrent and safe processes with very fast intermessaging programmed at the highest levels — what kind of architecture would facilitate that? We certainly don’t want either “interrupts” or long latency process switching (that seems crazy to “old Parc people”). We probably want to have “data” and “processing” be really close to each other rather than separated in the early von Neumann ways.

> And so forth. We won’t be able to be perfect in our hardware designs or to anticipate every future need, so we must have ways to restructure the lowest levels when required. One way to do this these days is with FPGAs. And given what it costs to go off chips, microcoding is far from dead as another way to help make the systems that we desire.

> The simple sum up here is that “hardware is just software crystallized early”, and a good systems designer should be able to design at all levels needed, and have the chops to make any of the levels if they can’t be purchased …

4 hours agokragen

"Old man yells at Lisp Machines (And their enthusiasts)"

16 hours agoN_Lens

i.e., at other old men.

14 hours agokazinator

Some of us are older than others (I'm 66.)

10 hours agopfdietz

I yell at clouds professionally, as do a lot of people these days, young and old. It's called "YellOps".

7 hours agoDonHopkins

I'm not so sure it's down to the hardware. With something like a 180-bit-wide microcode store - a very, very horizontal microarchitecture - the hardware sure was specialised, but I think it's fundamentally down to Lisp itself.

I don't know a lot of Lisp. I did some at school as a teenager, on BBC Micros, and it was interesting, but I never did anything really serious with it. I do know about Forth though, so perhaps people with a sense of how both work can correct me here.

Sadly, Forth, much as I love it and have done since I got my hands on a Jupiter Ace when I was about 9 or 10 years old, has not been a success, and probably for the same reasons as Lisp.

It just looks plain weird.

It does. I mean, I love how elegant Forth is: you can implement a basic inner interpreter and a few primitives in a couple of hundred lines of assembler, and then the rest is just written in Forth in terms of those primitives (okay, pages and pages of dw ADDRESS_OF_PRIMITIVE instructions rather than Forth proper). I'm told that you can do the same trick with Lisp, and maybe I'll look into that soon.
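
As I understand it, the Lisp version of the trick is a tiny evaluator built on a handful of primitives (CONS, CAR, CDR, EQ and friends), in the spirit of McCarthy's original eval. A rough, purely illustrative sketch in Common Lisp - the names and the closure representation here are my own, not taken from any real system:

    ;; Toy evaluator for a tiny Lisp subset, built on a handful of
    ;; primitives plus host function application: the Lisp counterpart
    ;; of bootstrapping a Forth from a few machine-code words.
    (defun lookup (sym env)
      "Look SYM up in the association-list environment ENV."
      (cond ((null env) (error "unbound: ~a" sym))
            ((eq sym (caar env)) (cdar env))
            (t (lookup sym (cdr env)))))

    (defun toy-eval (form env)
      (cond ((numberp form) form)                    ; numbers self-evaluate
            ((symbolp form) (lookup form env))       ; variable reference
            ((eq (car form) 'quote) (cadr form))
            ((eq (car form) 'if)
             (if (toy-eval (cadr form) env)
                 (toy-eval (caddr form) env)
                 (toy-eval (cadddr form) env)))
            ((eq (car form) 'lambda) (list 'closure form env))
            (t (toy-apply (toy-eval (car form) env)
                          (mapcar (lambda (a) (toy-eval a env)) (cdr form))))))

    (defun toy-apply (fn args)
      (if (and (consp fn) (eq (car fn) 'closure))
          ;; (closure (lambda (params) body) env), single-expression bodies only
          (let* ((lam    (cadr fn))
                 (params (cadr lam))
                 (body   (caddr lam)))
            (toy-eval body (append (mapcar #'cons params args) (caddr fn))))
          (apply fn args)))                          ; otherwise a host primitive

    ;; (toy-eval '((lambda (x) (cons x (quote (2 3)))) 1)
    ;;           (list (cons 'cons #'cons)))
    ;; => (1 2 3)

Everything past those few special forms could, in principle, be written in the language itself, the same way most of a Forth ends up written in Forth.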

But the code itself looks weird.

Every language that's currently successful looks like ALGOL.

At uni, I learned Turbo Pascal. That gave way to Modula-2 in "real" programming, but by then I'd gotten my hands on an account on the Sun boxes and was writing stuff in C. C looked kind of like Pascal once you got your head round the idea that curly brackets weren't comments any more, so it wasn't a hard transition. I wrote lots of C, masses and masses, and eventually shifted to writing stuff in Python for doing webby stuff and C for DSP. Python... looks kind of like ALGOL, actually: you don't use "begin" and "end", you just indent properly, which you should be doing. Then Go, much later, which looks kind of like Pascal to me, which in turn looks kind of like ALGOL.

And so on.

You write line after line after line of "this thing does this to that", and it works. It's like writing out a recipe, even more so if you declare your ingredients^W variables at the top.

I love Forth, I really want to love Lisp but I don't know enough about it, but everyone uses languages that look like ALGOL.

In the late 1960s Citroën developed a car where the steering and speed were controlled by a single joystick mounted roughly where the steering wheel would be. No throttle, no clutch, no gears, just a joystick with force feedback to increase the amount of force needed to steer as the car sped up. Very comfortable, very natural, even more so when the joystick was mounted in the centre console like in some aircraft. Buuuuut, everyone uses steering wheels and pedals. It was too weird for people.

12 hours agoErroneousBosh

> Sadly, Forth, much as I love it and have done since I got my hands on a Jupiter Ace when I was about 9 or 10 years old, has not been a success, and probably for the same reasons as Lisp.

I don't like it when anything short of taking over the world counts as failure. Forth has been an enormous success! Forth has visited asteroids, run factories and booted millions of computers. It has done well, and if it's heading off into the sunset it should be remembered for what it did rather than what it didn't do. I would be beyond thrilled if my language did a tenth as well as Forth.

8 hours agokryptiskt

I didn't say it was a failure, I just said it wasn't a success.

It fits a particular ecological niche, but these days there's almost no reason to do things that way. In the olden days of the early 90s when I needed to write embedded code to run on what was basically a Z80 SBC, it was easier to write a Forth for it and assemble it natively on a clunky old CP/M machine (I used a Kaypro of some sort at work, but an Osborne 1 at home) than it was to struggle on with the crappy (like, really crappy) MS-DOS cross-assembler on the PCs we had.

Now of course I could emulate every single computer in the entire company on a ten quid embedded board, all at the same time.

5 hours agoErroneousBosh

I am mostly a Java programmer but I really enjoy writing Lisp code. It’s not really weird in my opinion, but the programmer must format the code nicely for it to be readable. If you do that, it looks like any other language to me, except it has very little syntax, so it feels a bit more pure. Similar to Forth, which has essentially no syntax, just white space between tokens. In Lisp it’s just lists all the way. Just so you know, in Lisp you can do procedural programming, i.e. line after line of instructions… it’s just not the common way to do things; people write much more functional code than in most languages.
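
For instance, here is the same little computation written procedurally (a sequence of statements mutating a local) and then in the more usual functional style; the function names are just made up for illustration:

    ;; Procedural style: declare a local, loop over the input, mutate it, return it.
    (defun sum-squares-loop (numbers)
      (let ((total 0))
        (dolist (n numbers)
          (incf total (* n n)))
        total))

    ;; The more idiomatic functional style of the same thing.
    (defun sum-squares (numbers)
      (reduce #'+ (mapcar (lambda (n) (* n n)) numbers)))

    ;; (sum-squares-loop '(1 2 3))  => 14
    ;; (sum-squares '(1 2 3))       => 14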

10 hours agobrabel

> I’d be saying that in a few years there are going to be a lot of huge farms of GPUs going very cheap if you can afford the power. People could be looking at whether those can be used for anything more interesting than the huge neural networks they were designed for.

The author falls into the same trap he talks about in the article. AI is not going away; we are not going back to the pre-AI world.

14 hours agodist-epoch

I don't read it that way.

The author is saying that those special purpose machines will age out quickly when the task of advanced computing shifts (again).

You seem to be making the assumption that "the huge neural networks they were designed for" are the only way to build AI. Things could shift under our feet again.

The author (and I) have seen too many people say that, only to be proved very wrong shortly thereafter. So the assertion that this time we have the approach right doesn't carry quite the logical force one might think (ignoring the previous 7 times somebody else said just the same thing).

2 hours agoted_dunning

AI will not go away, I agree. But many of the companies now betting the farm on AI are going to lose, and there will be server farms going for sale cheap. I'm hearing more and more people outside the tech world talk about the AI bubble and predict it's going to pop. When that happens and investors lose confidence, companies that need the next round of financing to pay off their current debts suddenly won't get it, and will go under.

I can't predict when the shakeout will be, but I can predict that not every AI company is going to survive when it happens. The ones that do survive will be the ones that found a viable niche people are willing to pay for, just as the dot-com bubble bursting didn't kill PayPal, eBay, and so on. But there are definitely going to be some companies going bankrupt; that's pretty clear even at this point.

14 hours agormunn

> I'm hearing more and more people outside the tech world talk about the AI bubble, and predicting it's going to pop

I'm juuust about old enough to remember the end of the Lisp Machine bubble (we had one or two at uni in the early 90s, and they were archaic by then). But obviously Lisp machines were the wrong way to go, even if they were a necessary step - obviously, hardware-mediated permanent object storage is the way forwards! POP! Ah, maybe not. Okay but can't you see we need to run all this on a massive transputer plane? POP! Oh. Okay how about this, we actually treat the microcode as the machine language, so the user-facing opcodes are like 256 bits long, and then we translate other instruction sets into that on the fly, like this - the Transmeta Crusoe! It's going to revolutionise everything! POP! Ah, what? Okay well how about...

And we're only up to the early 2000s.

It's bubbles, all the way back. Many of these things were indeed necessary steps - if only so We Learned Not To Do That Again - but ultimately are a footnote in history.

In 30 years' time people will have blog posts about how in the mid-2020s people had this thing where they used huge sheds full of graphics cards to run not-working-properly Boolean algebra to generate page after page after page of pictures of wonky-looking dogs and Santa Clauses, and we'll look at that with the same bemused nostalgia as we do with the line printer Snoopy calendars today.

12 hours agoErroneousBosh

Lisp machines, Transputers, Transmeta, even RISC were all academic-driven bubbles. They were spun out of university research projects. (Transmeta went indirectly via Bell Labs and Sun, but it was still based on academic ideas.)

The culture was nerdy, and the product promises were too abstract to make sense outside of Nerdania.

They were fundamentally different to the dot com bubble, which was hype-driven, back when "You can shop online!" was a novelty.

The current AI bubble is an interesting hybrid. The tech is wobbly research-grade, but it's been hyped by a cut-throat marketing engine aimed at very specific pain points - addictive social contact for younger proles, "auto-marketing team" for marketers, and "cut staffing and make more money" promises for management.

11 hours agoTheOtherHobbes

All true, but all the ones you list were very small bubbles. The dotcom bubble is the only one comparable (in size and industry-wide enthusiasm) to the LLM bubble.

8 hours agoAnimalMuppet

And that popped too. Lots of people spent good money on "buycatfoodonline.com" and pitched that part of it as the clever bit of the idea. Hey, I came up with the idea, all you have to do is write the site, that's easy, isn't it?

No, as it turns out.

But it was fun while it lasted.

3 hours agoErroneousBosh

Well, the only one in our modern tech sector. There have been famous crashes in the past related to earlier technologies. The Panic of 1873 was caused by a railroad bubble, and arguably the Great Depression was at least partially due to a bubble from electrification and motorization.

5 hours agopfdietz

Most will fail, but I don't say this because I'm a pessimist: it's just that for every AI business idea, there are always at least 10 different competitors.

13 hours agowhstl

> I'm hearing more and more people outside the tech world talk about the AI bubble, and predicting it's going to pop

You know what they say about when the taxi driver is giving you strong financial opinions.