> time to run it after not touching it for 4 years
> Two hours of my life gone...
Two hours of work after 4 years sounds ... perfectly acceptable?
And it would have run perfectly right away if the node version was specified, so a good learning, too
This feels like making a mountain out of a molehill
Well, the "solution" ended up as "I gave up and just installed an old Node version and called it a day". So those 2 hours weren't even enough.
I've been using Jekyll/Ruby since 2014 for my website, with a few custom plugins I wrote myself. And I've never really needed to do anything like this. It "just works".
My Go and C programs are the same: "just works". I have some that are close to a decade old.
Good for you, my experience with Jekyll is closer to OP's experience with Node. I have a big website that I built in 2014, with tons of custom plugins, that is now stuck on Jekyll 2.x and Ruby 2.x, and has a ton of hidden C++ dependencies. The way I build it now is using a Dockerfile with Ubuntu 18.04. I probably could update it given enough effort, but I was rather thinking of rewriting it in Astro.js or Next.js.
This is the issue I have with the "build vs buy (or import)" aspect of today's programming.
There are countless gems, libraries or packages out there that make your life easier and development so much faster.
But software (in my experience) always lives longer than you expect it to, so you need to be sure that your dependencies will be maintained for that lifetime (or have enough time to do the maintenance or plug in the replacements yourself).
If you're looking for a stable target you should not even consider Next.
Just avoid JavaScript frameworks altogether.
[deleted]
Yes indeed, that is the solution to modern IT problems - never update your Ubuntu 18 containers and you're set.
(Wish I was joking, but sadly I'm serious.)
Not sure if I'd call out Jekyll as a paragon of stability. The last time I touched it, I made sure to write up detailed notes. In fairness, it's the only time I interact w/ Ruby.
Those mostly seem the standard Ruby/Jekyll/GitHub Pages setup instructions?
I don't love how bundler works by the way; I think it should/could be a lot better in many different ways. Same for Jekyll. But once it works, it generally keeps working.
I don't think Jekyll (or Ruby) are a paragon of stability. I'm sure some stuff has broken over the years. It just seems to break a lot less than the JS/Node ecosystems.
I dug out a small Rust project from 2016 and, with edition = 2018, got it running again in under 30 minutes; I was kinda surprised. 8 years is ancient in terms of Rust. I have had more problems with certain other crates. But yeah, C/C++ usually don't really compare. 5 years is nothing, it should just work. For Go the big breaking moment was modules. All my pre-2016-ish code would need some work.
Rust is kinda like C++ here: it's easy until you have a library that has external, non-vendored dependencies.
Every single time I clone anything Go, I first spend a few hours sorting out dependency issues.
This shocks me, what sort of issues do you hit?
+1 on this. I've been using Go almost exclusively for the last 5 ish years partly because this sort of thing never happens.
You lucked into the period when they solved the issues. If you need to work with older projects and can't easily convert them, you're going to have a bad time.
I open up random Go projects from a decade+ ago every week with no issues.
Lucky you, then. I open them every week too, but definitely not without issues.
Mainly weird issues of incompatible dependencies, maybe sometimes something that won't compile because it's missing a system dependency? I also feel like it sometimes pulls a newer or older dependency that is then missing some module the code uses.
I still can open my decade-old Java projects, run build with modern Maven/JDK and get working code - in a few minutes. Two hours of ritual dancing doesn't feel acceptable to me.
> I still can open my decade-old Java projects, run build with modern Maven/JDK and get working code - in a few minutes.
Try any old Spring project, where anything newer than JDK 8 will be incompatible. The only saving grace is that JDK 8 is still available, but even it will eventually reach EOL. And then you look at JDK 11 projects and realize that they won't run on anything newer due to Lombok issues, so that's another thing to update and fix.
I think the experience of code rot is universal and increases with the amount of dependencies you have.
> I think the experience of code rot is universal and increases with the amount of dependencies you have.
This is true but there's also a factor from the language/framework in use. Node is especially bad because it generates huge package dependency trees. Go is especially good because of the large stdlib (which I use to minimize deps in https://github.com/contribsys/faktory) and excellent backwards compatibility.
Plus even if you were using lots of deps, the Go code would almost certainly keep working, thanks to minimal version selection and the Go module mirror. (And of course for maximum reliability, could even easily vendor deps with "go mod vendor".)
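For reference, vendoring is a one-liner, and any reasonably recent Go toolchain will use the vendor directory automatically once it exists:

    go mod vendor    # copy every module dependency into ./vendor
    git add vendor   # commit the sources alongside your own code
    go build         # Go 1.14+ builds from ./vendor automatically when present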
> Go is especially good because of the large stdlib
Come on… compare it with what java and python have.
Java can't even read/write json without third party libraries.
Tell me a single thing that Oracle has added to the standard library. As far as I can tell, more and more of what was once standard is now getting offloaded and trademark-washed with the eclipse and apache foundations.
What does Java or Python have in their stdlib that go doesn’t?
Well python has syslog, mmap, just to name a couple of things.
They had mmap, but it was dropped at some point, I guess.
Spring generates bytecode, and Lombok is a language extension. You're effectively writing code in another language that's very similar to Java.
The company I work at has a ton of projects stuck on ancient spring versions and Java 8 (or Java 6 in one instance). They still insist on Spring despite being essentially unable to upgrade to a version newer than a decade old.
> Spring generates bytecode, and Lombok is a language extension. You're effectively writing code in another language that's very similar to Java.
That's the reality of using the language in production projects.
Same as how you're likely to see more than just Python or Node being used in projects that list them in the tech stack, because there's native dependencies and tooling used.
Old Spring versions < 6.1 are obsolete, so you do have to upgrade Spring in many scenarios as well. No security fixes for them unless you pay for support.
I spent an hour or two figuring out how to even download Java, whether I need to give Oracle my home address, use a third party JDK etc. Then it turned out the standard built in GUI library I needed was no longer standard or built in. (I also used it ten years ago and it was a much better experience then.)
JavaFX? I’ve been there too, had to install it with maven and waste a few hours getting the maven xml just right for it to bundle the entire thing into a runnable jar (one that doesn’t require shipping extra files). Not the most convenient, but I get that they wanted to split it off the JDK release cycle.
>I spent an hour or two figuring out how to even download Java, whether I need to give Oracle my home address, use a third party JDK etc.
My son found a disk with some of my old java project from college 20 years ago and that's about what it took to run them, first figuring out how to even download java and then making some minor changes to get them running. I think we gave up trying to get the actual applet based ones to run.
I know this is a late response, but for anyone curious you would need to use an official Oracle or Sun JDK/JRE from Java 8 or older. OpenJDK doesn't include support for applets.
You also need a browser that has NPAPI. IE 11 was the most modern browser I am aware of that still supported applets.
The old GUI framework mentioned in the GP might have been Swing. It is still included in most JDKs and allows for cross platform desktop GUI application development with no other dependencies outside the JDK. Finding documentation on how to do GUIs in Swing is getting increasingly difficult though.
IIRC Oracle makes it really hard to find the download. Admittedly we didn't spend more than an hour or two messing around with all of it and getting the applets to run wasn't a high priority or anything.
Applet. That's a name I haven't heard in a very long time.
Well, I would spend probably similar time to get started on Pascal, which I used last time in the previous century. I would not blame Pascal for it though.
Maven, maybe, but Gradle absolutely not. If you don't have the exact version of Gradle that you used before, you're in for the same kind of misery documented above, with the same end state: just stick to the old version and deal with the upgrade later.
If you use the Gradle wrapper, it will use the correct version and download it if necessary. If I'm not mistaken, that's the recommended approach.
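Roughly like this (the version number is just an example):

    gradle wrapper --gradle-version 7.6   # generate and commit the wrapper once
    ./gradlew build                       # everyone then builds with the pinned Gradle, not a system install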
Well, I'm not talking about Gradle, right? Sometimes the conservative choice is what gets the job done.
Right, I'm just clarifying for others who may not know the difference that Node doesn't have a monopoly on instability.
There are a very small number of projects that specifically make it their goal to be backwards-compatible effectively indefinitely, and Maven is one of those. It's part of what people who hate it hate about it, but for others it's the main selling point.
Well, the article is about npm, a package manager for node. Vendoring dependencies into source is a choice, albeit one that I don't often reach for.
> Well, the article is about npm, a package manager for node.
And Maven is a package manager for Java. The main difference IMO? The usual way to do things in Maven is to always use exact versions for the dependencies. When I specify I want some dependency at version 1.2.3, Maven will use version 1.2.3 of that dependency even if 1.2.4 or later already exists.
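In pom.xml terms that looks like this (group/artifact are placeholders):

    <dependency>
      <groupId>com.example</groupId>
      <artifactId>some-library</artifactId>
      <!-- Maven resolves exactly this version; it never silently floats to 1.2.4 -->
      <version>1.2.3</version>
    </dependency>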
Pinning to exact versions has been supported in npm for most, if not all of its life.
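For example (hypothetical package name), just leave off the ^/~ prefix, or tell npm to always save exact versions:

    # package.json excerpt -- "1.2.3" rather than "^1.2.3"
    {
      "dependencies": { "some-library": "1.2.3" }
    }

    # make exact versions the default for future installs
    npm config set save-exact true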
That’s the usual way to do things in most teams working on app code I’ve been a part of (as opposed to library code where version ranges are preferable).
Maven is also stable and requires only a JDK to run. You can unzip it to the desired location and add it to PATH - it will be ready to go in a couple of minutes.
It will not build all old projects out of the box, of course. Specific versions of plugins may not be compatible with it or some dependencies may break on modern JDK. But chances of hitting this issue are much lower than in JS/NPM ecosystem.
Might be an unpopular opinion but I like it that way. It means that I can pull an old Gradle version and the build should succeed. Unlike maven where I’ve been bitten by plugin xml syntax changes and such.
`make` works pretty well on ... everything I've written that I kept since then.
`cmake ..` otoh, tends not to.
`make` itself works fine, that hasn't changed in decades. That doesn't mean the tooling `make` scripts kick into gear still exist, or work, or do the same things, though. Makefiles don't fix the dependency on a fifteen year old C compiler.
If node saved you 8 seconds per working day over those 4 years, you’d still be ahead spending 2 more hours on that node project every 4 years.
C# devs can open decade+ old solutions without issues, maybe this is just "normal" for the JavaScript ecosystem, but there absolutely exist other ecosystems which don't absolutely waste your time in this way.
Not true for the entire C# ecosystem. I tried rebooting a Xamarin project I coded a couple of years ago. Had to spend weeks upgrading it because Microsoft decided to discontinue Xamarin and force everyone onto .NET MAUI.
This has to do with specific framework and does not translate to the overall experience (for example targeting iOS is pain because Apple deprecates versions quickly, so downstream dependencies have to match this too).
You can open and build a back-end application that targets e.g. netcoreapp2.1 (a 6-year-old target) just fine; it might require you to install an archived SDK for the build to succeed (which you can download regardless, though it will complain that it is EOL), but it's an otherwise simple procedure.
For library code it's even easier - if you have netstandard2.0 target, it will work anywhere from .NET Framework 4.6.1 to the latest version without requiring any maintenance effort whatsoever.
On Windows, Visual Studio will happily work with .NET Framework 3.0 (which is ancient) and more.
If you used pure Node.js, you wouldn't have any problems whatsoever either.
Unlike Node projects, your dependencies don't break - they stay on nuget.org, and the projects compile just like they did when they were first implemented. Upgrading dependencies and project target is subject to the same restrictions as in other good languages.
I know for sure that I can clone a random project on Github, hit `dotnet build` and usually expect it to work on the first try. The rate of bitrot for average unmaintained JS project cannot be compared. The average dependency graph of a .NET project is going to be 10 times smaller too.
This is true for NodeJS as well - dependencies on NPM cannot be removed, so at long as NPM keeps running, and as long as you've got a lockfile that precisely specifies all dependencies, you shouldn't run into problems running older projects. (Certainly I have never had an issue with this - the problems, if they occur, are always around upgrading. And like you say, this is subject to much the same issues in every ecosystem).
If this isn't enough, there are tools like Yarn which encourage vendoring dependencies to ensure that as long as you have a copy of the repository, the yarn CLI, and a valid version of NodeJS, you can always run the project.
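Concretely, with a committed lockfile you'd run something like this (entry point name is hypothetical):

    npm ci          # installs exactly what package-lock.json records, fails if it's out of sync
    node index.js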
Yeah, that's nice, unfortunately it's the built-in Microsoft stuff like WPF that keeps breaking between versions.
Note that Microsoft still removes downloads of previous versions of Visual Studio that aren't the latest - now you can only download VS2022 unless you have some mysterious paid account that has access to those. I don't have one on either my personal or my corporate account, so I don't know the details.
This is true and is a pain if you’re stuck on something like Windows 7/8.1 (or 10 in the future). The installer being a web installer that auto-updates on start also means that downloading an older one doesn’t help at all. The only way seems to be to pirate the Enterprise offline installer which of course isn’t really an option.
As it happens, I've recently upgraded a 2-year-old project with both Node and C#.
It was much the same in both. If you're happy using outdated and unsupported components with security issues AND you can get hold of the right version of the dev tools and plugins AND your hosting environment still supports the old platform, you can maintain the old version with minimal change. But should any professional developer do this?
It depends.
In domains where software is expected to run for years, if not decades, it's common to archive the entire toolset, along with libraries and SDK's in case you need to fix a bug/add a feature 10 years later. Obviously, in this case you can't have dependencies sitting somewhere on a server you don't control.
There are also situations where this is forced by regulation: you need to recreate build xxx.yyy.zzz 7 years after release because a customer reported a serious bug and it needs to be reported to the relevant regulatory agency and investigated.
Not any more. Security vulnerabilities in dependencies now break the build by default.
It's arguable whether this is the correct decision, but it makes things slightly harder than they used to be
> C# devs can open decade+ old solutions without issues
For some definition of "without issues"...
I wish I lived in the world you described but trying to onboard a new dev onto an existing (edit: ancient) C# project at my job is frequently a multi-day endeavor.
I think there are much bigger deviations from long-term stability the closer you get to a GUI, as things deviate further from ideals into brute-force hacks to make stuff look right.
I can remember trying to update a Unity project so it would still be buildable for 64-bit Mac devices. The very first version bump I did resulted in several critical libraries no longer working, with no clear alternatives to swap in.
As someone who has migrated old ASP/silverlight projects and BHOs... seriously, this is complete bullshit.
This depends highly on what dependencies are in your C# solution.
Same for his node project. If he'd stuck with dependencies that are just plain js - I'd bet money his project would have installed & built just fine.
By the time you're hitting the native addon apis and doing things like compiling python and C/C++ code... you're going to feel all the pains of those ecosystems too.
Lol, I left C# because I couldn't solve this issue, and in Node.js it's particularly easy - just keep a .nvmrc file and a dependency lockfile.
Sounds like you are way too used to the javascript ecosystem if you think getting an old project to build should take hours...
What ecosystem are you comparing to?
Any C/C++ project with even mild complexity has a good chance of being extremely difficult to build due to either missing libraries that have to be installed manually, system incompatibilities, or compiler issues.
Python has like 28 competing package managers and install options, half of which are deprecated or incompatible. I can't even run `pip install` at all anymore on Debian.
Even Rust, which is usually great and has modern packaging and built-in dependency management, often has issues building old projects due to breaking changes to the compiler.
All this is to try to say that I don't think this is some problem unique to JS at all - but rather a side effect of complex interconnected systems that change often.
A big reason Docker and containers in general became so popular was because it makes this problem a lot less difficult by bundling much of the environment into the container, and Docker is very much agnostic to the language and ecosystem running inside it.
Java has a great ecosystem. It’s well thought out and I can compile and run 10 year old projects no problem. In fact, I wish everyone had just copied Java’s model instead of inventing their own worse model.
I love Python but it has a terrible package ecosystem with mediocre tooling that has only gotten worse with time.
JavaScript has gotten better but it seems they are just re-learning things that were long figured out.
When I see new package managers, I just see a list of problems that they forgot to account for. Which I find strange when there have been many package managers that you can learn from. Why are you re-inventing the wheel?
In JetBrains's Developer Ecosystem 2023 survey, 50% of developers were still regularly working in Java 8 [0]—the exact kind of "stick with the old version of the runtime" solution described in TFA.
Java 8 is 10 years old. If you had a project with a Java version that was recent 4 years ago (11 - 14), you could run it without any problems or changes.
Because they made the design choice to stop making large breaking changes to the language and tooling. Java 8 to 9 wasn't easier than Java 8 to 17 is, it's getting off of Java 8 that is hard because they made the choice to break so much in 9.
Node does not promise indefinite backwards compatibility, which is a design choice that they've made that allows them to shed old baggage, the same way that the Java developers chose to shed baggage in 8->9. Neither choice is inherently better, but you do have to understand which choice a language's designers were making during the time window in question when you go to run it later.
"Java has a great ecosystem. It’s well thought out and I can compile and run 10 year old projects no problem."
We just had to workaround breaking changes in a patch version update of Spring Boot. Maybe it was true in 2005, but certainly not the case today. I know of products that are stuck in Java 1.8 and not because they are too lazy to upgrade.
I've been involved in bringing real old Java 1.4 and 6 and whatnot up to 17 and from classic app servers into cloud, can take a bit of work but it's pretty straightforward, mostly switching out deprecated methods to their successors and copying over boilerplate config from similar applications.
Depends on the frameworks in use. I have done the same as you in bringing code that was originally started on 1.4 up to 17 and now 21 and it just took grunt work, but not too crazy since most of it was bog standard Java.
However, some other projects around here using different application frameworks are stuck since the frameworks aren’t maintained or upgraded in ways that aren’t compatible anymore.
Looking into old Java code, it is hard to remember a time before enums and what a pain it is to deal with int constants instead of typed constants.
I am not sure you should put ant or maven as shining examples here, but I am kinda warming up to Gradle, at least without Groovy being involved.
What do you get from Gradle that Maven cannot offer?
JavaScript is a horrible language because it is basically missing a standard library, so you need external dependencies even for the most basic things that are already present in other languages.
Python has a very rich standard library. You can do a lot with libc, and if you had a C++ Qt project it would provide you with basically everything you could ever need.
> JavaScript is a horrible language because it is basically missing a standard library, so you need external dependencies even for the most basic things that are already present in other languages
That's not the only reason. :)
Horrible syntax full of inconsistencies, a bolted-on type system (TypeScript helps, but it will always be bolted on), quirks everywhere, and as if `null` were not bad enough they also have `undefined`. I could go on.
I simply avoid it for anything but small enhancement scripts on otherwise static HTML pages.
It's okay for you to have the opinions you do, but I have zero problems programming very complex systems with Javascript, even without Typescript (before Typescript ever existed). Javascript has always been the easiest language to build anything with for me. And yes, I know a dozen other languages including C, C++, C#, Python, Go, various flavors of Assembly, and more - but Javascript is still my favorite. YMMV.
> It's okay for you to have the opinions you do
Likewise.
> I know [...] C, C++, C#, Python, Go, various flavors of Assembly
That's good. But these are all languages that either lack strong typing and/or are themselves rather quirky.
Only C# and Go stand out, IMHO, as languages that are recently designed. Even Python did not have user defined classes in the first versions, and some things thus feel off (__len__, __init__, etc.).
Also C# and Go still have implicit nulls all over the place. Their designs show ignorance for modern language design. Sum-types, explicit null, immutability, sound type systems -- all lacking in all langs you mention.
So what languages do have these IMHO "Game changers"? OCaml/ReScript/ReasonML, Haskell, Elm, Rust, Gleam, F#, Scala, Kotlin, ...
Those languages really showed _me_ something important: how it could be better.
There is another group of languages that also sits in a unique place in the solution space: the LISPs (incl. Racket, Schemes and Clojure). I found it very worthwhile to learn to program with them as well.
In the last few years C# did away with implicit nulls. Nullable and non-nullable object references are disambiguated with T? and T. There are multiple keywords and expressions to further make it nice to work with these. You would be correct to note that there are "nullability holes" in certain edge-case scenarios, particularly around JSON serialization. But other than that it's a pretty smooth sailing.
If you do use C#, you may also want to add <WarningsAsErrors>nullable</WarningsAsErrors> to .csproj too.
I know it's a bit like Kotlin. I heard though that C#'s move still has some std lib bits that are nullable.
The idea is not to never have nulls; that would be pointless (ha). The way to understand T? vs T in C# is like an optional.
The entire standard library has been annotated since long ago. All new and not-so-new projects are also null-aware. Pretty much the only ones that aren't are completely legacy libraries, or libraries that explicitly removed the Nullable: enable setting that is on by default for all new project templates.
As I mentioned previously - it isn't perfect, but the level of "good enough" of NRTs in .NET is such that the nullability is a solved problem.
I prefer Assembly before any of the fringe languages you mentioned. There are no types in Assembly, and I rather like it that way. If you know how to work with data, then there's very little confusion about what you're doing.
And there's a reason practically nobody uses the languages you mentioned, and Javascript is so wildly popular. Most people don't really like or need type nagging systems. Sure, if you're trying to launch a rocket or doing something like building medical equipment or something else that requires covering your ass, then yeah, sure, go ahead and type the hell out of it. But for most programming tasks the languages you mentioned are overkill and frankly too obscure to use.
JS is popular due to it being the only browser lang. Did you forget that?
Also: TS popularity shows that not everyone in the JS community agree with you.
For server-side web dev you will find that statically typed langs (Java, Kotlin, C#, Go) are a big chunk of the pie. Sure, it also comes down to taste, but if you work in a large team, having stronger types can greatly help to keep the codebase in shape (better IDE refactor tools, clearer for newcomers, harder to check rubbish into git).
If you are afraid by obscurity, have a look at Kotlin!
Typescript is still nowhere near as popular as Javascript, and likely never will be. It's overkill for the vast number of uses of Javascript.
JS's uses have been growing steadily. The language was clearly not "designed" (I saw a talk about the first 10 days of JS; little actual designing happened).
TS is to the rescue when you have a big JS project, because that's what JS is not good at: big projects and large teams.
Except big JS projects with large teams existed well before Typescript ever did, and it never stopped anyone from producing good results. Typescript really isn't making anything possible that wasn't possible before Typescript existed.
I’d love a link to this talk if you happen to have it handy!
only one type :p
A standard library is not a "language" feature.
I check out C projects which have not been updated in 15 years and run make.
I have C code that I wrote in 1995 that still builds. There are many warnings, however.
You can't use 'pip install' on Debian because of choices they made during the transition from python2 to python3. You should use 'pip3 install', which is provided by the python3-pip package from Debian.
One can argue that this decision should be revised by Debian, but you should not install packages into the system Python installation when working on projects. Always use a virtual environment.
No that does not work either. You get an error like this:
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
As far as I can understand, they did this on purpose to dissuade users from installing packages globally to avoid conflicts with other Python environments.
Anyway, I'm not trying to argue about if that decision is right or not - I just wanted to use it as an example for my case that the JS ecosystem isn't alone and may even be far from the worst when it comes to this kind of issue.
I understand that. You can use `--break-system-packages` or change the configuration: `python3 -m pip config set global.break-system-packages true`.
Python is different here because in many Linux distributions there are many tools that rely on your system Python. Python, unlike Node, is not limited (in practice) to web applications. That's why you have to be more careful. So while I understand you are using this as an example, I don't feel that your comparison is apples to apples.
>Python unlike node is not limited (in practice) to web applications. that's why you have to be more careful.
They may or may not be running Node.js specifically, but I believe that many Linux distributions, as well as Windows, include JavaScript code in core applications. I don't see this as particularly different, except that they might choose to assume a single standard system Python that is able to be modified by standard Python development, whereas I would rarely expect that to be the case with however each component chooses to execute JavaScript.
Apps that rely on an OS-provided webview, and Electron apps, are a totally different situation. This is exactly what I said. And no, they don't use any standard Node.js installation the way tools use the system Python. They are different, as I said, so this is still an apples-to-oranges comparison.
>Apps that rely on OS provided Webview and electron apps are totally different situation.
No, they're not. I'm talking about core apps and services that are essential to a functional operating system. This is exactly the same situation. The difference is choices made by the OS and language ecosystem about how to manage dependencies in various use-cases. It is an apples to oranges comparison because of those decisions and not because of the language.
Which apps would that be ? I'm pretty sure there's zero node server running on my desktops & laptops at the moment
My slightly heretical opinion is that Debian would have been better off removing system pip entirely. The system python is for the system.
My not-so-heretical opinion is that pip should behave like npm by default and work under a local environment subdirectory, just like "npm install" already creates a "node_modules" directory to put all the files in, without the user needing to specify how and where and which env tool to use.
Ah, but that would require that the python interpreter look first in the local directory in case there's a virtualenv there, which would mean your system could break depending on which directory you ran bits of it from. Less than ideal.
It's better all round to just assume that unless you're building something to be a part of the system itself, that the system interpreters just aren't for you. There's a special case for shells where they're actually UI, but I've seen so much effort wasted over the years trying to let system interpreters do double-duty as both system tools and development environments that I've come to the conclusion that it's simply not worth the hassle.
That's the idea, yes. Do you by any chance have any experience with Node.js? Running a JS script is usually done in two steps:
1. npm install
2. node my_script.js
First one downloads and installs all dependencies listed in a package.json file into a node_modules local subdir. This is equivalent to creating a pip venv, activating the venv, and running pip install against a requirements.txt file.
Second one runs the interpreter with the script file and against the locally downloaded dependencies.
I.e. the local env dirs in Node.js are not a suggestion that you need to learn about, make choices around, and actually use; they are just how the tool works by default, which makes the experience of using Node.js with npm far better and less confusing than the default experience of using Python with pip.
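Side by side, assuming a requirements.txt exists, the rough equivalent is:

    # Node: dependencies land in ./node_modules, no extra choices to make
    npm install
    node my_script.js

    # Python: the isolation step is explicit and opt-in
    python3 -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt
    python my_script.py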
I have years of painful experience with node.js, yes. The critical difference between node and python is that there are no system-provided scripts on my system which have `#!/usr/bin/node` as a first line.
There are a load of scripts with `#!/usr/bin/python` (or similar) in `/bin`, which means package resolution can't look first in a local subdir otherwise all bets are off. Again: your system will break depending on which directory you run bits of it from. The system-provided process would be loading dependencies that it was not tested against. In case this is unclear, you do not want that to happen. It would be bad. On my system I can see python scripts involved in driver management. I do not want them to do unexpected things. It would be bad.
Python package management is a mess in lots of ways, but this particular choice isn't one of them.
> which means package resolution can't look first in a local subdir
Sure it can, it's just a matter of examining where it's being run from.
I wrote python-wool to load packages from a venv if it finds one.
Yep. That comes under the category of "not default". Although now you've pointed it out I'll probably be using it.
Python is a scripting language. I shouldn’t need to faff about with environments if I want to run my script on another machine.
Encountered this too. So annoying.
I don't know if any API does this, but I often wished that APIs I used could mark up just a tiny, tiny subset as @FutureSafe, like the opposite of tagging deprecated code, so that for smaller projects you could stick to only or mostly those parts and know that you can come back after 2 years or 20 years and things still work. Maybe throw in a compiler flag to verify that nothing not-@FutureSafe is used by accident. Sometimes you just want to write something small, once, and not have to actively maintain it forever. Outside of shell-scripts or retro-platform code you can barely write Hello World for any target today and feel confident that it will still run six months from now.
Libraries in the project fixes this whole issue for C/C++. As for compiler issues, just run it with the same compiler. It really shouldn't take more than 20 mins of setup.
> Libraries in the project fixes this whole issue for C/C++.
Yeah, make sure no-one can ever fix your security vulnerabilities.
> As for compiler issues, just run it with the same compiler.
And when the same compiler doesn't exist for your new machine?
Freezing everything makes things easier in the short term, but much harder in the long term.
This is not even JS specific. All of Python / Ruby / other changing runtimes will require some upkeep. Even C recently needs some attention because clang updated default errors.
[deleted]
Even some of my Rust projects end up in this state, where updating one library ends up with needing to update interacting libraries.
That sounds pretty frustrating.
When I find my shell scripts from 20+ years ago, they still just run as intended.
[deleted]
It's also two hours that would have been completely avoided if the author were familiar enough with Node to know to pin the version and not try to install 4 years of updates in one shot.
Most who are here saying that X, Y, or Z ecosystem "compiles and runs" fine after 4 years are talking about the time it takes to resume an old project in a language they're very familiar with running the same dependency versions, not the time it takes to version bump a project on a language that you don't know well without actually having it running first on the old version.
I can open my 4-year-old Node projects and run them just fine, but that's because I use the tools that the ecosystem provides for ensuring that I can do so (nvm, .nvmrc, engines field in package.json).
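For anyone unfamiliar, a minimal setup looks something like this (version numbers are just examples):

    # .nvmrc -- `nvm use` / `nvm install` pick this up automatically
    20.11.0

    # package.json excerpt -- npm warns on a mismatch, and errors if engine-strict=true is set in .npmrc
    {
      "engines": { "node": ">=20 <21" }
    }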
The author didn't update all dependencies, they just tried running it on a newer version of Node itself. That is definitely a use case included when most people talk about an ecosystem compiling and running fine after several years.
In some ecosystems, yes, backwards compatibility is a given, but not in most. Python versions behave in much the same way as Node, with you having to make sure you're using the same Python version this time as last time in order to be able to install the same dependency versions. Java has been better in recent years, but Java 8->9 can take several hours to get working on even a small project.
Pinning versions should’ve been the default, then.
> Two hours of work after 4 years sounds ... perfectly acceptable?
Does it, though? Node wasn't exactly new 4 years ago, and plenty of other languages would offer a better experience for even older code -- Java, C, C++ to name a few.
> Java
50% of Java developers are still regularly working in Java 8 [0], which is the same solution that the author could have arrived at immediately—when starting up an old project after 4 years, use the same version you ran last time, don't try to update before you even have the thing running.
> C, C++
Not my experience, but maybe it depends on your operating system and distro? In my experience sorting through the C libs and versions that you need to install on your system to build a new project can easily take a few hours.
In exchange for needing to run C? How many hours would it take to build a Node app equivalent in C, I wonder.
0 would be fine. I'd take 0. This could all have been avoided if the interpreter version had been recorded by default somewhere. That's all this needed.
Other ecosystems usually do not have problems to the extent the author had.
I am deep in the Python ecosystem, and I love Python, but I have to admit that Python has the same issue. Reviving a medium-size project after 4 or more years usually means I have to catch up on a lot of new surprising deprecations. That's not because there's anything wrong with Python; it's more of an economic issue: the authors of active libraries have little or no economic incentive to support old, deprecated versions, so they just don't. That's life in the modern world. It is a deep problem that should theoretically affect every large software ecosystem because very few library authors can predict the future with great accuracy, and very few open source library authors have any significant incentive to support old ideas.
> That's life in the modern world. It is a deep problem that should theoretically affect every large software ecosystem because very few library authors can predict the future with great accuracy, and very few open source library authors have any significant incentive to support old ideas.
I disagree. This is an easy problem to avoid with minimal due diligence, people just choose convenience and make unnecessary tradeoffs.
* Use the standard library (ironically not available for Node projects). It will be built with better backwards compatibility almost every time. What deprecations do occur will likely be VERY WELL documented with much quicker adaptions.
* Limit third party dependencies. Do you really need an ORM for your apps 40 sql queries? How long would it take you to scaffold it with GenerativeAI then make it production-worthy without the ORM? 1 hour? 5 hours? 20 hours?
* Pick technologies with better track records. Maybe don't use Beta software like Swift Data for your iOS App. Maybe choose golang for your API even though it'll take a little bit longer to build it.
And this is how you end up rewriting the world: spending more time re-implementing dozens of existing libraries to avoid adding them as dependencies, and less time working on the problem you're actually trying to solve, because you're fixing the same dozen bugs the first person already went to the trouble of fixing for you, when you could simply have used their library instead of eschewing it and having to learn everything they had already learned for you. Often the problem space is deeper than you could have known before getting into the weeds, and hopefully you don't get bitten by sunk cost and instead do yourself a favor and just use a library, rather than continuing to solve problems that aren't related to what you set out to do.
There's a balance to be struck between LeftPad scenarios and "Now there are 37 competing libraries".
Exactly. The right thing to do is study each dependency and decide whether the reward of having the problem solved quickly is worth the many risks of adding dependencies.
I'll acknowledge here that there seems to be a significant difference between Python projects and Node projects: in my experience, a small Python project has a handful of dependencies and maybe a dozen sub-dependencies, while a small Node project usually has a handful of dependencies and a few hundred sub-dependencies. That's where Python's "batteries included" motto does seem to help.
> There's a balance to be struck between LeftPad scenarios and "Now there are 37 competing libraries".
I think we're actually in agreement. My assertion is that for projects which want to avoid constant maintenance, particularly small projects, you can make architectural decisions some of which could significantly improve the maintenance outcome. Of course there are trade-offs to those, and if you make the wrong architectural decisions it can cause more harm than good.
Maybe I'm glib for calling it "easy", but for many leftpad scenarios it really is a "holy crap, why did you think that was ok" scenario in my experience. Let's avoid those scenarios when we can.
> How long would it take you to scaffold it with GenerativeAI then make it production-worthy without the ORM?
Having a machine do codegen to map your queries to objects is still an ORM, except now it's nondeterministic and not updateable.
(mind you, I come from C# where you simply use LINQ+EF without worry, or occasionally Dapper for smaller cases)
> Use the standard library (ironically not available for Node projects)
???
Node.js has an extensive stdlib [1].
Contrast that with Python which practically requires a 3rd party HTTP client.
> That's not because there's anything wrong with Python
It's absolutely because there's something wrong with Python, the package management, and also the type safety. JVM languages haven't had these problems for 20+ years.
> I am deep in the Python ecosystem, and I love Python, but I have to admit that Python has the same issue.
The same problem in Python is much easier now because you can ask the uv resolver to limit itself to some earlier point in time.
You can do `uv pip install --editable . --exclude-newer=2022-01-01` and you will end up with a resolution from two years ago. Since uv can also install older python versions automatically you can easily bisect you to a newer point.
Will uv even be around in 4+ years? No idea.
I don't know either, but think even if it's not, whatever will replace it, will at least have to achieve feature parity.
Maybe they can try to get the node version into the package-lock tomorrow? This seems like an opportunity to improve the ecosystem, rather than a biting critique.
Or, instead of responding to sunk costs by getting sunk deeper into the muck, just cut your losses, ditch Node and its proprietary/non-standard APIs and unstable featureset, and use a standard runtime.
The author of the blog post is trying to run a static site generator. A static site generator doesn't need to be able to do anything that Node provides that can't be done with the World Wide Wruntime (which they're already going to use to verify the correctness of the SSG output). So use that runtime and tools that target it, not Node.
Which is fine until your host doesn’t support older versions of node.
I just got burned by an old js (vue 2) project. I ended up re writing it using good old ssr in Django with htmx and alpine when necessary. Now it’ll run until the end of time. It doesn’t even have a build step.
I sympathize with you, I had one too. Luckily it was small.
It seems luck of the draw. My old React projects (old as in 2018) still work great with class components. I guess the Vue guy did say he would be more revolutionary, when he launched it.
Exactly, 2 hours after 4 years is perfectly fine. And to all those saying "well my C project did it without a problem!": yeah, that's why the entire world is mostly using C for large, modern, complex and ever-changing applications, right? Most of the C stuff people claim compiles fine after so long has no external dependencies and consists of really small, simple projects...
This whole thread just reeks of JS hate. I'm not perfectly happy with the language either, but nor am I perfectly happy with any other. If my employer wanted me to use C# or Go or whatever instead I'd be perfectly okay with that as well.
> Two hours of work after 4 years sounds ... perfectly acceptable?
It's really not. I'm in the same situation with a Go + React project I haven't really touched in 3 years. The Go part is just `go build`, it still works after 3 years and I have a build in a fraction of a second. The React project doesn't build anymore at all. I used Parcel 2 at the time, and it turns out it's incompatible with a Mac M1 for some reason, and it's hard to update in my case. I also used Antdesign and some components (icons specifically) apparently disappeared. It should not be so hard.
I feel like you've just proffered a second success story. It can get worse. I've worked on software whose "dusting off" involved swapping the entire dep tree because the remote that Bower was using no longer existed, or manually compiling pinned versions of dependencies that Homebrew no longer offered. Sometimes it took days to get everything working.
Go would be much worse than Node.js in most cases since dependencies directly reference GitHub projects which could have been deleted whereas NPM has a cache and immutability guarantee which makes old NPM modules available and unchanged.
You rely way less on dependencies when writing Go code though. That’s in big parts why it doesn’t break nearly as much (or at all).
Go also has the goproxy
It took two hours just to get the project running as it was 4 years ago. Wait until you see how much time it will take to upgrade everything to new versions.
And dare I say this is the lucky case. I had problems reactivating an older project because some dependencies were not version-locked and newer versions of those were incompatible with other, versioned ones. It took forever to figure out the right versions. And I think I even had situations where the old version wasn't available anymore (perhaps because of a Linux dependency that was no longer available on my newer Linux system).
It took 2 hours to realize that a project built for a specific version of Node should be run with that version of Node. And even that was self-inflicted, since the author didn't vet dependencies and used something built as a C++ Node addon instead of actual JS (my bet is it was to have a slightly easier time writing CSS).
And even though it builds, there’s no guarantee it’ll work properly.
Also, Gatsby has dependencies that aren’t even Node. I have had it break too.
An anecdote — I recently upgraded an old NodeJS project written using v14 to 22. It used an old version of Svelte for the UI. It wasn’t too bad, took about 90 minutes to upgrade a bunch of Svelte 3 components to Svelte 5, with some assistance from Copilot.
This was with zero awareness of Svelte 5 (I only knew it was released — I wanted to upgrade because Node 14 is EOL now and I’d noticed some npm audit warnings.) So I feel I also learned something in the process.
I would love the author to test an old Java/Maven project. Node is a paradise compared to that stack.
Why do you think so? I have 10+ years old Java/Maven projects that build and run fine.
Maybe you are fortunate, or I'm not, because in my projects I have run into missing or changed Maven repository URLs and changed dependency package names or URLs. And it was hell to fix.
The only problems I've run into are related to certain package repos that went offline. So you have to track down suitable versions of certain packages on other repos.
OTOH with Node I always find myself in dependency hell with dealing with old projects.
In my experience this only happens if the project does not have a lock file and does not specify a Node version (two good practices that should be followed).
I'm talking about upgrading dependencies. Something that's rarely easy in a JS/TS project with lots of dependencies.
Each to their own; I don't find it acceptable. While other platforms also need updates, it's not to the same level.
Yep, this could have been sorted by one line in a .tool-versions file and using mise or asdf.
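Something like this (the version is just an example), which asdf or mise will pick up when you cd into the project:

    # .tool-versions
    nodejs 14.21.3

    # then, to fetch that exact runtime:
    mise install   # or: asdf install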
Exactly, you would have to compare it to other package managers to make it fair.
Npm at this point is probably the most used and most worked on package manager in history. If it is still one of the worst ones that is actually kind of interesting philosophically.
I agree. It is not weird that old code can break if you try to run it through a new env / VM / framework / compiler.
Locking the env version is important.
Double points for using experimental / POC technology like Gatsby or Next.js. They are expected to burn and fail.
> Two hours of work after 4 years sounds ... perfectly acceptable?
Perfectly acceptable? Perfectly? Really? I have 10-year-old C and Go projects that build and run fine as if nothing has changed. I can upgrade the dependencies if I want to, but that's on me. The projects themselves have no problem building and running.
Just to try it out, I took one of my old C projects written in 2010, last code update in 2012. It's a machine learning utility to train maximum entropy rankers. I was a bit worried because it used CMake (rather than a vanilla Makefile). It configured and built without any issues (and super-fast).
How many external dependencies/libraries does it use? Ie remote dependencies not stored on your machine?
How complex is the project? I know ML utilities like that can be as simple as 50-100 lines.
Lol, Go was notorious for breaking with its "pull from main"/lack of lockfile approach.
Did you read it? The author did not actually resolve the issue, only figured out that it should build with older Node version.
You’re absolutely right. My rational brain agrees and chalks it up to poor project management. However… emotions run high when you have zero idea why something isn’t working and the process of elimination is pretty taxing. So the point for me is venting / maybe someone will read this and remember to write their node version down!
> Two hours of my life gone, just to pick up where I left off.
If I had only wasted two hours every time I had to use npm for some reason I'd be significantly ahead of where I am now.
Doesn't that apply to any it-issues you don't know how to solve?
no, most of the time when a README says:
> run command xyz
I run xyz and I'm on to the next step in an average of ten or fifteen minutes (because yeah, stuff still goes wrong some times).
But if the instruction is to invoke npm, there's always some crazy side quest involved. It doesn't get any better with time because next time around it's a totally different side quest. I can't even offer any criticism towards fixing it, it just sucks for unique new reason every time.
The worst part isn't just that it's nearly impossible to run/update an outdated JS project, but that this process will repeat itself ad infinitum.
On the flip side, anything that uses vanilla JS without a build will most likely run just fine, probably till the end of human civilization.
I truly believe some flavour of "Javascript Classic" (some future state of JS before some big shift in syntax/mass migration to something else), C and x86 instructions will follow humanity for the rest of time. There will be javascript somewhere aboard the interstellar spaceships of the future, and we will still complain about it.
I think it was 'A Deepness In The Sky' that posited so many layers of legacy underlying the starship control systems of the era that one of the most crucial positions on a ship was that of 'Programmer-Archeologist.'
That sounds great. Added to my list.
I adore the idea of somewhere on the Star Trek Enterprise, underneath the isolinear chips, the EPS conduits, the warp containment field control mechanisms, somewhere, if you dig far enough, override enough, you can get an LCARS screen that is all black, with a small `$` in the upper left and a blinking text cursor, waiting for a command.
I am already a programmer archeologist, and it's one of the most rewarding parts of my job. Though my most recent project is regrettably well written and maintained. :)
I'm going to try and have this attitude.
Well, even Fortran is still around us, in some LAPACK code in NumPy and in a lot of the stuff behind SciPy; so, a lot closer than a lot of people can imagine.
Basically a lot of AI depends on a bunch of absurdly optimized numeric libraries written in Fortran.
Fortran is well on the way to becoming a centenarian programming language at 74 years of age.
My grandchildren will live to see Vernor Vinge's programmer-archeologists troubleshooting PHP issues on the Wordpress install responsible for life-support around Alpha Centauri.
Just bobble some programmers now.
I sometimes think about one of the Star Trek episodes where the ship was getting attacked by a "SQL injection", and I think that's pretty realistic
Yeah, starting to believe the hacking scene in the Matrix where the machine city was still running on IPv4 wasn't a blunder but foresight.
There will also be someone playing Tetris, Doom and Final Fantasy VI on their neural interface, long after all modern games have been lost to time (and DRM).
JavaScript will be killed off by WebAssembly.
Zombie JavaScript will be reduced to being glue code and then not even that.
The Webassembly dev experience is awful, and it needs to be glued together with JavaScript anyway. 99% of web code doesn't care about the performance difference, and anything GUI is wasting more time jumping back and forth to JavaScript to save any time at all.
> The Webassembly dev experience is awful
It really isn't. Wasm is just a compilation target now.
Network effects are very strong for languages.
Better query languages than SQL could exist, but there's so much existing code and expertise out there that it's not worth the effort. Better backend languages than Java can & do exist but don't have the same enterprise popularity.
Developers, projects, and companies have immense incentives to target the most popular programming language.
JavaScript's one advantage was that it was the privileged language in the browser. It has lost that now.
JavaScript has entered its Walking Dead phase. It will gradually be displaced by all languages compiling to WebAssembly.
JavaScript’s main advantage is being commonly familiar to most software engineers.
The English language similarly lost its position as the preeminent imperial language a long time ago, so too with Latin & Rome. It takes a long time for a popular language to die because everyone wants to speak what everybody else speaks.
WASM has a lot of shortcomings. You can't even update the DOM without doing a worker dance.
Only after they make DOM access easier. See previous discussion here:
People have been saying JavaScript will be killed off by WebAssembly for about 5 years now.
The difference lately is the number of tools that are now in place for WebAssembly development and the new extensions to WebAssembly (WasmGC, Memory64, etc.).
Despite 28 years of effort at optimization, JavaScript is outperformed by WebAssembly. There's not much coming back from that:
YMMV but I had a 4 year old project whose only dependencies were socketio and express and it booted right up. So stick to stable, mature projects and you're likely to be fine.
Just watch out because socketio must be version matched for client-server or you will get the most annoying errors and state inconsistencies in the world. That's a scary production update let me tell you. Version 2.3.0 still scares me to this day after trying to upgrade that without production downtime.
> it's nearly impossible to run/update an outdated JS project
You corrected yourself, but it's worth emphasizing here: a _NodeJS_ project, you mean.
Unless you're using non-standard APIs, stuff written to run in the browser generally keeps working just as well as it did before, no matter whether it was written 2 years ago or 10.
What if your project is old enough to predate the modern "just use vanilla js, it's fine"? :tableflip:
j/k - I'm slowly removing all the Zepto code I have and it's usually a relatively quick search&replace.
This nearly impossible task took the author two hours
He gave up and went back to an old version. So we don't know if it was possible or not.
Or until Google decides to change things to be more standards compliant, regardless of the collateral damage
I call this phenomenon "node rot". Judging by the comments here, it seems like a universal experience.
My favorite is the way that Python projects rot. Not only does Python's setuptools give you all the fun that node-gyp does, but the common practice of versioning packages with packagename>=1.25.5 means you're almost guaranteed breakage, as pip installs newer versions of packages than what the project was built with.
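Pinning exact versions (or committing a full `pip freeze`) at least keeps pip from drifting; a minimal sketch, with the package names and versions here purely illustrative:
```
# loose spec: quietly breaks over time as new releases land
#   requests>=2.20
# pinned spec: reproducible later, as long as those releases stay published
requests==2.25.1
urllib3==1.26.4
```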
And projects specifying "requires Python 3.X+" instead of a range from version X to Y are also a major culprit I often encounter.
Most of the time it will not work with the newest shiny Python, which I only notice after already installing it and then searching the GitHub issues.
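Declaring an upper bound as well as a lower one at least turns that into an explicit install-time error; a minimal sketch for a pyproject.toml-based project (the bounds are illustrative):
```
[project]
name = "example-project"
requires-python = ">=3.8,<3.12"
```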
That happens to me all the time. It helped cement my habit of binding the python version to the project with direnv and a flake.nix so I end up switching to the right version when I cd to the project dir.
Oh, one of the worst forms of torture is definitely trying to get a random Python AI project from GitHub running locally. There's almost always a conflict between the versions of Python, CUDA, PyTorch, and a hodgepodge of pip and conda packages. Publishing a requirements.txt is the bare minimum everybody usually does, but that's usually not enough to reconstruct the environment. The ecosystem should just standardize on declaratively prebuilt container environments or something.
Granted, my experience is mostly from the GPT-2 era, so I'm not sure if it's still this painful.
Don’t know if this would help your case or not, but jart’s llamafile seems like it would be useful
I just today tried reviving an old v12.4 node project and node-gyp is trying to use python2 which I don't even have on my Macbook anymore.
That Python 2 crap hit me as well when working on an older project. It used node-sass which depended on node-gyp. Dealing with upgrading that was a pain because I had to upgrade so much more to get versions of everything that played nice together.
What really pissed me off about it was that Python 2 was already known to be nearing EOL when our project was started, so node-gyp should have been upgraded to work with Python 3 by then. And even more annoying was that node-gyp already had Node to run on, so why in the world was it coded to depend on Python at all!?
did you experience this breakage using poetry too?
This will always be an issue for the node community - it’s endemic to the JavaScript shipping / speed culture and the package management philosophy.
Go is much, much better on these terms, although not perfect.
I’d venture a guess that Perl 5 is outstanding here, although it’s been a few years since I tried to run an old Perl project. CPAN was dog slow, but other than that, everything worked first try.
I’d also bet Tcl is nearly perfect on the ‘try this 10 year old repo’ test
CPAN.pm is not the fastest, no, though it generally spends most of its time running each distribution's tests before installing it, which, while it does have a certain "start the install and go for lunch" quality to it, is an excellent canary for whether something's changed underneath you *before* you end up having to go spelunking in your own code.
App::cpanminus (cpanm) is noticeably lighter, App::cpm (cpm) does parallel builds and skips tests by default.
An approach I've become quite fond of is using cpm to install fast into the local::lib I'm actually going to use, then creating a scratch setup in /home/tmp or similar and running cpanm in that under tmux/abduco/etc. to do a second install that *does* run the tests so I have those results to refer to later but don't have to wait for them right now.
(if I ever write a cpan client of my own, it's going to have a mode where it does a cpm-like install process and then backgrounds a test running process that logs somewhere well known so this approach becomes a single command, but I keep getting distracted by other projects ;)
Clojure too, by all accounts. I'd say Common Lisp but they're in the weird position of code itself being rampantly portable across time but the ecosystem around it being astonishingly immature.
Things have improved a lot with the introduction of Quicklisp, but I'd have to agree when compared to others.
CL is still one of the nicest languages there is, and the only language that skirts the line between being some combination of dynamic and interpreted yet typed and compiled.
It is showing its age though, particularly around the edges like what you're saying.
Quicklisp is a start, and I like that it's a Debian-style distribution because it can at least move towards making the assertion that everything in the distribution is mutually compatible. The problem is that the underlying ecosystem has no culture of saying "version X of library A is only compatible with versions Y+ of library B". You can put that info in the asdf system definition but I don't see many examples of it.
The other problem is rate of updates, and that's a symptom of it basically being on one person's shoulders to keep it ticking over. I can't readily think of another major language ecosystem with that characteristic. It just seems really fragmented.
Quicklisp is merely a registry for getting libraries, not shipping code. Download what you want and use asdf.
Yes, that's what makes it immature. There doesn't seem to be anything in the ecosystem to handle version resolution. Qlot is a start towards that in that you can at least specify alternatives to get yourself out of a hole, but you still ended up in the hole in the first place.
I think it’s a good thing. I don’t want npm for Common Lisp.
Which bit is it that you object to?
Edit to add: this wasn't intended as a gotcha question, so apologies if it came across as one. I have issues with a lot of details about how npm works and the ecosystem it supports. I think it's possible to avoid them, and aim for something more like a bundler or a cargo, but again there are issues there (certainly in the former's case, I have less experience of the latter). Getting to a good answer that works for CL means understanding both the problem space and the known solutions elsewhere.
It might be that "a better quicklisp" is enough?
Sure, I didn’t take that negatively.
Let me start with facts:
- npm actually downloads multiple copies of each library, when needed to satisfy conflicting version requirements.
- this is only possible due to runtime features of JavaScript. In most languages, like C, this causes symbol collisions.
- I think this is a problem in Common Lisp too due to packages being global. Maybe there is a fancy way to rebuild packages at load time.
- this is why the Debian style release makes sense. Either everything loads together, or not.
Opinions:
- I want to know all my dependencies. I treat them as my own source, so a downloaded tarball is close to my mental model.
- For C projects I usually have a makefile with curl commands tied to exact URLs. If I want to update, I manually change the URL.
- quicklisp already has a nice way to make an isolated folder just containing your code and its dependencies to be loaded with asdf. It gets out of the way once you have downloaded your libraries.
Yes, that "feature" of npm isn't something you'll find elsewhere. It's not inherent to the problem of version resolution. Just about anywhere else you'll just get an error message that the version resolution isn't possible, if there's no available combination to satisfy all the requirements.
That one design attribute of npm, probably more than any other, feels like they did it because they could, not because it was a particularly good idea.
Go’s minimum version selection is the way and I don’t understand why other ecosystems haven’t adopted it. You’re able to compile an old project with all the library dependencies it had at the time it was released. It might have security issues, but at least you start with a version that works and then can go about upgrading it.
It also helps that if some library dependency generated Go code using a tool, the Go source code is checked in and you don’t have to run their tool.
Getting the exact dependencies it had at release is a solved problem in Node and most other languages with lock files too.
It's just no guarantee that those old versions work on the new system, or with the outside world as it exists by time of installation - which can be as true for Go as any other language. If the XYZ service API client still gets you version 1.2.37, that's not actually any help if 1.2.37 calls endpoints that the XYZ service has removed. Or a cgo package that binds to a version of OpenSSL that is no longer installed on your system, etc.
This is why I say it's a cultural problem, not a technical problem. In Go land, changing API calls in minor versions is pretty much a sin. At least it's something you'd do .. carefully, probably with apologies. In node, it's extremely routine to re-pin to newer modules without worry.
Some time ago, I wanted to update Arch, on a server running some python project I had inherited. Long story short, it relied on something that relied on something that etc., and then it turned out certain components that were needed for the upgrade process had been taken offline. Now the system can’t be changed, unless there’s significant work done to the code, and that’s too expensive. It runs on request in a container now, while it lasts.
back in the day you were supposed to check in your compiler into version control (not the lockfile, the whole distribution).
I used to think that people emailing screenshots of corporate dashboards were idiots. I now think that's actually genius - a frozen in time view which you can't regenerate but will be available until the end of time if you need it. (Hello, Exchange admins!)
My hot take is that lock files and nested dependencies induce fragility. If packages were required to work with wide range of dependencies then that would force the ecosystem to build the packages in more robust way. Basically I think the dependency trees built with modern package managers in a sense over-constrain the environment, making it all sorts of difficult to work with.
On the other hand, the other extreme induces stuff like autoconf which is not that great either. Trying to have your code be compatible with absolutely everything is probably not good, although arguably platforms these days are generally much more stable and consistent than they were in the heydays of autoconf.
I truly think it's just because the engineers that started working with node were ... young. They wanted to rapidly iterate, and so crufty old habits like this weren't what they wanted or felt they needed.
What's been interesting is watching these devs age 10 years, and still mostly decide it's better to start new frameworks rather than treat legacy code as an asset. That feels to me like a generational shift. And I'm not shaking my cane and saying they're wrong -- a modern LLM can parse an API document and get you 95% of the way to your goal most of the time pretty quickly -- but I propose it's truly a cultural difference, and I suspect it won't wash out as people age, just create different benefits and costs.
I've had a fair amount of trouble with Perl/cpan simply because of the sheer number of XS (compiled C extension) modules in the ecosystem. For even a medium sized perl project that e.g. talks to databases or whatnot, building it after a long time requires you to spend tedious hours getting the right development headers/libraries for the compiled components, fussing with compiler flags, dealing with C ABI symbols that were deprecated in the interim, etc.
To be fair, Python and Ruby also have this problem (for newer Pythons, popular extension modules at recent versions are more likely to Just Work due to wheels, but if you're building old code for the first time in 3+ years, all the old problems come back with a vengeance). It's more of a "scripting language that got popular enough that ordinary projects have a deep tree of transitives, many of which are compiled on-site" issue than a Perl specific problem.
You're talking about what's wrong with the NPM ecosystem, not JS.
Yeah I'd expect 20yo Perl5 stuff to work without issues.
A few weeks ago I was experimenting with a sound generation dsl/runtime called Csound and even most 30yo sources were working as long as they didn't use some obsolete UI.
It’s the same with R. The only thing preventing many ancient packages from running under new versions of R and vice-versa is the fact that the package author simply set the minimum version to whatever they happened to be using at the time.
I recently migrated a project from Node.js 8 (!) to Node.js 14 (hopefully just the beginning), and I can relate to this post.
In the JS ecosystem, I'm aware that Meteor is one major framework that takes backwards-compatibility seriously. Updating a project on an ancient version to a less-ancient version usually is not too hard. They try to keep APIs the same and introduce compatibility packages where possible.
Meteor 2.16 to Meteor 3 introduced major breaking changes due to an underlying technical issue that had no workaround. They had to refactor the whole project from using Fibers-based concurrency to typical async/await.
node-gyp in general has also been a source of issues in the past for me as well.
More recently, ESLint changed their configuration file format and all existing tutorials suddenly became outdated.
I firmly believe the ecosystem does not have to be like this, and we would save a lot of man-hours by being more committed to API stability where possible.
> Meteor 2.16 to Meteor 3 introduced major breaking changes
That's when I picked up the Node/React ecosystem...
> node-gyp errors > downgrade to 12.2
This is what I did until Vercel decided to not support Node 12 anymore...
After node 10, things started to stabilize. These days a node upgrade is a tiny commit just changing the version number in the Dockerfile and package.json
I've actually had a node project go bad in a mere 4 months. It must be a new record. That was about 4-5 years ago though.
Hopefully the ecosystem has improved since then, but it was nearly impossible to get going.
Some packages had been changed and their version numbers overwritten with incompatible packages, and the conflicts were plentiful.
One of the things I'm intrigued by is that JS people, and the other couple of ecosystems where this is a big problem, go out to learn another language (as a good T-shaped developer does), and then start posting frantic questions to the new language's communities about how this popular library hasn't had a commit in six weeks, is it dead, oh my gosh wtf aaaaaaaaaaa.
It's OK. Not every language ecosystem is so busted that you can reliably expect a project not to work if someone isn't staring at it weekly and building it over and over again just in case. Now, it's always a risk, sure, no language anywhere is immune to the issue [1], but there's plenty of languages where you can encounter things from 5 years ago and your default presumption is that it's probably still working as well now as it did then. It may be wrong, but it's an OK default presumption.
[1]: Well... no language in common use anyhow. There's some really fringe stuff that uses what is basically content-based references for code dependencies, but I'm not aware of anything that I'd call "production quality" that even remotely looks like that, and is immune to someone just plain making an error with the semantic versioning or whatever.
> frantic questions to the new language's communities about how this popular library hasn't had a commit in six weeks
Lol, my perspective is almost the opposite. If it's got a lot of commits in the last six weeks, I need to look for something that's stable. Unless there's a good reason for so many commits; I feel like that many commits means it's in active development, which implies the requirements and interfaces aren't yet determined and who wants to rely on that?
These JS developers would probably shiver at seeing many Common Lisp repos with a last commit like 12 years ago and still working like a charm.
I’m curious, how do you measure the pulse of a project that old? Do people still talk about it? Or that not even necessary — use it until it breaks and otherwise don’t think about it?
Why do you want your building materials to have a pulse?
Ideally, in adopting dependencies, you should be looking for a mature utility whose design was clear and implementation is complete.
If it's open source, you should be able to read and understand the code yourself, and you should make an earnest effort to do so, in case it has faults you wouldn't usually allow in your own code and in case you need to fork it at some point.
This lets you build well-designed, stable, maintainable, clear things yourself.
The alternative, building your project on a random collection of "living" projects undergoing active development, is how you banish yourself to perpetual maintenance, build failures and CVE warnings that have nothing to do with your work, surprise regressions when you update your referenced version (you are, at least, pinning your versions??), etc
Something like an HTTP/1.1 client is something you might expect would be a pretty stable thing that doesn't need too many updates, right?
But I would not assume that an HTTP client that has been untouched in 12 years supports SNI, for example, which means that it might actually be totally useless for a lot of modern sites (certainly Android did not support SNI 12 years ago).
You're going to put it behind nginx anyways, right? So why does it even matter?
Client, not server.
If it has an issue tracker, you can look in there for things that look like real issues and are unaddressed.
If there's no issue tracker, you can YOLO and try it and see if it works, or you can look around at the code and see if it looks reasonable.
Even if there are unaddressed issues, you can always use it and fix it when it breaks. If it's reasonable enough, it's a good start anyway. And at least my assumption with open source is I'm going to be fixing it when it breaks, so lack of a pulse is better than churn.
Maybe "pulse" could be transitive? Like, if a project doesn't have many recent commits, but many projects using it have recent commits.
[deleted]
Node is bad but the worst I have seen is Android
How about node on android?
Delete this comment right now, don't give them ideas.
Too late, we already have react native
And Nativescript.
It's a double whammy
I would expect most Java projects from 20 years ago to compile and run with zero issues.
Absolutely not. Not on the client side anyway.
I know of one application by a large multinational that requires java in the browser to run. Almost impossible to run now because of security restrictions.
well java on the desktop and java in the browser are two entirely different beasts. the problem here is not java but the changes that have been made in the browser.
We do have some very old and likely lost all sources "client apps" that are a single JAR and date from around 2003-2004, written in Swing. They still work.
Of course when they stop working they will be phased out, but we have been expecting their death for years now and not happening yet.
The ecosystem has not improved since then.
The problem, then, was your research before adopting the packages. So much bad engineering and architectural planning gets blamed on the tools rather than the humans using them.
Wow, that’s honestly impressive.
If there were an option to guarantee that versions would remain available for X years (or even months?), that would greatly help the stability of projects.
I would heavily recommend avoiding NodeJS packages that depend on node-gyp. Node-gyp powered dependencies are very seldom worth the hassle.
If you must depend on node-gyp, perhaps use dev containers so at least every developer in your team can work most of the time.
I don't even know what node-gyp is, but it appears regularly enough in error messages for me to know it causes problems.
I don't even develop against Node, it has just crept into our front-end build toolchain.
It's the JS equivalent of allowing native bindings (like JNI in Java).
So I'm pretty uninformed about the guts of node-gyp, and why it's used, but if people need to bring in dependencies from outside JavaScript... could WASM be a good fit there? You could store the binaries instead, and ship those... and in theory (correct me if I'm wrong) that shouldn't be much of a security issue due to the security model of WASM modules... or at least no worse than the risk of running arbitrary build commands on your machine from a random node package.
In principle, yes. In practice, the problem is that getting some random native library or tool to compile with wasm as a target is not always easy. E.g. anything that relied on pthreads was out until fairly recently.
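The consuming side is genuinely simple, though; a minimal sketch of loading a prebuilt wasm file from Node at runtime, with no install-time toolchain (the add.wasm file and its exported add() are hypothetical):
```
const fs = require('fs');

async function main() {
  // load the precompiled module shipped with the package instead of building native code
  const bytes = fs.readFileSync('./add.wasm');
  const { instance } = await WebAssembly.instantiate(bytes);
  console.log(instance.exports.add(2, 3)); // call straight into the wasm export
}

main();
```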
In practice you're just kinda stuck with it because whatever NPM package you're using is using that under the hood. One of my projects depends on it because of Postgres DB bindings; there would be no easy way for me to get rid of it without either finding another binding (the one I use is the official one) or rebuilding it myself, which would just take too much time and effort for what it's worth
Pardon my ignorance, but wouldn’t that rule out most image processing packages that depend on (and often build during install) imagemagick as the backend? A long time ago I tried to avoid it in a project but really couldn’t find any decent node image processing package that wouldn’t at some point depend on it. Maybe I just didn’t look far enough?
Some of the most crucial packages that use node-gyp are bcrypt and argon2. Both are heavily needed for password hashing when implementing authentication, and while pure JS alternatives are available, they run terribly
i did mention "and while pure JS alternatives are available, they run terribly"
Slow is much faster than it not working at all. If this is a project that you might not touch for months or years, perhaps having fast bcrypt is not that important.
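And if the real goal is just "no node-gyp", Node's built-in crypto.scrypt needs no native npm module at all; a rough sketch (scrypt is a different KDF than bcrypt/argon2, so treat this as a swap, not a drop-in replacement):
```
const crypto = require('crypto');

// derive a salted key from the password and store both parts
function hashPassword(password) {
  const salt = crypto.randomBytes(16);
  const key = crypto.scryptSync(password, salt, 64);
  return `${salt.toString('hex')}:${key.toString('hex')}`;
}

// re-derive with the stored salt and compare in constant time
function verifyPassword(password, stored) {
  const [saltHex, keyHex] = stored.split(':');
  const key = crypto.scryptSync(password, Buffer.from(saltHex, 'hex'), 64);
  return crypto.timingSafeEqual(key, Buffer.from(keyHex, 'hex'));
}
```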
That would be a good argument to not implement authentication again and go with a solid authentication and authorisation software like Keycloak, Zitadel, or Ory Kratos.
if only integrating keycloak was simple eh?
If you are dealing with argon2 and bcrypt, I think you could manage some JWT hell.
You know, I ran into something similar recently with a static site engine (Zola). Was moving to a new host and figured I'd just copy and run the binary, only to have it fail due to linking OpenSSL. I had customized the internals years ago and stupidly never committed it anywhere, and attempting to build it fresh ran into issues with yanked crates.
Since it's just a binary though, I wound up grabbing the OpenSSL from the old box and patching the binary to just point to that instead. Thing runs fine after that.
This is all, of course, still totally stupid - but I did find myself thinking how much worse comparable events in JS have been for me over the years. What would have been easily an entire afternoon ended up taking 15 minutes - and a chunk of that was just double checking commands I'd long forgotten.
That is nothing... Try building your Android project after leaving it idle for a week. Or better yet, try building the React Native project you left for 2 days.
OMG I feel this in my soul. Try looking at one of the gradle files wrong in a kotlin multiplatform app with shared ui.
Hold on, you had to do binary surgery using an OpenSSL version from an old box you had? I salute the dedication.
Looks to me he just copied a shared library and changed the search path.
Also, if it was statically linked, he wouldn't have that one problem. (Could have others, but not that one.)
How exactly does one do that. Sounds exciting!
Not the OP but what sometimes works is as easy as:
```
ldd your-binary
```
on the old host, then copy everything it references into ./foo and start it like so on the new host: `LD_LIBRARY_PATH=./foo ./your-binary`. (may include typos, from memory)
Disclaimer: Also please don't do this with network-facing services, security applies, etc.pp. but it's a good trick to know.
I used it for something early this year, it was working then.
A sewing needle, a spare magnet, and a very steady hand.
I think you copy the library file and add it to you load path
patchelf
but don't forget to make sure your new path is fewer characters than the original one so you don't overwrite anything beyond the original path string
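(That length restriction mostly applies to hand-editing the binary; patchelf itself rewrites the ELF sections for you.) A minimal sketch, assuming the copied libraries live in a ./libs directory next to the binary:
```
# point the binary's library search path at the directory of copied .so files
patchelf --set-rpath '$ORIGIN/libs' ./your-binary

# or swap out a single dependency by name
patchelf --replace-needed libssl.so.1.0.0 libssl.so.1.1 ./your-binary
```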
What’s the issue with yanked crates? It should still build from your lockfile, even if it contains yanked crates.
Assuming you actually committed the lockfile...
Never underestimate the potential of past-you to have accidentally missed a tiny but essential step in a way that won't have made a noticeable difference at the time, yeah.
I think the other foot-gun here is that the official recommendation is that library crates not check-in their lockfile (I assume because that prevents downstream crates from freely taking updates to common dependencies).
This has the unfortunate side-effect that while downstream apps might still build, the library itself (and any examples in the library repo) may not compile after some time...
This is why Nix (with flakes), in a git repository, will refuse to use a lockfile that isn't being tracked by git.
Nix also breaks if you try to update it, and you'll eventually have to update for security issues.
I'm stuck with a defunct Nix project I can't update, because crane and fenix flakes made breaking changes, and nix is giving me incomprehensible errors. I've spent enough time googling the errors that I'd be quicker to start over with a nix-less VM.
Yes, that's an excellent idea, and will free up time to deal with the *other* stupid mistakes past-me made :D
All of the problems here ultimately came down to packages that used the native Node API. You don't need Python or C++ to run JavaScript.
Node is an active project. If you build against the native API and don't pin your version to avoid breaking changes between versions, this is what happens. In my experience, JS very rarely breaks between major Node versions, but almost every native package requires a new major update.
This isn't a Node specific problem. Go ahead and upgrade your Go or Python version.
I always try to remember to put the node version in my package.json - but I do agree that the dependency chain on node-gyp has been a blight on node packages for a while. Really wonder how that wart became such a critical tool used by so many packages.
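For anyone who hasn't used it, that's the `engines` field; npm only warns on a mismatch unless you opt in to enforcement (the version range here is illustrative):
```
{
  "engines": {
    "node": ">=14.17 <15"
  }
}
```
Adding `engine-strict=true` to the project's .npmrc turns the warning into an install error (Yarn classic enforces `engines` by default).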
node-gyp is a huge source of these issues for Node projects, especially older ones.
For those reading this who don't know much about node - node-gyp is how you pull in native code libraries to Node projects, typically for performance reasons. You get the same sorts of build issues with it that you can get whenever you start having binary, or source, dependencies, and you need the entire toolchain to be "Just Right(tm)".
I run into this issue with older Node projects on ARM Mac machines (Still!), but I run into similar issues with Python projects as well. Heck some days I still find older versions of native libraries that don't have working ARM builds for MacOS!
Node used to have a lot more native modules, in newer code you typically don't see as much of that, and accordingly this is much less of an issue now days.
> I always try to remember to put the node version in my package.json
This 100x over!
> For those reading this who don't know much about node
I would prefer to remain blissfully ignorant, thank you!
IMHO TypeScript is the best mainstream language to write code in right now. It is incredibly expressive and feature rich, and you can model in almost any paradigm you like. The ecosystem around it allows you to choose whatever blend of runtime vs compile time type safety you prefer. Lots of people just runtime type check at their endpoint boundaries, and use compile time for everything internal to a service, but again, the choice is yours.
The Node+Express backend ecosystem is also incredibly powerful. Node is lightweight, the most naïve code can handle a thousand RPS on the cheapest of machines, and you can get an entire server up and running with CORS+Auth+JSON endpoints in just 5 or 6 lines of code, and none of that code has any DI magic or XML configuration files.
JS/TS is horrible for numeric stuff, but it is great for everything else.
Why did you click on "The tragedy of running an old Node project" then
>Really wonder how that wart became such a critical tool used by so many packages.
The original dream for Node was that it would simply be a glue wrapper around libuv that allowed for easy packaging/sharing of modules written in C++. But everyone just started writing everything in JS, and the ecosystem ended up as a mish-mash of native/non-native. Ryan Dahl stated this was indeed his biggest mistake/regret with Node, thus we have Deno now.
> But everyone just started writing everything in JS, and the ecosystem ended up as a mish-mash of native/non-native.
Because the native written stuff breaks all the darn time and it creates cross-plat nightmares.
My stress levels are inversely proportional to how many native packages I have to try to get building within a project, be that project in Python, Java, or JS.
JS+Node runs on everything. Prepackaged C++ libraries always seem to be missing at least one target platform that I need!
The CPAN 'Alien' infrastructure is great for this, you have pseudo-modules that you can depend on that use vendor packages if available and build the damn thing for you if not.
It's considered ... rude ... in most cases to write a module that needs to build against a native library without also having an Alien dist to handle making sure said library is there by the time it's trying to build against it.
Opinions on perl as a *language* ... vary, let us say ... but I wish people who didn't like writing perl would at least look at how our infrastructure works and steal more of the good parts to make dealing with their preferred language less painful.
I'm a huge Zig fan! Thank you for making native programming fun again! Zig is the exception to native build systems being painful.
But even a great build system doesn't help when old native libraries don't support newer hardware or OSs. At some point the high level -> native abstractions break and then builds break. :(
What is the Deno solution though? (I assume it's not sharing modules written in C++?)
Deno's solution is coming out years later when JS is fast enough that there is no need to involve C++ for most applications.
Good old node-gyp. I have absolutely no idea what it even is but it has been giving me errors for what feels like a decade. Mostly via front end build stuff from various projects I have worked on
Same. One day I'll find out what it is.
If you want to know, it's a fork of Google's GYP, which is a C/C++ project/build-system generator. I.e. it's a bit similar to CMake: a tool to describe native code projects and what needs to be built in order to produce executables and DLLs.
It's a Python codebase, largely abandoned by Google. They used to use it for building Chrome.
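Concretely, a native addon describes what node-gyp should build in a binding.gyp file; the canonical hello-world looks roughly like this:
```
{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "addon.cc" ]
    }
  ]
}
```
node-gyp then generates and drives the platform build (make, MSBuild, or Xcode) from that description, which is where most of the wrong-Python / wrong-compiler errors come from.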
i forgot all about node-gyp. The only memories I do have of it are the errors and thinking about gimps.
I'm pretty sanguine about languages and frameworks, but I draw the line at node. I have seen so many horrors visited by dependencies, often to do just one thing where 2 or 3 lines of code would do the job anyway.
When I was managing teams, whatever the language, I would ban any new dependencies which I didn't personally agree with. A lack of control just creates a nightmare.
Was that kind of control well-received by your teams? Out of context, it sounds like it would be pretty rough to be an engineer on a team where your manager had sole control over what tools you could use - I suppose it might make sense for junior devs or a very small codebase, but I would caution against taking that stance in a team where you want to facilitate mutual trust
Provided the manager only rarely exercises the power, and is open to being persuaded not to, having somebody able to veto risky dependencies can be really quite useful.
Normally when I'm the one with that power we rapidly get to a general understanding of what's small enough that I (a) probably won't care (b) will take responsibility for tweaking the schedule to make time to get rid of it if I do.
And 'big' dependencies are generally best discussed amongst the entire team until consensus is reached before introducing one anyway.
Well back then there were fewer options, but the result was that completed products were easy to work with. Perhaps we live in different times.
Not defending node here (I had OP's experience almost verbatim), but I had a much worse experience with trying to compile PaulStretch (a C++ project). The dependencies were specified as a bunch of random third party URLs, half of which had gone offline. I ended up giving up after a few hours, and then finding a fork that Just Works.
This is one reason why whenever I build a new project, I build it inside of a Docker container.
That way, the project has just the dependencies it needs, and I know I can rebuild it at some point in the future and will be unlikely to run into problems when I do.
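A minimal sketch of what that looks like for a Node project (the tag, file names and entrypoint here are illustrative):
```
# pin an exact release rather than a floating tag like node:14 or node:latest
FROM node:14.21.3

WORKDIR /app

# install strictly from the committed lockfile so the tree matches what was tested
COPY package.json package-lock.json ./
RUN npm ci

COPY . .
CMD ["node", "index.js"]
```
For extra durability you can also pin the base image by digest (`@sha256:...`) and keep the built image around, since even exact tags can be rebuilt or removed upstream.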
This works as long as your Dockerfile is reasonably reproducible and does its best to lock dependencies. However, this approach has failed me a couple of times in the past. For example, I rebuilt a container some weeks later, in the meantime a new version of clang had been released that just so happened to break my build due to a bug.
I personally use Nix these days, but the complexity is too high for me to recommend it to everyone for every software project.
Yeah, Nix pretty much solves this problem. The other day I wanted to try a really old version of spaCy for fun/historic interest. spaCy 1.8.2 installed freshly from the binary cache on NixOS-unstable as if it was still April 2017.
My first step now when trying to resurrect old projects is to create a Docker container for it - that way I can install any old versions of anything (like node or PHP) I need without having to worry about it polluting anything else on the system.
This is literally every "hot new thing" since 2000.
It is systemic. Part of it is due to too many people creating systems on the fly with too little forethought, but also because there aren't enough "really smart people" working on long term solutions. Just hacks, done by hacks. What did you expect when the people writing the systems don't have long term experience?
The problem itself is old, but the extent to which it pervades different ecosystems varies. It's largely a cultural thing, and the problem with JS/Node ecosystem specifically is that most of the community (or, perhaps, rather most of the library/framework authors) accepts this kind of thing as normal.
I joined a node project that was stuck on 0.12 while 7.0 was being developed. It was a shit show to get us to 6. As I recall, 10 was a little tricky, 12 and 16 had a lot of head scratchers. I finished the 16 upgrade more than a year after the last person tried, and it was a dumb luck epiphany that kept it that short.
I had a similar experience with emberJS when it was still young. Every time I picked the project up I had one to two hours of upgrade work to get it to run again, and I just had a couple hours to work on it. So half my time went to maintenance and it wasn’t sustainable.
I’m trying a related idea now in elixir and José may be a saint. Though I fear a Java 5 moment in their future, where the levee breaks and a flood of changes come at once.
I’ve started to adopt Nix devShells to help keep a record of each project’s dependencies.
If Nix is too heavy, the learning curve for tools like asdf-vm and mise is much lower and offers similar benefits.
I really wish there was a good equivalent for Windows.
I had this exact problem with multiple Node blog engines in the past. Constant version breakage was incredibly frustrating. I eventually moved to Hugo. A single binary which I committed with the blog files. Zero issues even years later. I can build the blog on any new machine within seconds. Which was the other revelation of Hugo. 10 seconds to build an 800+ post blog vs minutes using Hexo or similar.
We run node code that's 10 year old. No one dares to touch it; we just run it in docker and hope nothing goes wrong.
First thing I would have done is upgrade the version of Gatsby to latest. Did the author try that?
If upgrading is difficult because of 4 years of breaking changes, blame Gatsby for not being backwards compatible. Also blame your original choice of going with a hokey framework.
Speaking of hokey framework: 167 dependencies and 3000 versions of Gatsby in npm.
The first thing I would have done is check the version of Node.
That's a quick fix; upgrading a framework is a guaranteed hour-plus of poking around before the system is even running.
I dunno why `engines` isn't in every `package.json` file, would certainly have saved me hours of nonsense.
blaming anything or anyone gets you exactly zero seconds closer to getting the job done.
You could perhaps reframe "blame" as identifying the source of the problem, and I understand why it can be a useful exercise (also, none of us here are really trying to solve the problem, just wasting time on the internet). In this case Node and its attendant ecosystem are certainly part of the problem, but I would agree that Gatsby is a bigger part of the issue, as they don't seem to have any interest in taming the Node dependency management beast. I've had to dig into Gatsby projects mere months old and it really was like opening a can of worms.
> In this case Node and its attendant ecosystem are certainly part of the problem, but I would agree that Gatsby is a bigger part of the issue
I disagree completely. Regardless of what you think of Gatsby, Node versioning is a simple problem that affects nearly every javascript project. It should always be the first thing you check.
I made this dumb, obvious mistake again just last week; now I'm looking for a little time to audit all my `package.json` files for `engines`.
It's not just personal blogs, as many of y'all know... this is a daily struggle for any SDE working mainly in web. I am constantly lamenting the fact that I spend like 70% of my time at my main job (large financial company) just trying to get my environment, or the application's environment, into a position to actually develop on the application itself. It's insane.
I f'n love JavaScript... it's given a lot of us a doorway into software engineering, where a lot of us realize how... 'special' JS and related web development are ;) But man, it can make you really want to smash the computer some days.
I think many issues come down to bad versioning or, in some edge cases, not vendoring dependencies. I've had good and bad experiences across multiple programming languages; some bad examples:
- bumping the patch or minor version of a React package only to find the maintainer rewrote the entire project, breaking a lot of things; following semver, you shouldn't expect things to break like that at such a version;
- another example: a Ruby gem gets removed/yanked from rubygems.org and you have to find an available fork
In the end, we need to follow good software engineering practices around testing and release management; the latter, by the way, is decades old
node-sass is to blame for like 95% of these node-gyp issues in my experience. It's not that much grief to deal with, but it's hard to grasp how it was allowed to hang around in such a sorry state for so long
> it's hard to grasp how it was allowed to hang around so terribly for so long
It's not hard. The explanation is simply that the dart version of SASS (the sass npm package) is much slower than node-sass.
At one point it was like 20x slower and I believe right now it's about 2-3x slower.
Made a site using nanogen ( https://github.com/doug2k1/nanogen ) about 7 years ago ... Anytime I set up a new machine I do an npm install on whatever version of node I end up on, and it..... just works.
Best SSG I've found, and all from a Medium or dev article on an SSG in 40 lines or less.
node-gyp was a mistake, building of native addons should have been an explicit separate step all along.
i could not tell from the article whether this was a site with a backend using node.js or if it was just a frontend depending on node.js for the build tools.
for the latter i get around the problem by avoiding build tools altogether. i use a frontend framework that i can load directly into the browser, and use without needing any tools to manage dependencies. the benefit from that is that it will ensure that my site will keep running for years to come, even if i leave it dormant for some time. the downside is that it is probably less optimized. but for smaller sites that aren't under continuous maintenance this is a reasonable tradeoff. i built all my recent sites that way using a prebuilt version of the aurelia framework.
incidentally just today i tried to research if i could build a site with svelte that way. well, it turns out that although it should theoretically be possible, i was unable to find a prebuilt version to do so after a few hours of searching. for vuejs i found one within minutes. i'll be learning vuejs now.
I've been experimenting recently, with quite some success, with having a 'libs.js' file that pulls in and re-exports everything external I want, and providing a script that applies 'bun build' to just that.
I haven't yet decided if/how I want to include a prebuilt version of it in the repo, I *think* I may go the approach of having a commit that modifies libs.js and/or the lockfile and then an immediately following one that commits an updated prebuild ... oh, huh, actually, I should probably also consider doing those two commits on a branch, then forcing a merge commit so they land on master atomically but it's easy to tease out the human changes and the regen changes by poking inside said merge commit ... yeah, like I say, still thinking about exactly how to do this, don't mind me.
Also for even simpler cases I've been using the preact-htm prebuild directly, since htm gives a lit-style html() tagged literal consuming function that can produce vnodes for preact so I can mess around without needing something that understands jsx between my editor and my browser window.
vue's component system is IIRC noticeably less nice to work with if you don't have a compile step, but it's still pretty nice even without that so please don't think I'm trying to dissuade you here :)
note that my websites/apps are completely free of anything to build. the source that is used for development which goes into the repo is what is being loaded into the browser without change.
saving a prebuilt version of code that needs building is of course also helpful, and much better than having to rely on keeping your build tools working. but when you want to make changes to the site you have to either deal with the build tools anyways or work with the prebuilt version which may not be as practical.
either way i would simply save the prebuilt version to a branch and if any changes are made in that branch cherrypick them over to the dev branch if they even can be use, which i am not sure about. i'd probably rather avoid making changes to the prebuilt code in the first place
how does lack of a compile step affect the code? are there things i won't be able to do if i don't compile? i haven't started yet, and my website is not very complex so i think i'll manage either way, but i am curious. can you link to an example?
I understood what you were doing, yes.
However you said you're using a prebuilt version of the aurelia framework - and I'm only using a prebuilt version of the frameworks I'm using.
Which seems pretty equivalent to me, except that your framework prebuild is created by somebody else running a script and my framework prebuild is created by me running a script. Either way the result is a single framework file to load that gets treated as basically a black box from then on.
My actual application sources are being direct loaded unchanged just like yours are.
The difference is basically syntax - vue recommends (and most example and/or real world vue apps I've seen use) their Single File Component syntax, and *that* requires a build step - see here for the difference in that and the build-free definition syntax: https://vuejs.org/guide/essentials/component-basics.html#def...
As I hope I managed to make clear, not a big deal at all in the grander scheme of things, just something to be aware of.
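For concreteness, the build-free flavour looks roughly like this (a sketch using Vue 3's prebuilt ESM browser bundle from a CDN; the component name and markup are illustrative):
```
<div id="app"><greeting-card></greeting-card></div>

<script type="module">
  // prebuilt full build of Vue 3, loaded straight from a CDN: no npm, no bundler
  import { createApp } from 'https://unpkg.com/vue@3/dist/vue.esm-browser.js'

  const app = createApp({})

  // plain-object component with an inline template string, instead of a .vue SFC
  app.component('greeting-card', {
    data() { return { message: 'hello from a build-free page' } },
    template: `<p>{{ message }}</p>`
  })

  app.mount('#app')
</script>
```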
"your framework prebuild is created by somebody else running a script and my framework prebuild is created by me running a script."
you are right, i didn't read it that way, but i get it now.
i forgot about that aspect of vue, but if i read that correctly the joke is on them because the single file component syntax is one reason i initially rejected vue. i felt that it would make editing harder because editors would need special support to handle the file format. although looking at it now, it doesn't look so bad. it's just html with inline javascript.
anyways it looks like instead of inline templates i can also reference external templates and that would let me structure the code the way i want.
you are right, it's not a big deal in the grand scheme of things, and even from someone being prejudiced against the component syntax, it is a tiny issue compared to all the other good or bad choices a framework can make.
thank you for the link.
Yeah, I can see how you read my initial comment, and given the combination of "I could probably have been clearer" and "my approach does not seem to be one people take very often" it makes sense for you to've read it how you did, hence me turning the verbosity up to 11 for my second attempt at explaining it :D
Given the javascript is still inside a <script> tag I would presume any editor that can handle a normal HTML page with some inline javascript wouldn't notice the difference, yeah. Hadn't honestly thought of that since one of the first things I do in any editor is turn all the file format handling stuff off because I'm a curmudgeon who thinks in https://github.com/n-t-roff/heirloom-ex-vi
Yes re external templates; my usual approach to 'keeping the code and template close together' is to have them open in adjacent 80x24 xterms.
You might find mobx of interest - I tend to use that for state modeling no matter the framework doing the rendering - everybody seems to be getting very excited about 'reactive signals' these days and ... they basically all have their own implementation that feels, to me, like Yet Another NIH Of A Tiny Subset Of MobX ... except invariably missing at least one feature that I really wanted.
Which is how I ended up with my own libs.js on a current project, the sum total of which is
export { render, createElement, options as 'preactOptions', Fragment } from 'preact';
export { observable, action, computed, flow, createAtom, Reaction } from 'mobx';
export { observer } from 'mobx-preact';
and then on the fairly sporadic occasions that I need to adjust the exports I have a shell script that does a tiny bit of bookkeeping and then runs
bun build --format=esm src/web/libs.js >bundle/web/libs.js
so I run that, and then go back to forgetting that the build process (and the node_modules directory it's sourcing those libraries from) exists.
I also have about 20 lines' worth of custom dev server that will serve the bundle/ file preferentially over the src/ file if it exists, plus a couple other minor things.
But this is, for me, all about keeping things as simple as possible barring some slight effort towards ergonomics, plus knowing that I understand every part of what's going on so I don't end up stuck in a "one of my abstractions is leaking and I've no -ing idea how or why" type situation (hence also my very minimalist choice of editor, any time I try a more clever one I fairly rapidly end up in a situation where the tab completion does the wrong thing so I just type everything out anyway, or where the syntax highlighting produces colours that give me a headache, or ... just colour me a curmudgeon who learned his chops on ancient BSDi and Solaris systems, I don't expect anybody else to want to use my dev environment but it works for me).
Anyway. None of this is to try and convince you of anything much, I just thought you might find my setup vaguely interesting. I'll stop waffling now :D
"Given the javascript is still inside a <script> tag I would presume any editor that can handle a normal HTML page with some inline javascript wouldn't notice the difference"
i agree. it's been a while that i looked at vuejs, and i don't know why i came away with a different impression before. must have not looked closely enough.
i haven't seen mobx before, but when i read the description i wonder why i need it. my preferred framework is aurelia and as far as i can tell aurelia already does what mobx claims to solve, in particular this part (from the mobx github page):
Trying to update a record field? Simply use a normal JavaScript assignment — the reactivity system will detect all your changes and propagate them out to where they are being used
it is actually the primary reason why i like aurelia.
with aurelia i don't even have to mark properties as observable. it figures that out on its own based on the bindings i make in the html template.
though mobx may be interesting when i work with other frameworks that don't do that. i'll have to keep that in mind. (edit: it looks like it may come in handy when state management becomes complex: https://stackoverflow.com/questions/39454579/best-practice-u... )
"keeping things as simple as possible barring some slight effort towards ergonomics, plus knowing that I understand every part of what's going on"
yeah, i like that too.
"I just thought you might find my setup vaguely interesting"
i do indeed. thanks a lot for that. my own setup is actually also quite simplistic, but not deliberately so. it's mostly lazyness. i simply don't want to be bothered to put a lot of effort into a better dev setup. i'd rather work on actual code. so i start with plain vim without any addons, and only slowly change stuff when i run into a problem that really bothers me. solaris, AIX, irix is where i started.
When I start with "plain" vim the first thing I do is drop in https://trout.me.uk/X11/vimrc to nerf it back to as close to ex-vi as I can get - because my muscle memory comes from classic vis (between the mostly-BSDi job and the mostly-Solaris job I did devbox sysadmin for a software house, so I got to handle Solaris, AIX, IRIX, Tru64, SCO OpenSewer, and a couple others all at the same time ;) my experience of ex-vi versus vim is "when I typo in ex-vi it beeps at me, when I typo in vim it activates yet another feature I didn't know existed and breaks my concentration."
I took a quick look at aurelia after you mentioned it and am clearly going to have to take a deeper look at some point, I'm curious how it's handling all that under the hood, and for cases it handles well it does indeed look really rather nice (though being me I'll need to take it apart before considering using it, automagic reactivity is really cool but only when I can reliably dry-run its path through the framework in my head as I'm writing the code that (ab)uses it).
(so, cheers for mentioning aurelia, all the best conversations involve both/all people involved coming away with extra things/ideas to poke at :)
Anyway. To mobx:
I think the best way to understand mobx before you've actually used it is roughly "it not only provides simple reactivity stuff, when stuff starts getting more complicated you'll find that the more powerful tools you wanted are already there, implemented, and will show you what they're doing in the devtools out of the box."
As an example, computed() is very handy (and the one mobx feature other than just "reacting to changes" that most frameworks' reactivity implementations *do* copy, although I have vague memories of them not always copying it as thoroughly as I'd like) - so this is a bit of a "too simple" example but we're in HN comments so
  class SomeData {
    @observable rawData = [];
    @observable sortBy = 'someField';

    @computed get data () {
      let { sortBy, rawData } = this;
      return rawData.toSorted((a, b) => (a[sortBy] - b[sortBy]));
    }
  }
(sort function designed for numerics only, dumbass example is dumbass)
and then assuming your display component is tracking its dependencies somehow, a change to the `someData.sortBy` field will automatically expire the cached `data` element and notify the component so it can re-render, at which point `data` gets recalculated and re-cached (and all this will show up as events in the devtools if you've stuck them into your page somewhere).
Since aurelia is basically taking apart your bindings to figure out what to observe, I think it wouldn't be able to track that, and you'd instead have to make sure that when `sortBy` gets changed it fires a trigger to recalculate the `data` field (and then aurelia *would* notice that getting set to a new value).
(aurelia may be smarter than I think, but the only way I can think of for it to *be* the necessary amount of smarter would basically be to implement a subset of mobx's makeAutoObservable and I don't think it does that and I don't think given aurelia's (clearly consciously chosen) aesthetic it *should* do that)
I'm tending towards having viewstate kinda objects as a layer and then model objects behind them, so `rawData` would delegate to the model objects in the above example - then the code that modifies the data I'd be persisting to a backend doesn't have to think/know about how it's going to be displayed, but if you modify it - `addTodo` or an `editTodo` or similar - the change notifications will propagate outwards, expiring things in the viewstate, causing things in the view to notice and re-render themselves as required.
This is especially noticeable to me when e.g. I'm writing a data viewer onto a table of log entries (or something in that vague area, *handwaves vigorously*) and I want to have a 'Refresh' button - that triggers something in the model layer that does a `fetch()` call or whatever, and then sets the new model data to whatever the backend sends back ... and then everything re-renders, keeping all my display choices intact because the viewstate layer didn't get touched.
My experience of learning mobx included a number of "oh for crying out loud, I've implemented half of this feature the hard way in three projects before now" moments - and *that* was what made me fall in love with it.
You might also want to bookmark (and then mostly ignore for the moment) mobx-keystone, which is Even More Complicated under the hood, but provides fairly simple syntax for declaring a tree of reactive state classes that provides (typed if you're using typescript) constructors for you, tree snapshots if you want to be able to save your session state to the backend to resume later, and mutation event logging as JSON-able objects that also includes reverse versions so you can get full undo with a fairly minimal amount of effort (I never, ever want to write undo functionality by hand again, that always required a significant amount of bourbon). Not necessarily to actually *use*, mind you, but seeing what keystone is capable of was useful to my understanding of what mobx itself was capable of.
I ... there's definitely a "would rather be writing code than yak shaving my setup" aspect to my choices, but it's definitely also a question of having as few layers as possible between me and What Actually Happens so things reliably behave how I expect them to behave. Debugging is much more fun for me when I can be confident I'm debugging *my* code rather than there being a decent chance I'm actually debugging how I've misunderstood something else two layers down.
(also, if you're not doing anything particularly strenuous, I'd suggest having a look at bun as an alternative to node, all my 'real world' and/or work stuff is node and I'm not particularly offended by it, but bun is definitely a smoother and more DWIM experience for me, at least so far)
i get you on vi. i was there (3000 years ago ;-), although only at the tail end of it in the early 90s. i switched from emacs to vi because of my sysadmin job.
i am really curious what you make of aurelia once you take it apart. please ping me if it is not to much trouble. i discovered aurelia when angularjs 1 was being redeveloped into angular 2. rob eisenberg was invited to the angular team based on his work on durandal but when his ideas were not accepted he left again and built aurelia instead. aurelia is not perfect either, but it was way better than angular 2 or vue at the time.
on your mobx example, you are right, aurelia doesn't track everything. it may be able to handle some level of complexity but i certainly had situations where i had to explicitly send a signal to get aurelia to update values.
> node-gyp
We're in ... let's call it a transitional period at work. I've got something like a dozen versions of node being managed by asdf. And in half of the projects I work on regularly, I consistently get warnings about this particular project failing to build.
One day, I'll actually look up what it is, what it does, and why it's being built when it's apparently optional.
It's basically a set of tools that makes building native modules easier; said modules then use it to handle their bindings to C/C++/etc. code.
Everybody complains about it, and understandably so, but if it didn't exist you'd probably instead have one set of potential similar problems *per* native module which has a good chance of not actually being better overall.
The counterargument is, I guess, "well, only people who can write their own high quality build setup in-tree should be writing things that bind to external code," and I do sometimes dream of that, but it's not hard to see the downsides of living in *that* world instead either.
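For anyone who's never looked inside one of these packages: roughly speaking, a native addon ships a binding.gyp describing how its C/C++ sources get compiled, and npm runs node-gyp against it at install time. A minimal sketch (the target name and source file are made up for illustration):

    {
      "targets": [
        {
          "target_name": "addon",
          "sources": [ "src/addon.cc" ]
        }
      ]
    }

If I remember right, npm will kick off `node-gyp rebuild` automatically when it sees that file, and that's exactly the step that fails when the Node headers, Python, or the C/C++ toolchain on your machine don't line up.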
Anybody have experience with pulling/running a docker image after many years?
I'd like to see the consensus on the most durable software packaging method.
2020 is not "old".
"Not Invented Here" is whats going on. Developers of this age need to learn this.
A recent exmaple is the RPI Foundation nullifying thousands of internet tutorials renaming "/boot" vs "/bootfs". Ask yourself a serious question, did that actually improve anything? No it did not.
Dealing with node-gyp cost me at least 5 hours a month in the 2010s. I'm so very happy to not see those errors in my console anymore.
Native code in an npm module should be regarded as a massive red flag.
Cool that you managed to get it running after just 2 hours. The same thing applies to Python projects: a little note in the README saves so much time in the future. I always try to use virtual environments and specify a specific Python version so I can just nuke and reinstall everything.
OP was trying to install an old dep tree of gatsby on a different node target. These kinds of massive libraries break all the time. Look how big its dependency tree is: https://npmgraph.js.org/?q=gatsby
Acknowledging this is absolutely awful, and also commenting that a project .nvmrc file is your friend!
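For anyone who hasn't used it: .nvmrc is just a one-line file in the repo root that nvm reads, so "the version the project was built with" travels with the code. A minimal sketch (the version number is only an example):

    echo "14.21.3" > .nvmrc    # whatever version the project was last known to work on
    nvm install                # with no argument, nvm reads .nvmrc
    nvm use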
...until the node version you locked can't be downloaded anymore, or hasn't ever existed for your CPU arch.
I see you too had to run node v14(? my memory fails me somewhat) on Apple Silicon hardware...
FWIW, I run Node 12 painlessly on Apple Silicon using fnm, so you might be thinking of a few versions before that.
shivers
I had to build some project that uses some Ruby package manager. I forgot already what the package manager is called. I got some error about "you don't have all the dev tools". So I installed what Google told me "dev tools" was. Then it still told me that I needed more dev tools. Stackoverflow had some question about this package manager. For Windows (Linux here). 20+ answers, mostly for Mac. All in the style of "this random thing worked for me". All with at least one upvote. Some answer about "I needed to symlink this system library".
Gave up.
Then I ran `devbox init` and installed whatever it told me was needed. `devbox shell`.
At first I thought it would be a decade-old project, but 4 years isn't old by any standard, is it?
Anyways, npm ci should have been the first attempt, not npm install, so that it installs the same package versions defined in package-lock.json. Then, as others have mentioned, pin your node versions. If you're afraid of this happening again, npm pack is your friend.
In the end, OP could have done a bit more. BUT I'll give it to him that when bindings are involved, these things take more time than they should.
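For what it's worth, the path being described here looks roughly like this, assuming a package-lock.json was committed back then (the Node version is a guess for illustration, not taken from the article):

    nvm install 14 && nvm use 14   # whichever major version the project targeted
    npm ci                         # installs exactly what package-lock.json pins
    npm run build                  # or whatever script the project actually defines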
Node/JS seems particularly fragile in this regard, thanks to the complicated maze of dependencies and sub-dependencies and flavour-of-the-month framework syndrome.
I think this was the biggest mistake that Java made. Breaking reverse compatibility after Java 8 means that thousands of organizations will never leave that version. There is an entire industry based around maintaining Java 8. Eventually there will be two versions of Java, just like Python 2 and 3. There will be Java 8 and Java 698.
This goes for both node and python: Avoid native extensions. For python this is less feasible due to its inherently poor performance, so limit yourself to the crucial ones like numpy. For node, there are few good reasons why you would need a native extension. Unless you have your node version pinned, it will try to find the binary for your node version, fail, then attempt to build it on your system, which will most likely fail as well.
As a rule of thumb if you are jumping into a legacy node project the first thing you should try is a sequentially older node version. Could've reduced his two hours to two minutes.
My take is that one should be especially wary of packages that depend on C libraries and need compilation. You get to be extremely bound to what the OS distribution has to offer. If that's the case, docker is probably the safest solution.
Node.js (or more accurately, the entire Javascript ecosystem) changes, but the tropes don't.
This is the reason I ripped out Gatsby from every project where I could. Every six months I’d spend an entire evening fixing obscure problems that shouldn’t even exist just to get things running. And that’s not even considering actual breaking changes of which there were plenty.
I still have to maintain many old projects that are using Node 7, but many dependencies are no longer available. Every update means patching the docker image manually, starting from the last runnable image.
Lately I have been revisiting some older golang tools I wrote before they introduced go modules.
"go mod init" + identifying a working dependency version was all I had to do on any of those 10+ year old projects (5 minutes of work, tops)
Can't help but feel that this is a massive nothing-burger. You wouldn't generally expect your Java project to run if you use an incompatible version of the JVM, nor would you generally expect your C++ project to build if you swap one compiler for a different one. Etc, always specify what your project relies on, whether it's in the readme or in the dependency tree.
> nor would you generally expect your C++ project to build if you swap one compiler for a different one.
it's absolutely the expectation for C++ projects to support building on GCC, Clang and MSVC
Maybe these days this is the case, last I used C++ about a decade ago though, this was most definitely not guaranteed to be so. This thread [0] is a good quick overview of the type of issues you used to run into, and often you didn't even need anything fancy to hit those edge cases. If life has improved in the meanwhile, that's good of course.
You should save your deps in your SCM! Microsoft is giving away ownership of existing packages if you tell them you will use it for a TypeScript project.
What? Does Microsoft control npm?
Yes. npm is a subsidiary of GitHub, a subsidiary of Microsoft
[deleted]
We use package-lock.json and a docker image with local folder binding to run legacy node projects. E.g. docker run -v local:inner node:12 command
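Expanded a bit, in case it helps someone; the paths, image tag and build command are examples, not a recommendation:

    docker run --rm -it \
      -v "$PWD":/app -w /app \
      node:12 \
      sh -c "npm ci && npm run build"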
The tragedy of running an̶ ̶o̶l̶d̶ Node project.
I had to chuckle after I read your username. Kudos.
On the one hand, it's not that terrible and most *of* the terrible is from people making silly choices.
On the other hand, there's a reason I regularly get annoyed enough at it to call it nope.js.
On the gripping hand, I mostly write perl, which argues for a different but unique set of masochistic tendencies on my part.
(you just have to remember that what 'perl' *really* stands for is 'Perennially Eclectic Rubbish Lister' and then you will have appropriate expectations and can settle back and have fun ;)
For sure. This is the number one reason I am switching as many projects as I can to HTMX.
Sibling comments say in so many words, it's no big deal bro, just update. But it is a big deal over time if you have dozens of cold-blooded projects to deal with.
Gatsby and htmx have approximately zero overlap
The point is NPM. HTMX helps you avoid NPM.
OP is tired after 2 hours of work.
OP is tired after two hours of stupid bullshit that should not ever occur. Stuff like that's enough to make a man quit a job, or indeed a career.
Running a new Node project is nearly as problematic… The ecosystem is broken
This could equally be written about old Android projects.
FWIW, I've mostly maintained long term PHP projects, and I've had nearly unaltered codebases running for ~25 years since php3. No frameworks, just core PHP. People dump on PHP, but it's a very good tool if you're focused on maintainable output and pick the right functional APIs to cede to mature unix tools. You can expect decades of solid, maintainable output.
Experienced programmers will not pick up a "built on shifting sand" stack, because they can acutely perceive the pain and suffering before it happens, generally from past experience. With fast-crumbling stacks, you need to execute quickly and move on, and treat the whole codebase as an expiring entity. Stacks I personally try to avoid: anything node/javascripty, anything Androidy, anything iDevicey.
Those who don't understand Unix are condemned to reinvent it, poorly. - Henry Spencer
You spent only two hours on this and you think it’s too much?
Also, do not run shit on a node version that is years out of date and out of service. Also, update your damn packages. I know I sound cranky, but running anything internet facing with god knows how many vulnerabilities in is an exceedingly bad idea.
Having CI would have avoided this problem.
What you mean here is actually deps pinning + vendoring.
It's way out of scope of basic CI concept.
CI doesn't help when all the dependencies aren't locked, or where some deps disappear, or ...
Not for running the project locally
I think you're missing the point.
CI solves it because it proves that it can build in the pipeline, using a well defined environment.
No guessing at which node version you need or any other dependencies that may be required.
Have you tried DevContainers before?
yeah? now try running a 4-year-old React project, it's hell on earth.
next.js user? :D
DHH has said this experience is a big reason Rails is pursuing a no-build approach.
can you elaborate or link to an article that explains what this means?
I still can open my decade-old Java projects, run build with modern Maven/JDK and get working code - in a few minutes. Two hours of dancing with a drum doesn’t feel acceptable to me.
> I still can open my decade-old Java projects, run build with modern Maven/JDK and get working code - in a few minutes.
Try any old Spring project, where anything newer than JDK 8 will be incompatible. The only saving grace is that JDK 8 is still available, but even it will eventually reach EOL. And then you look at JDK 11 projects and realize that they won't run on anything newer due to Lombok issues, so that's another thing to update and fix.
I think the experience of code rot is universal and increases with the amount of dependencies you have.
> I think the experience of code rot is universal and increases with the amount of dependencies you have.
This is true but there's also a factor from the language/framework in use. Node is especially bad because of it generates huge package dependency trees. Go is especially good because of the large stdlib (which I use to minimize deps in https://github.com/contribsys/faktory) and excellent backwards compatibility.
Plus even if you were using lots of deps, the Go code would almost certainly keep working, thanks to minimal version selection and the Go module mirror. (And of course, for maximum reliability, you could even easily vendor deps with "go mod vendor".)
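i.e. something like this, if you want the deps checked in next to the code (recent Go versions will use ./vendor automatically when it exists):

    go mod vendor          # copy every dependency into ./vendor
    go build -mod=vendor   # build strictly from the vendored copies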
> Go is especially good because of the large stdlib
Come on… compare it with what java and python have.
Java can't even read/write json without third party libraries.
Tell me a single thing that Oracle has added to the standard library. As far as I can tell, more and more of what was once standard is now getting offloaded and trademark-washed with the eclipse and apache foundations.
What does Java or Python have in their stdlib that go doesn’t?
Well python has syslog, mmap, just to name a couple of things.
They had mmap, but it was dropped at some point, I guess.
They do have syslog though https://pkg.go.dev/log/syslog
Spring generates bytecode, and Lombok is a language extension. You're effectively writing code in another language that's very similar to Java.
The company I work at has a ton of projects stuck on ancient spring versions and Java 8 (or Java 6 in one instance). They still insist on Spring despite being essentially unable to upgrade to a version newer than a decade old.
> Spring generates bytecode, and Lombok is a language extension. You're effectively writing code in another language that's very similar to Java.
That's the reality of using the language in production projects.
Same as how you're likely to see more than just Python or Node being used in projects that list them in the tech stack, because there's native dependencies and tooling used.
Old Spring versions < 6.1 are obsolete, so you do have to upgrade Spring in many scenarios as well. No security fixes for them unless you pay for support.
I spent an hour or two figuring out how to even download Java, whether I need to give Oracle my home address, use a third party JDK etc. Then it turned out the standard built in GUI library I needed was no longer standard or built in. (I also used it ten years ago and it was a much better experience then.)
JavaFX? I’ve been there too, had to install it with maven and waste a few hours getting the maven xml just right for it to bundle the entire thing into a runnable jar (one that doesn’t require shipping extra files). Not the most convenient, but I get that they wanted to split it off the JDK release cycle.
To save you time in the future:
https://adoptium.net/en-GB/
Haven't used javaFX for a while but this is worth a shot
https://openjfx.io
>I spent an hour or two figuring out how to even download Java, whether I need to give Oracle my home address, use a third party JDK etc.
My son found a disk with some of my old java projects from college 20 years ago, and that's about what it took to run them: first figuring out how to even download java, and then making some minor changes to get them running. I think we gave up trying to get the actual applet-based ones to run.
I know this is a late response, but for anyone curious you would need to use an official Oracle or Sun JDK/JRE from Java 8 or older. OpenJDK doesn't include support for applets.
You also need a browser that has NPAPI. IE 11 was the most modern browser I am aware of that still supported applets.
The old GUI framework mentioned in the GP might have been Swing. It is still included in most JDKs and allows for cross platform desktop GUI application development with no other dependencies outside the JDK. Finding documentation on how to do GUIs in Swing is getting increasingly difficult though.
IIRC Oracle makes it really hard to find the download. Admittedly we didn't spend more than an hour or two messing around with all of it and getting the applets to run wasn't a high priority or anything.
applet. thats a name i havent heard in a very long time.
Well, I would spend probably similar time to get started on Pascal, which I used last time in the previous century. I would not blame Pascal for it though.
Maven, maybe, but Gradle absolutely not. If you don't have the exact version of Gradle that you used before, you're in for the same kind of misery documented above, with the same end state: just stick to the old version and deal with the upgrade later.
If you use the Gradle wrapper, it will use the correct version and download it if necessary. If I'm not mistaken, that's the recommended approach.
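Roughly, for anyone who hasn't set it up (the version number is just an example):

    gradle wrapper --gradle-version 7.6   # generates gradlew, gradlew.bat and gradle/wrapper/
    ./gradlew build                       # downloads and uses exactly that Gradle version

The wrapper files get committed, so whoever clones the repo years later doesn't need a matching Gradle installed.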
Well, I'm not talking about Gradle, right? Sometimes the conservative choice is what gets the job done.
Right, I'm just clarifying for others who may not know the difference that Node doesn't have a monopoly on instability.
There are a very small number of projects that specifically make it their goal to be backwards-compatible effectively indefinitely, and Maven is one of those. It's part of what people who hate it hate about it, but for others it's the main selling point.
Well, the article is about npm, a package manager for node. Vendoring dependencies into source is a choice, albeit one that I don't often reach for.
> Well, the article is about npm, a package manager for node.
And Maven is a package manager for Java. The main difference IMO? The usual way to do things in Maven is to always use exact versions for the dependencies. When I specify I want some dependency at version 1.2.3, Maven will use version 1.2.3 of that dependency even if 1.2.4 or later already exists.
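i.e. the typical declaration pins one exact version; the artifact below is illustrative:

    <dependency>
      <groupId>com.example</groupId>
      <artifactId>some-library</artifactId>
      <!-- resolves to exactly 1.2.3, never "1.2.3 or newer" -->
      <version>1.2.3</version>
    </dependency>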
Pinning to exact versions has been supported in npm for most, if not all of its life.
That’s the usual way to do things in most teams working on app code I’ve been a part of (as opposed to library code where version ranges are preferable).
Maven is also stable and requires only a JDK to run. You can unzip it to the desired location and add it to PATH - it will be ready to go in a couple of minutes.
It will not build all old projects out of the box, of course. Specific versions of plugins may not be compatible with it or some dependencies may break on modern JDK. But chances of hitting this issue are much lower than in JS/NPM ecosystem.
Might be an unpopular opinion but I like it that way. It means that I can pull an old Gradle version and the build should succeed. Unlike maven where I’ve been bitten by plugin xml syntax changes and such.
`make` works pretty well on ... everything I've written that I kept since then.
`cmake ..` otoh, tends not to.
`make` itself works fine, that hasn't changed in decades. That doesn't mean the tooling `make` scripts kick into gear still exist, or work, or do the same things, though. Makefiles don't fix the dependency on a fifteen year old C compiler.
If node saved you 8 seconds per working day over those 4 years, you’d still be ahead spending 2 more hours on that node project every 4 years.
C# devs can open decade+ old solutions without issues, maybe this is just "normal" for the JavaScript ecosystem, but there absolutely exist other ecosystems which don't absolutely waste your time in this way.
Not true for the entire C# ecosystem. I tried rebooting a Xamarin project I coded a couple of years ago. I had to spend weeks upgrading it because Microsoft decided to discontinue Xamarin and force everyone to use .NET MAUI.
This has to do with specific framework and does not translate to the overall experience (for example targeting iOS is pain because Apple deprecates versions quickly, so downstream dependencies have to match this too).
You can open and build a back-end application that targets e.g. netcoreapp2.1 (a 6-year-old target) just fine; it might require installing an archived SDK for the build to succeed (which you can download regardless, though it will complain that it is EOL), but it's an otherwise simple procedure.
For library code it's even easier - if you have netstandard2.0 target, it will work anywhere from .NET Framework 4.6.1 to the latest version without requiring any maintenance effort whatsoever.
On Windows, Visual Studio will happily work with .NET Framework 3.0 (which is ancient) and more.
If you used pure Node.js, you wouldn't have any problems whatsoever, too.
Unlike Node projects, your dependencies don't break - they stay on nuget.org, and the projects compile just like they did when they were first implemented. Upgrading dependencies and project target is subject to the same restrictions as in other good languages.
I know for sure that I can clone a random project on Github, hit `dotnet build` and usually expect it to work on the first try. The rate of bitrot for average unmaintained JS project cannot be compared. The average dependency graph of a .NET project is going to be 10 times smaller too.
This is true for NodeJS as well - dependencies on NPM cannot be removed, so at long as NPM keeps running, and as long as you've got a lockfile that precisely specifies all dependencies, you shouldn't run into problems running older projects. (Certainly I have never had an issue with this - the problems, if they occur, are always around upgrading. And like you say, this is subject to much the same issues in every ecosystem).
If this isn't enough, there are tools like Yarn which encourage vendoring dependencies to ensure that as long as you have a copy of the repository, the yarn CLI, and a valid version of NodeJS, you can always run the project.
Yeah, that's nice, unfortunately it's the built-in Microsoft stuff like WPF that keeps breaking between versions.
Note that Microsoft still removes downloads of previous versions of Visual Studio that aren't the latest - now you can only download VS2022 unless you have some mysterious paid account that has access to those. I don't have one on either my personal or my corporate account, so I don't know the details.
This is true and is a pain if you’re stuck on something like Windows 7/8.1 (or 10 in the future). The installer being a web installer that auto-updates on start also means that downloading an older one doesn’t help at all. The only way seems to be to pirate the Enterprise offline installer which of course isn’t really an option.
As it happens I've recently upgraded a 2 year old project with both node and C#
It was much the same in both. If you're happy using outdated and unsupported components with security issues AND you can get hold of the right version of the dev tools and plugins AND your hosting environment still supports the old platform, you can maintain the old version with minimal change. But should any professional developer do this?
It depends.
In domains where software is expected to run for years, if not decades, it's common to archive the entire toolset, along with libraries and SDK's in case you need to fix a bug/add a feature 10 years later. Obviously, in this case you can't have dependencies sitting somewhere on a server you don't control.
There are also situations where this is forced by regulation: you need to recreate build xxx.yyy.zzz 7 years after release because a customer reported a serious bug and it needs to be reported to the relevant regulatory agency and investigated.
Not any more. Security vulnerabilities in dependencies now break the build by default.
It's arguable whether this is the correct decision, but it makes things slightly harder than they used to be
> C# devs can open decade+ old solutions without issues
For some definition of "without issues"...
I wish I lived in the world you described but trying to onboard a new dev onto an existing (edit: ancient) C# project at my job is frequently a multi-day endeavor.
I think there are much bigger deviations from long-term stability the closer you get to a GUI, as things deviate further from ideals into brute-force hacks to make stuff look right.
I can remember trying to update a Unity project so it'd still be buildable for 64-bit Mac devices. The very first version bump I did resulted in several critical libraries no longer working and there being no clear alternatives to swap in.
As someone who has migrated old ASP/silverlight projects and BHOs... seriously, this is complete bullshit.
This depends highly on what dependencies are in your C# solution.
Same for his node project. If he'd stuck with dependencies that are just plain js - I'd bet money his project would have installed & built just fine.
By the time you're hitting the native addon apis and doing things like compiling python and C/C++ code... you're going to feel all the pains of those ecosystems too.
Lol, I left C# because I couldn't solve this issue and in Node.js it's particularly easy - just keep a nvmrc file and a dependency lockfile.
Sounds like you are way too used to the javascript ecosystem if you think getting an old project to build should take hours...
What ecosystem are you comparing to?
Any C/C++ project with even mild complexity has a good chance of being extremely difficult to build due to either missing libraries that have to be installed manually, system incompatibilities, or compiler issues.
Python has like 28 competing package managers and install options, half of which are deprecated or incompatible. I can't even run `pip install` at all anymore on Debian.
Even Rust, which is usually great and has modern packaging and built-in dependency management, often has issues building old projects due to breaking changes to the compiler.
All this is to try to say that I don't think this is some problem unique to JS at all - but rather a side effect of complex interconnected systems that change often.
A big reason Docker and containers in general became so popular was because it makes this problem a lot less difficult by bundling much of the environment into the container, and Docker is very much agnostic to the language and ecosystem running inside it.
Java has a great ecosystem. It’s well thought out and I can compile and run 10 year old projects no problem. In fact, I wish everyone had just copied Java’s model instead of inventing their own worse model.
I love Python but it has a terrible package ecosystem with mediocre tooling that has only gotten worse with time.
JavaScript has gotten better but it seems they are just re-learning things that were long figured out.
When I see new package managers, I just see a list of problems that they forgot to account for. Which I find strange when there have been many package managers that you can learn from. Why are you re-inventing the wheel?
In JetBrains's Developer Ecosystem 2023 survey, 50% of developers were still regularly working in Java 8 [0]—the exact kind of "stick with the old version of the runtime" solution described in TFA.
[0] https://www.jetbrains.com/lp/devecosystem-2023/java/
Java 8 is 10 years old. If you had a project with a Java version that was recent 4 years ago (11 - 14), you could run it without any problems or changes.
Because they made the design choice to stop making large breaking changes to the language and tooling. Java 8 to 9 wasn't easier than Java 8 to 17 is, it's getting off of Java 8 that is hard because they made the choice to break so much in 9.
Node does not promise indefinite backwards compatibility, which is a design choice that they've made that allows them to shed old baggage, the same way that the Java developers chose to shed baggage in 8->9. Neither choice is inherently better, but you do have to understand which choice a language's designers were making during the time window in question when you go to run it later.
"Java has a great ecosystem. It’s well thought out and I can compile and run 10 year old projects no problem."
We just had to work around breaking changes in a patch version update of Spring Boot. Maybe it was true in 2005, but certainly not the case today. I know of products that are stuck on Java 1.8 and not because they are too lazy to upgrade.
I've been involved in bringing real old Java 1.4 and 6 and whatnot up to 17 and from classic app servers into cloud, can take a bit of work but it's pretty straightforward, mostly switching out deprecated methods to their successors and copying over boilerplate config from similar applications.
Depends on the frameworks in use. I have done the same as you in bringing code that was originally started on 1.4 up to 17 and now 21 and it just took grunt work, but not too crazy since most of it was bog standard Java.
However, some other projects around here using different application frameworks are stuck since the frameworks aren’t maintained or upgraded in ways that aren’t compatible anymore.
Looking into old Java code, it is hard to remember a time before enums and what a pain it is to deal with int constants instead of typed constants.
I am not sure you should hold up ant or maven as shining examples here, but I am kinda warming up to Gradle, at least without Groovy being involved.
What do you get from Gradle that Maven cannot offer?
Javascript is a horrible language because it is basically missing a standard library, so you need external dependencies even for the most basic things that are already present in other languages. Python has a very rich standard library. You can do a lot with libc, and if you had a C++ Qt project it would provide you with basically everything you could ever need.
> Javascript is a horrible language because it is basically missing a standard library, so you need external dependencies even for the most basic things that are already present in other languages
That's not the only reason. :)
Horrible syntax full of inconsistencies, bolted on type system with TypeScript helps but will always be bolted on, quirks everywhere, as if `null` was not bad enough they also have `undefined`, I can go on.
I simply avoid it for anything but small enhancements scripts on otherwise static HTML pages.
It's okay for you to have the opinions you do, but I have zero problems programming very complex systems with Javascript, even without Typescript (before Typescript ever existed). Javascript has always been the easiest language to build anything with for me. And yes, I know a dozen other languages including C, C++, C#, Python, Go, various flavors of Assembly, and more - but Javascript is still my favorite. YMMV.
> It's okay for you to have the opinions you do
Likewise.
> I know [...] C, C++, C#, Python, Go, various flavors of Assembly
That's good. But these are all languages that lack strong typing and/or are themselves rather quirky.
Only C# and Go stand out, IMHO, as languages that are recently designed. Even Python did not have user defined classes in the first versions, and some things thus feel off (__len__, __init__, etc.).
Also C# and Go still have implicit nulls all over the place. Their designs show ignorance of modern language design. Sum-types, explicit null, immutability, sound type systems -- all lacking in all the langs you mention.
So what languages do have these IMHO "Game changers"? OCaml/ReScript/ReasonML, Haskell, Elm, Rust, Gleam, F#, Scala, Kotlin, ...
Those languages really showed _me_ something important: how it could be better.
There is another group of languages that also sits on a unique place in the solution space: the LISPs (incl. Racket, Schemes and Clojure). I found it very worth while to learn to program with them as well.
In the last few years C# did away with implicit nulls. Nullable and non-nullable object references are disambiguated with T? and T. There are multiple keywords and expressions to further make it nice to work with these. You would be correct to note that there are "nullability holes" in certain edge-case scenarios, particularly around JSON serialization. But other than that it's a pretty smooth sailing.
If you do use C#, you may also want to add <WarningsAsErrors>nullable</WarningsAsErrors> to .csproj too.
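For context, the relevant project-file bits look something like this (a sketch from memory, not copied from any particular project):

    <PropertyGroup>
      <Nullable>enable</Nullable>
      <!-- promote nullability warnings to errors so they can't be waved away -->
      <WarningsAsErrors>nullable</WarningsAsErrors>
    </PropertyGroup>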
I know it's a bit like Kotlin. I heard though that C#'s move still has some std lib bits that are nullable.
The idea is not to never have nulls. It is pointless (ha) - the way to understand T? vs T in C# is like an optional.
The entirety of the standard library has been annotated for a long time now. All new and not-so-new projects are also null-aware. Pretty much the only exceptions are completely legacy libraries, or libraries that explicitly removed the Nullable: enable setting that is on by default for all new project templates.
As I mentioned previously - it isn't perfect, but the level of "good enough" of NRTs in .NET is such that the nullability is a solved problem.
I prefer Assembly over any of the fringe languages you mentioned. There are no types in Assembly, and I rather like it that way. If you know how to work with data, then there's very little confusion about what you're doing.
And there's a reason practically nobody uses the languages you mentioned, and Javascript is so wildly popular. Most people don't really like or need type nagging systems. Sure, if you're trying to launch a rocket or doing something like building medical equipment or something else that requires covering your ass, then yeah, sure, go ahead and type the hell out of it. But for most programming tasks the languages you mentioned are overkill and frankly too obscure to use.
JS is popular due to it being the only browser lang. Did you forget that?
Also: TS popularity shows that not everyone in the JS community agree with you.
For server side web dev you will find that statically typed langs (Java, Kotlin, C#, Go) are a big chunk of the pie. Sure it also comes down to taste, but if you work in a large team, having stronger types can greatly help to keep the codebase in shape. (better IDE refactor tools, clearer for noob, harder to hide/ check in rubbish to git)
If you are afraid by obscurity, have a look at Kotlin!
Typescript is still nowhere near as popular as Javascript, and likely never will be. It's overkill for the vast number of uses of Javascript.
JS's uses have been growing steadily. The language was clearly not "designed" (I saw a talk about the first 10 days of JS; little actual designing happened).
TS is to the rescue when you have a big JS project, because that's what JS is not good at: big projects and large teams.
Except big JS projects with large teams existed well before Typescript ever did, and it never stopped anyone from producing good results. Typescript really isn't making anything possible that wasn't possible before Typescript existed.
I’d love a link to this talk if you happen to have it handy!
only one type :p
A standard library is not a "language" feature.
I check out C projects which have not been updated in 15 years and run make.
I have C code that I wrote in 1995 that still builds. There are many warnings, however.
You can't use 'pip install' in debian because they chose to do that during the transition from python2 to python3. You should use 'pip3 install' which is provided by package python3-pip from debian.
One can argue that this decision should be revisited by debian, but you should not install packages into the system python installation for project work. Always use a virtual environment.
No that does not work either. You get an error like this:
    » pip3 install supervisor
    error: externally-managed-environment

    × This environment is externally managed
    ╰─> To install Python packages system-wide, try apt install python3-xyz,
        where xyz is the package you are trying to install.
As far as I can understand, they did this on purpose to dissuade users from installing packages globally to avoid conflicts with other Python environments.
Anyway, I'm not trying to argue about if that decision is right or not - I just wanted to use it as an example for my case that the JS ecosystem isn't alone and may even be far from the worst when it comes to this kind of issue.
I understand that; you can use `--break-system-packages` or change the configuration: `python3 -m pip config set global.break-system-packages true`.
Python is different here because in many linux distributions there are many tools that rely on your system python. Python, unlike node, is not limited (in practice) to web applications. That's why you have to be more careful. So while I understand you are using this as an example, I don't feel that your comparison is apples to apples.
>Python, unlike node, is not limited (in practice) to web applications. That's why you have to be more careful.
They may or may not be running Node.js specifically, but I believe that many Linux distributions, as well as Windows, include JavaScript code in core applications. I don't see this as particularly different, except that they might choose to assume a single standard system Python that is able to be modified by standard Python development, whereas I would rarely expect that to be the case with however each component chooses to execute JavaScript.
Apps that rely on an OS-provided webview, and electron apps, are a totally different situation. This is exactly what I said. And no, they don't use any standard nodejs installation like python does. And they are different, as I said. So this is still an apples-to-oranges comparison.
>Apps that rely on an OS-provided webview, and electron apps, are a totally different situation.
No, they're not. I'm talking about core apps and services that are essential to a functional operating system. This is exactly the same situation. The difference is choices made by the OS and language ecosystem about how to manage dependencies in various use-cases. It is an apples to oranges comparison because of those decisions and not because of the language.
Which apps would those be? I'm pretty sure there are zero node servers running on my desktops & laptops at the moment.
My slightly heretical opinion is that Debian would have been better off removing system pip entirely. The system python is for the system.
My not-so-heretical opinion is that PIP should behave like NPM by default and work under a local environment subdirectory, just like "npm install" already creates a "node_modules" directory to put all files in, without the user needing to specify how and where and which env tool to use.
Ah, but that would require that the python interpreter look first in the local directory in case there's a virtualenv there, which would mean your system could break depending on which directory you ran bits of it from. Less than ideal.
It's better all round to just assume that unless you're building something to be a part of the system itself, that the system interpreters just aren't for you. There's a special case for shells where they're actually UI, but I've seen so much effort wasted over the years trying to let system interpreters do double-duty as both system tools and development environments that I've come to the conclusion that it's simply not worth the hassle.
That's the idea, yes. Do you have by any chance any experience with Node.js? Running a JS script is usually done in two steps:
First, one downloads and installs all dependencies listed in a package.json file into a local node_modules subdir. This is equivalent to creating a pip venv, activating the venv, and running pip install against a requirements.txt file. Second, one runs the interpreter with the script file against the locally downloaded dependencies.
I.e. the local env dirs in Node.js are not a suggestion that you need to learn about, make choices about, and actually use; they are just how the tool works by default, which makes the experience of using Node.js with NPM far better and less confusing than the default experience of using Python with PIP.
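Side by side, the difference I mean (stock commands; the script names are placeholders):

    # Node: project-local by default
    npm install                        # reads package.json, writes ./node_modules
    node script.js

    # Python: project-local only if you opt in
    python3 -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt
    python script.py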
I have years of painful experience with node.js, yes. The critical difference between node and python is that there are no system-provided scripts on my system which have `#!/usr/bin/node` as a first line.
There are a load of scripts with `#!/usr/bin/python` (or similar) in `/bin`, which means package resolution can't look first in a local subdir otherwise all bets are off. Again: your system will break depending on which directory you run bits of it from. The system-provided process would be loading dependencies that it was not tested against. In case this is unclear, you do not want that to happen. It would be bad. On my system I can see python scripts involved in driver management. I do not want them to do unexpected things. It would be bad.
Python package management is a mess in lots of ways, but this particular choice isn't one of them.
> which means package resolution can't look first in a local subdir
Sure it can, it's just a matter of examining where it's being run from.
I wrote python-wool to load packages from a venv if it finds one.
https://github.com/fragmede/python-wool
Yep. That comes under the category of "not default". Although now you've pointed it out I'll probably be using it.
Python is a scripting language. I shouldn’t need to faff about with environments if I want to run my script on another machine.
Encountered this too. So annoying.
I don't know if any API does this, but I often wished that APIs I used could mark up just a tiny, tiny subset as @FutureSafe, like the opposite of tagging deprecated code, so that for smaller projects you could stick to only or mostly those parts and know that you can come back after 2 years or 20 years and things still work. Maybe throw in a compiler flag to verify that nothing not-@FutureSafe is used by accident. Sometimes you just want to write something small, once, and not have to actively maintain it forever. Outside of shell-scripts or retro-platform code you can barely write Hello World for any target today and feel confident that it will still run six months from now.
Libraries in the project fixes this whole issue for C/C++. As for compiler issues, just run it with the same compiler. It really shouldn't take more than 20 mins of setup.
> Libraries in the project fixes this whole issue for C/C++.
Yeah, make sure no-one can ever fix your security vulnerabilities.
> As for compiler issues, just run it with the same compiler.
And when the same compiler doesn't exist for your new machine?
Freezing everything makes things easier in the short term, but much harder in the long term.
This is not even JS specific. All of Python / Ruby / other changing runtimes will require some upkeep. Even C has recently needed some attention because clang updated its default errors.
Even some of my Rust projects end up in this state, where updating one library ends up with needing to update interacting libraries.
That sounds pretty frustrating.
When I find my shell scripts from 20+ years ago, they still just run as intended.
It's also two hours that would have been completely avoided if the author were familiar enough with Node to know to pin the version and not try to install 4 years of updates in one shot.
Most who are here saying that X, Y, or Z ecosystem "compiles and runs" fine after 4 years are talking about the time it takes to resume an old project in a language they're very familiar with running the same dependency versions, not the time it takes to version bump a project on a language that you don't know well without actually having it running first on the old version.
I can open my 4-year-old Node projects and run them just fine, but that's because I use the tools that the ecosystem provides for ensuring that I can do so (nvm, .nvmrc, engines field in package.json).
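Concretely, the pieces I mean, with placeholder version numbers:

    .nvmrc:         14
    package.json:   "engines": { "node": "14.x" }
    .npmrc:         engine-strict=true    (makes npm error out under a mismatched Node instead of just warning)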
The author didn't update all dependencies, they just tried running it on a newer version of Node itself. That is definitely a use case included when most people talk about an ecosystem compiling and running fine after several years.
In some ecosystems, yes, backwards compatibility is a given, but not in most. Python versions behave in much the same way as Node, with you having to make sure you're using the same Python version this time as last time in order to be able to install the same dependency versions. Java has been better in recent years, but Java 8->9 can take several hours to get working on even a small project.
Pinning versions should’ve been the default, then.
> Two hours of work after 4 years sounds ... perfectly acceptable?
Does it, though? Node wasn't exactly new 4 years ago, and plenty of other languages would offer a better experience for even older code -- Java, C, C++ to name a few.
> Java
50% of Java developers are still regularly working in Java 8 [0], which is the same solution that the author could have arrived at immediately—when starting up an old project after 4 years, use the same version you ran last time, don't try to update before you even have the thing running.
> C, C++
Not my experience, but maybe it depends on your operating system and distro? In my experience sorting through the C libs and versions that you need to install on your system to build a new project can easily take a few hours.
[0] https://www.jetbrains.com/lp/devecosystem-2023/java/
Define "better experience."
1.5 hours to get running again?
1?
In exchange for needing to run C? How many hours would it take to build a Node app equivalent in C, I wonder.
0 would be fine. I'd take 0. This could all have been avoided if the interpreter version had been recorded by default somewhere. That's all this needed.
Other ecosystems usually do not have problems to the extent the author had.
I am deep in the Python ecosystem, and I love Python, but I have to admit that Python has the same issue. Reviving a medium-size project after 4 or more years usually means I have to catch up on a lot of new surprising deprecations. That's not because there's anything wrong with Python; it's more of an economic issue: the authors of active libraries have little or no economic incentive to support old, deprecated versions, so they just don't. That's life in the modern world. It is a deep problem that should theoretically affect every large software ecosystem because very few library authors can predict the future with great accuracy, and very few open source library authors have any significant incentive to support old ideas.
> That's life in the modern world. It is a deep problem that should theoretically affect every large software ecosystem because very few library authors can predict the future with great accuracy, and very few open source library authors have any significant incentive to support old ideas.
I disagree. This is an easy problem to avoid with minimal due diligence, people just choose convenience and make unnecessary tradeoffs.
* Use the standard library (ironically not available for Node projects). It will be built with better backwards compatibility almost every time. What deprecations do occur will likely be VERY WELL documented with much quicker adaptions.
* Limit third party dependencies. Do you really need an ORM for your apps 40 sql queries? How long would it take you to scaffold it with GenerativeAI then make it production-worthy without the ORM? 1 hour? 5 hours? 20 hours?
* Pick technologies with better track records. Maybe don't use Beta software like Swift Data for your iOS App. Maybe choose golang for your API even though it'll take a little bit longer to build it.
And this is how you end up rewriting the world: spending more time rewriting dozens of existing libraries to avoid adding them as dependencies, and less time working on the problem you're actually trying to solve, because you're fixing the same dozen bugs the first person already went through the trouble of fixing for you, which you'd have avoided by simply using their library instead of eschewing it and having to learn everything they had already learned. Often the problem space is deeper than you could have known before getting into the weeds, and hopefully you don't get bitten by sunk cost, but instead do yourself a favor and just use a library rather than continuing to work on solving problems that aren't related to what you set out to do.
There's a balance to be struck between LeftPad scenarios and "Now there are 37 competing libraries".
Exactly. The right thing to do is study each dependency and decide whether the reward of having the problem solved quickly is worth the many risks of adding dependencies.
I'll acknowledge here that there seems to be a significant difference between Python projects and Node projects: in my experience, a small Python project has a handful of dependencies and maybe a dozen sub-dependencies, while a small Node project usually has a handful of dependencies and a few hundred sub-dependencies. That's where Python's "batteries included" motto does seem to help.
> There's a balance to be struck between LeftPad scenarios and "Now there are 37 competing libraries".
I think we're actually in agreement. My assertion is that for projects which want to avoid constant maintenance, particularly small projects, you can make architectural decisions some of which could significantly improve the maintenance outcome. Of course there are trade-offs to those, and if you make the wrong architectural decisions it can cause more harm than good.
Maybe I'm glib for calling it "easy", but for many leftpad scenarios it really is a "holy crap, why did you think that was ok" scenario in my experience. Let's avoid those scenarios when we can.
> How long would it take you to scaffold it with GenerativeAI then make it production-worthy without the ORM?
Having a machine do codegen to map your queries to objects is still an ORM, except now it's nondeterministic and not updateable.
(mind you, I come from C# where you simply use LINQ+EF without worry, or occasionally Dapper for smaller cases)
> Use the standard library (ironically not available for Node projects)
???
Node.js has an extensive stdlib [1].
Contrast that with Python which practically requires a 3rd party HTTP client.
Or Java that doesn't even have JSON support.
[1] https://github.com/nodejs/node/tree/main/lib
> That's not because there's anything wrong with Python
It's absolutely because there's something wrong with Python, the package management, and also the type safety. JVM languages haven't had these problems for 20+ years.
> I am deep in the Python ecosystem, and I love Python, but I have to admit that Python has the same issue.
The same problem in Python is much easier now because you can ask the uv resolver to limit itself to some earlier point in time.
You can do `uv pip install --editable . --exclude-newer=2022-01-01` and you will end up with a resolution from two years ago. Since uv can also install older python versions automatically, you can easily bisect your way to a newer point.
Will uv even be around in 4+ years? No idea.
I don't know either, but think even if it's not, whatever will replace it, will at least have to achieve feature parity.
Maybe they can try to get the node version into the package-lock tomorrow? This seems like an opportunity to improve the ecosystem, rather than a biting critique.
Or, instead of responding to sunk costs by getting sunk deeper into the muck, just cut your losses, ditch Node and its proprietary/non-standard APIs and unstable featureset, and use a standard runtime.
The author of the blog post is trying to run a static site generator. A static site generator doesn't need to be able to do anything that Node provides that can't be done with the World Wide Wruntime (which they're already going to use to verify the correctness of the SSG output). So use that runtime and tools that target it, not Node.
Which is fine until your host doesn’t support older versions of node.
I just got burned by an old js (vue 2) project. I ended up re writing it using good old ssr in Django with htmx and alpine when necessary. Now it’ll run until the end of time. It doesn’t even have a build step.
I sympathize with you, I had one too. Luckily it was small.
It seems luck of the draw. My old React projects (old as in 2018) still work great with class components. I guess the Vue guy did say he would be more revolutionary, when he launched it.
Exactly, 2 hours after 4 years is perfectly fine. And all those saying "well my c project did it without a problem!" yeah that's why the entire world is mostly using c for large modern, complex and ever-changing applications; right? Most of the C stuff people are claiming compiles fine after so long have no external dependencies and are really small, simple projects...
This whole thread just reeks of JS hate. I'm not perfectly happy with the language either, but nor am I perfectly happy with any other. If my employer wanted me to use C# or Go or whatever instead I'd be perfectly okay with that as well.
> Two hours of work after 4 years sounds ... perfectly acceptable?
It's really not. I'm in the same situation with a Go + React project I haven't really touched in 3 years. The Go part is just `go build`, it still works after 3 years and I have a build in a fraction of a second. The React project doesn't build anymore at all. I used Parcel 2 at the time, and it turns out it's incompatible with a Mac M1 for some reason, and it's hard to update in my case. I also used Antdesign and some components (icons specifically) apparently disappeared. It should not be so hard.
I feel like you've just proffered a second success story. It can get worse. I've worked on software whose "dusting off" involved swapping the entire dep tree because the remote that Bower was using no longer existed, or manually compiling pinned versions of dependencies that Homebrew no longer offered. Sometimes it took days to get everything working.
Go would be much worse than Node.js in most cases since dependencies directly reference GitHub projects which could have been deleted whereas NPM has a cache and immutability guarantee which makes old NPM modules available and unchanged.
You rely way less on dependencies when writing Go code though. That’s in big parts why it doesn’t break nearly as much (or at all).
Go also has the goproxy
it took two hours just to get the project running as it was 4 years ago. wait until you see how much time it will take to upgrade everything to new versions.
and dare i say this is the lucky case. i had problems reactivating some older project because some dependencies were not version locked and newer versions of those were incompatible with other versioned ones. then it took forever to figure out the right versions. and i think i even had situations where the old version wasn't even available anymore (perhaps because of a linux dependency that wasn't available anymore on my newer linux system)
It took 2 hours to realize that a project built for a specific version of Node should be run with that version of Node. And even that was self-inflicted, since the author didn't vet dependencies and used something built as a C++ Node addon instead of actual JS (my bet is it was to have a slightly easier time writing CSS).
And even though it builds, there’s no guarantee it’ll work properly.
Also, Gatsby has dependencies that aren’t even Node. I have had it break too.
An anecdote — I recently upgraded an old NodeJS project written using v14 to 22. It used an old version of Svelte for the UI. It wasn’t too bad, took about 90 minutes to upgrade a bunch of Svelte 3 components to Svelte 5, with some assistance from Copilot.
This was with zero awareness of Svelte 5 (I only knew it was released — I wanted to upgrade because Node 14 is EOL now and I’d noticed some npm audit warnings.) So I feel I also learned something in the process.
I would love the author to test an old Java/Maven project. Node is a paradise compared to that stack.
Why do you think so? I have 10+ years old Java/Maven projects that build and run fine.
Maybe you are fortunate, or I'm not, because in my projects I have run into missing or changed Maven repository URLs and renamed dependency packages, and it was hell to fix.
The only problems I've run into are related to certain package repos that went offline. So you have to track down suitable versions of certain packages on other repos.
OTOH with Node I always find myself in dependency hell when dealing with old projects.
In my experience this only happens if the project does not have a lock file and does not specify a Node version (two good practices that should be followed).
I'm talking about upgrading dependencies. Something that's rarely easy in a JS/TS project with lots of dependencies.
Each to their own, I don't find it acceptable - while other platforms have updates needed, it's not to the same level.
Yep, this could have been sorted by one line in a .tool-versions file and using mise or asdf.
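For instance, a one-line `.tool-versions` in the project root is enough for asdf (and mise, which reads the same file) to pick the runtime automatically; the version below is just an example of what the project might have been built against:

```
nodejs 12.22.12
```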
Exactly, you would have to compare it to other package managers to make it fair.
Npm at this point is probably the most used and most worked on package manager in history. If it is still one of the worst ones that is actually kind of interesting philosophically.
I agree. It is not weird that old code can break when you run it through a new env / VM / framework / compiler.
Locking env version is important.
Double points for using experimental / POC technology like gatsby or nextjs. They are expected to burn and fail
> Two hours of work after 4 years sounds ... perfectly acceptable?
Perfectly acceptable? Perfectly? Really? I have 10 year old C and Go projects that build and run fine as if nothing has changed. I can upgrade the dependencies if I want to, but that's on me. The projects themselves have no problem building and running fine.
Just to try it out, I took one of my old C projects written in 2010, last code update in 2012. It's a machine learning utility to train maximum entropy rankers. I was a bit worried because it used CMake (rather than a vanilla Makefile). It configured and built without any issues (and super-fast).
How many external dependencies/libraries does it use? Ie remote dependencies not stored on your machine?
How complex is the project? I know ML utilities like that can be as simple as 50-100 lines.
Lol, Go was notorious for breaking with its "pull from main"/lack of lockfile approach.
Did you read it? The author did not actually resolve the issue, only figured out that it should build with older Node version.
You’re absolutely right. My rational brain agrees and chalks it up to poor project management. However… emotions run high when you have zero idea why something isn’t working and the process of elimination is pretty taxing. So the point for me is venting / maybe someone will read this and remember to write their node version down!
> Two hours of my life gone, just to pick up where I left off.
If I had only wasted two hours every time I had to use npm for some reason I'd be significantly ahead of where I am now.
Doesn't that apply to any it-issues you don't know how to solve?
no, most of the time when a README says:
> run command xyz
I run xyz and I'm on to the next step in an average of ten or fifteen minutes (because yeah, stuff still goes wrong some times).
But if the instruction is to invoke npm, there's always some crazy side quest involved. It doesn't get any better with time because next time around it's a totally different side quest. I can't even offer any criticism towards fixing it, it just sucks for unique new reason every time.
The worst part isn't just that it's nearly impossible to run/update an outdated JS project, but that this process will repeat itself ad infinitum.
On the flip side, anything that uses vanilla JS without a build will most likely run just fine, probably till the end of human civilization.
I truly believe some flavour of "Javascript Classic" (some future state of JS before some big shift in syntax/mass migration to something else), C and x86 instructions will follow humanity for the rest of time. There will be javascript somewhere aboard the interstellar spaceships of the future, and we will still complain about it.
I think it was 'A Deepness In The Sky' that posited so many layers of legacy underlying the starship control systems of the era that one of the most crucial positions on a ship was that of 'Programmer-Archeologist.'
That sounds great. Added to my list.
I adore the idea of somewhere on the Star Trek Enterprise, underneath the isolinear chips, the EPS conduits, the warp containment field control mechanisms, somewhere, if you dig far enough, override enough, you can get an LCARS screen that is all black, with a small `$` in the upper left and a blinking text cursor, waiting for a command.
I am already a programmer archeologist, and it's one of the most rewarding parts of my job. Though my most recent project is regrettably well written and maintained. :)
im going to try and have this attitude
Well, even Fortran is still around us in some LAPACK code in NumPy and in a lot of the stuff behind SciPy, so it's a lot closer than a lot of people can imagine.
Basically a lot of AI depends on a bunch of absurdly optimized numeric libraries written in Fortran.
Fortran is well on its way to becoming a centenarian programming language, at nearly 70 years of age.
My grandchildren will live to see Vernor Vinge's programmer-archeologists troubleshooting PHP issues on the Wordpress install responsible for life-support around Alpha Centauri.
Just bobble some programmers now.
I sometimes think about one of the Star Trek episodes where the ship was getting attacked by a "SQL injection", and I think that's pretty realistic
Yeah, starting to believe the hacking scene in the Matrix where the machine city was still running on IPv4 wasn't a blunder but foresight.
There will also be someone playing Tetris, Doom and Final Fantasy VI on their neural interface, long after all modern games have been lost to time (and DRM).
JavaScript will be killed off by WebAssembly.
Zombie JavaScript will be reduced to being glue code and then not even that.
The WebAssembly dev experience is awful, and it needs to be glued together with JavaScript anyway. 99% of web code doesn't care about the performance difference, and anything GUI spends more time jumping back and forth to JavaScript than it saves.
> The Webassembly dev experience is awful
It really isn't. Wasm is just a compilation target now.
Network effects are very strong for languages.
Better query languages than SQL could exist, but there's so much existing code and expertise out there that it's not worth the effort. Better backend languages than Java can & do exist but don't have the same enterprise popularity.
Developers, projects and companies have an immense incentive to target the most popular programming language.
JavaScript's one advantage was that it was the privileged language in the browser. It has lost that now.
JavaScript has entered its Walking Dead phase. It will gradually be displaced by all languages compiling to WebAssembly.
JavaScript’s main advantage is being commonly familiar to most software engineers.
The English language similarly lost its position as the preeminent imperial language a long time ago, so too with Latin & Rome. It takes a long time for a popular language to die because everyone wants to speak what everybody else speaks.
WASM has a lot of shortcomings. You can't even update the DOM without doing a worker dance.
only after they make DOM access easier. see previous discussion here:
https://news.ycombinator.com/item?id=41955585
https://news.ycombinator.com/item?id=41955488
JavaScript has been "about to be killed off by WebAssembly" for about 5 years now.
The difference lately is the number of tools that are now in place for WebAssembly development and the new extensions to WebAssembly (WasmGC, Memory64, etc.).
Despite 28 years of effort at optimization, JavaScript is outperformed by WebAssembly. There's not much coming back from that:
https://jordaneldredge.com/blog/speeding-up-winamps-music-vi...
https://www.amazon.science/blog/how-prime-video-updates-its-...
https://web.dev/case-studies/google-sheets-wasmgc
?? JavaScript was outperformed by every language since it was introduced.
It doesn’t matter. What matters is what people put in their web pages.
But now JavaScript is no longer the privileged language in the browser.
With WebAssembly every language runs in the browser and runs better.
And what will prove that is adoption. As others have said, that has been “just around the corner” for years.
Javascript is outperformed by WebAssembly in the sense that it runs faster.
A lot of things that bring a lot of value to a lot of people are still much, much faster to build via the JS / TS ecosystem.
It absolutely makes sense that calculation-heavy workloads will be ported to WASM, but there's a lot more to building an app.
> but there's a lot more to building an app.
Like what? Visual UI designers? WebAssembly's got you covered: https://platform.uno/blog/uno-platform-studio-featuring-hot-...
Running Visual Basic in a C# application compiled to WebAssembly? Sure, why not: https://bandysc.github.io/AvaloniaVisualBasic6/
YMMV but I had a 4 year old project whose only dependencies were socketio and express and it booted right up. So stick to stable, mature projects and you're likely to be fine.
Just watch out because socketio must be version matched for client-server or you will get the most annoying errors and state inconsistencies in the world. That's a scary production update let me tell you. Version 2.3.0 still scares me to this day after trying to upgrade that without production downtime.
> it's nearly impossible to run/update an outdated JS project
You corrected yourself, but it's worth emphasizing here: a _NodeJS_ project, you mean.
Unless you're using non-standard APIs, stuff written to run in the browser generally keeps working just as well as it did before, no matter whether it was written 2 years ago or 10.
What if your project is old enough to predate the modern "just use vanilla js, it's fine"? :tableflip:
j/k - I'm slowly removing all the Zepto code I have and it's usually a relatively quick search&replace.
This nearly impossible task took the author two hours
He gave up and went back to an old version. So we don't know if it was possible or not.
Or until Google decides to change things to be more standards compliant, regardless of the collateral damage
I call this phenomenon "node rot". Judging by the comments here, it seems like a universal experience.
My favorite is the way that Python projects rot. Not only does Python's setuptools give you all the fun that node-gyp does, the common practice of versioning packages with packagename>=1.25.5 means you're almost guaranteed breakages as pip installs newer versions of packages than what the project was built with.
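A small illustration of the difference, with made-up package names: floating constraints drift forward over the years, while exact pins (e.g. from `pip freeze`) at least reproduce what the project was originally built with:

```
# requirements.txt with floating constraints: pip grabs the newest match,
# which may be years ahead of what the project was tested against
somepackage>=1.25.5
otherlib>=2.0

# requirements.txt with exact pins: stale, but reproducible
somepackage==1.25.5
otherlib==2.3.1
```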
And projects specifying "requires Python 3.X+" instead of a bounded range (version X to Y) are also a major culprit I often encounter.
Most of the time it will not work with the newest shiny Python, which I only notice after already installing it and then searching the GitHub issues.
That happens to me all the time. It helped cement my habit of binding the python version to the project with direnv and a flake.nix so I end up switching to the right version when I cd to the project dir.
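A minimal sketch of that kind of setup, assuming direnv with nix-direnv is installed; the nixpkgs pin and Python version are only examples:

```nix
# flake.nix -- pin the interpreter (and anything else the project needs)
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.python311 ];
      };
    };
}
```

With an `.envrc` containing just `use flake`, cd-ing into the project directory drops you into that shell.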
Oh, one of the worst forms of torture is definitely trying to get a random Python AI project from GitHub running locally. There's almost always a conflict between versions of Python, CUDA, PyTorch, and a hodgepodge of pip and conda packages. Publishing a requirements.txt is the bare minimum everybody usually does, but that's usually not enough to reconstruct the environment. The ecosystem should just standardize on declaratively prebuilt container environments or something.
Granted, my experience is mostly from the GPT-2 era, so I'm not sure if it's still this painful.
Don’t know if this would help your case or not, but jart’s llamafile seems like it would be useful
[6] https://github.com/Mozilla-Ocho/llamafile
I just today tried reviving an old v12.4 node project and node-gyp is trying to use python2 which I don't even have on my Macbook anymore.
That Python 2 crap hit me as well when working on an older project. It used node-sass which depended on node-gyp. Dealing with upgrading that was a pain because I had to upgrade so much more to get versions of everything that played nice together.
What really pissed me off about it was that Python 2 was already known to be nearing EOL when our project was started, so node-gyp should have been upgraded to work with Python 3 by then. And even more annoying was that node-gyp already had Node to run on, so why in the world was it coded to depend on Python at all!?
did you experience this breakage using poetry too?
This will always be an issue for the node community - it’s endemic to the JavaScript shipping / speed culture and the package management philosophy.
Go is much, much better on these terms, although not perfect.
I’d venture a guess that Perl 5 is outstanding here, although it’s been a few years since I tried to run an old Perl project. CPAN was dog slow, but other than that, everything worked first try.
I’d also bet Tcl is nearly perfect on the ‘try this 10 year old repo’ test
CPAN.pm is not the fastest, no, though it generally spends most of its time running each distribution's tests before installing it, which, while it does have a certain "start the install and go for lunch" quality to it, is an excellent canary for whether something's changed underneath you *before* you end up having to go spelunking through your own code.
App::cpanminus (cpanm) is noticeably lighter, App::cpm (cpm) does parallel builds and skips tests by default.
An approach I've become quite fond of is using cpm to install fast into the local::lib I'm actually going to use, then creating a scratch setup in /home/tmp or similar and running cpanm in that under tmux/abduco/etc. to do a second install that *does* run the tests so I have those results to refer to later but don't have to wait for them right now.
(if I ever write a cpan client of my own, it's going to have a mode where it does a cpm-like install process and then backgrounds a test running process that logs somewhere well known so this approach becomes a single command, but I keep getting distracted by other projects ;)
Clojure too, by all accounts. I'd say Common Lisp but they're in the weird position of code itself being rampantly portable across time but the ecosystem around it being astonishingly immature.
Things have improved a lot with the introduction of Quicklisp, but I'd have to agree when compared to others.
CL is still one of the nicest languages there is, and the only language that skirts the line between being some combination of dynamic and interpreted yet typed and compiled.
It is showing its age though, particularly around the edges like what you're saying.
Quicklisp is a start, and I like that it's a Debian-style distribution because it can at least move towards making the assertion that everything in the distribution is mutually compatible. The problem is that the underlying ecosystem has no culture of saying "version X of library A is only compatible with versions Y+ of library B". You can put that info in the asdf system definition but I don't see many examples of it.
The other problem is rate of updates, and that's a symptom of it basically being on one person's shoulders to keep it ticking over. I can't readily think of another major language ecosystem with that characteristic. It just seems really fragmented.
Quicklisp is merely a registry for getting libraries, not shipping code. Download what you want and use asdf.
Yes, that's what makes it immature. There doesn't seem to be anything in the ecosystem to handle version resolution. Qlot is a start towards that in that you can at least specify alternatives to get yourself out of a hole, but you still ended up in the hole in the first place.
I think it’s a good thing. I don’t want npm for Common Lisp.
Which bit is it that you object to?
Edit to add: this wasn't intended as a gotcha question, so apologies if it came across as one. I have issues with a lot of details about how npm works and the ecosystem it supports. I think it's possible to avoid them, and aim for something more like a bundler or a cargo, but again there are issues there (certainly in the former's case, I have less experience of the latter). Getting to a good answer that works for CL means understanding both the problem space and the known solutions elsewhere.
It might be that "a better quicklisp" is enough?
Sure, I didn’t take that negatively.
Let me start with facts:
- npm actually downloads multiple copies of each library, when needed to satisfy conflicting version requirements.
- This is only possible due to runtime features of JavaScript. In most languages like C this causes symbol collisions.
- I think this is a problem in Common Lisp too due to packages being global. Maybe there is a fancy way to rebuild packages at load time.
- This is why the Debian-style release makes sense: either everything loads together, or not.
Opinions:
- I want to know all my dependencies. I treat them as my own source, so a downloaded tar is close to my mental model.
- For C projects I usually have a Makefile with curl commands tied to exact URLs. If I want to update, I manually change the URL.
- Quicklisp already has a nice way to make an isolated folder containing just your code and its dependencies, to be loaded with asdf. It gets out of the way once you have downloaded your libraries.
Yes, that "feature" of npm isn't something you'll find elsewhere. It's not inherent to the problem of version resolution. Just about anywhere else you'll just get an error message that the version resolution isn't possible, if there's no available combination to satisfy all the requirements.
That one design attribute of npm probably more any other feels like they did it because they could, not because it was a particularly good idea.
Go’s minimum version selection is the way and I don’t understand why other ecosystems haven’t adopted it. You’re able to compile an old project with all the library dependencies it had at the time it was released. It might have security issues, but at least you start with a version that works and then can go about upgrading it.
It also helps that if some library dependency generated Go code using a tool, the Go source code is checked in and you don’t have to run their tool.
Getting the exact dependencies it had at release is a solved problem in Node and most other languages with lock files too.
It's just no guarantee that those old versions work on the new system, or with the outside world as it exists by time of installation - which can be as true for Go as any other language. If the XYZ service API client still gets you version 1.2.37, that's not actually any help if 1.2.37 calls endpoints that the XYZ service has removed. Or a cgo package that binds to a version of OpenSSL that is no longer installed on your system, etc.
This is why I say it's a cultural problem, not a technical problem. In Go land, changing API calls in minor versions is pretty much a sin. At least it's something you'd do .. carefully, probably with apologies. In Node, it's extremely routine to re-pin to newer modules without worry.
Some time ago, I wanted to update Arch, on a server running some python project I had inherited. Long story short, it relied on something that relied on something that etc., and then it turned out certain components that were needed for the upgrade process had been taken offline. Now the system can’t be changed, unless there’s significant work done to the code, and that’s too expensive. It runs on request in a container now, while it lasts.
back in the day you were supposed to check in your compiler into version control (not the lockfile, the whole distribution).
I used to think that people emailing screenshots of corporate dashboards were idiots. I now think that's actually genius - a frozen in time view which you can't regenerate but will be available until the end of time if you need it. (Hello, Exchange admins!)
My hot take is that lock files and nested dependencies induce fragility. If packages were required to work with wide range of dependencies then that would force the ecosystem to build the packages in more robust way. Basically I think the dependency trees built with modern package managers in a sense over-constrain the environment, making it all sorts of difficult to work with.
On the other hand, the other extreme induces stuff like autoconf which is not that great either. Trying to have your code be compatible with absolutely everything is probably not good, although arguably platforms these days are generally much more stable and consistent than they were in the heydays of autoconf.
I truly think it's just because the engineers that started working with node were ... young. They wanted to rapidly iterate, and so crufty old habits like this weren't what they wanted or felt they needed.
What's been interesting is watching these devs age 10 years, and still mostly decide it's better to start new frameworks rather than treat legacy code as an asset. That feels to me like a generational shift. And I'm not shaking my cane and saying they're wrong -- a modern LLM can parse an API document and get you 95% of the way to your goal most of the time pretty quickly -- but I propose it's truly a cultural difference, and I suspect it won't wash out as people age, just create different benefits and costs.
I've had a fair amount of trouble with Perl/cpan simply because of the sheer number of XS (compiled C extension) modules in the ecosystem. For even a medium sized perl project that e.g. talks to databases or whatnot, building it after a long time requires you to spend tedious hours getting the right development headers/libraries for the compiled components, fussing with compiler flags, dealing with C ABI symbols that were deprecated in the interim, etc.
To be fair, Python and Ruby also have this problem (for newer Pythons, popular extension modules at recent versions are more likely to Just Work due to wheels, but if you're building old code for the first time in 3+ years, all the old problems come back with a vengeance). It's more of a "scripting language that got popular enough that ordinary projects have a deep tree of transitives, many of which are compiled on-site" issue than a Perl specific problem.
You're talking about what's wrong with the NPM ecosystem, not JS.
Previously: You wouldn't conflate Windows development with "C" (and completely discount UNIX along the way) just because of Win32. <https://news.ycombinator.com/item?id=41899671>
Yeah I'd expect 20yo Perl5 stuff to work without issues.
A few weeks ago I was experimenting with a sound generation dsl/runtime called Csound and even most 30yo sources were working as long as they didn't use some obsolete UI.
It’s the same with R. The only thing preventing many ancient packages from running under new versions of R and vice-versa is the fact that the package author simply set the minimum version to whatever they happened to be using at the time.
I recently migrated a project from Node.js 8 (!) to Node.js 14 (hopefully just the beginning), and I can relate to this post.
In the JS ecosystem, I'm aware that Meteor is one major framework that takes backwards-compatibility seriously. Updating a project on an ancient version to a less-ancient version usually is not too hard. They try to keep APIs the same and introduce compatibility packages where possible.
Meteor 2.16 to Meteor 3 introduced major breaking changes due to an underlying technical issue that had no workaround. They had to refactor the whole project from using Fibers-based concurrency to typical async/await.
node-gyp in general has also been a source of issues in the past for me as well.
More recently, ESLint changed their configuration file format and all existing tutorials suddenly became outdated.
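For context, the change was from the old `.eslintrc.*` files to the "flat config" in `eslint.config.js` (the default since ESLint 9); a minimal sketch, with arbitrary example rules:

```js
// eslint.config.js -- ESLint's flat config format
export default [
  {
    files: ["**/*.js"],
    rules: {
      "no-unused-vars": "warn",
      "no-undef": "error",
    },
  },
];
```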
I firmly believe the ecosystem does not have to be like this, and we would save a lot of man-hours by being more committed to API stability where possible.
> Meteor 2.16 to Meteor 3 introduced major breaking changes
That's when I picked up the Node/React ecosystem...
> node-gyp errors
> downgrade to 12.2
This is what I did until Vercel decided to not support Node 12 anymore...
After Node 10, things started to stabilize. These days a Node upgrade is a tiny commit just changing the version number in the Dockerfile and package.json.
I've actually had a node project go bad in a mere 4 months. It must be a new record. That was about 4-5 years ago though.
Hopefully the ecosystem has improved since then, but at the time it was nearly impossible to get going.
Some packages had been changed and the version number overwritten with incompatible packages, and the conflicts were plenty.
One of the things I'm intrigued by is that JS people, and the other couple of ecosystems where this is a big problem, go out to learn another language (as a good T-shaped developer does), and then start posting frantic questions to the new language's communities about how this popular library hasn't had a commit in six weeks, is it dead, oh my gosh wtf aaaaaaaaaaa.
It's OK. Not every language ecosystem is so busted that you can reliably expect a project not to work if someone isn't staring at it weekly and building it over and over again just in case. Now, it's always a risk, sure, no language anywhere is immune to the issue [1], but there's plenty of languages where you can encounter things from 5 years ago and your default presumption is that it's probably still working as well now as it did then. It may be wrong, but it's an OK default presumption.
[1]: Well... no language in common use anyhow. There's some really fringe stuff that uses what is basically content-based references for code dependencies, but I'm not aware of anything that I'd call "production quality" that even remotely looks like that, and is immune to someone just plain making an error with the semantic versioning or whatever.
> frantic questions to the new language's communities about how this popular library hasn't had a commit in six weeks
Lol, my perspective is almost the opposite. If it's got a lot of commits in the last six weeks, I need to look for something that's stable. Unless there's a good reason for so many commits; I feel like that many commits means it's in active development, which implies the requirements and interfaces aren't yet determined and who wants to rely on that?
These JS developers would probably shiver at seeing many Common Lisp repos with a last commit like 12 years ago and still working like a charm.
I’m curious, how do you measure the pulse of a project that old? Do people still talk about it? Or is that not even necessary — use it until it breaks and otherwise don’t think about it?
Why do you want your building materials to have a pulse?
Ideally, in adopting dependencies, you should be looking for a mature utility whose design was clear and implementation is complete.
If it's open source, you should be able to read and understand the code yourself, and you should make an earnest effort to do so, in case it has faults you wouldn't usually allow in your own code and in case you need to fork it at some point.
This lets you build well-designed, stable, maintainable, clear things yourself.
The alternative, building your project on a random collection of "living" projects undergoing active development, is how you banish yourself to perpetual maintenance, build failures and CVE warnings that have nothing to do with your work, surprise regressions when you update your referenced version (you are, at least, pinning your versions??), etc.
Something like a HTTP 1.1 client is something you might expect would be a pretty stable thing that doesn't need too many updates, right?
But I would not assume that a HTTP client that has been untouched in 12 years supports SNI, for example, which means that actually it might be totally useless for a lot of modern sites (certainly Android did not support SNI 12 years ago).
You're going to put it behind nginx anyways, right? So why does it even matter?
Client, not server.
If it has an issue tracker, you can look in there for things that look like real issues and are unaddressed.
If there's no issue tracker, you can YOLO and try it and see if it works, or you can look around at the code and see if it looks reasonable.
Even if there are unaddressed issues, you can always use it and fix it when it breaks. If it's reasonable enough, it's a good start anyway. And at least my assumption with open source is I'm going to be fixing it when it breaks, so lack of a pulse is better than churn.
Maybe "pulse" could be transitive? Like, if a project doesn't have many recent commits, but many projects using it have recent commits.
Node is bad but the worst I have seen is Android
How about node on android?
Delete this comment right now, don't give them ideas.
Too late, we already have react native
And Nativescript.
It's a double whammy
I would expect most Java projects from 20 years ago to compile and run with zero issues.
Absolutely not. Not on the client side anyway.
I know of one application by a large multinational that requires java in the browser to run. Almost impossible to run now because of security restrictions.
well java on the desktop and java in the browser are two entirely different beasts. the problem here is not java but the changes that have been made in the browser.
We do have some very old "client apps" whose sources are likely all lost; they are a single JAR each, date from around 2003-2004, and are written in Swing. They still work.
Of course when they stop working they will be phased out, but we have been expecting their death for years now and not happening yet.
The ecosystem has not improved since then.
the problem was your research before using the packages, then. So much bad engineering and architectural planning is blamed on the tools, not the humans using them.
Wow, that’s honestly impressive.
If there was an option to guarantee versions could exist for X amount of years (maybe even months?) then that would greatly help the stability of projects.
I would heavily recommend avoiding NodeJS packages that depend on node-gyp. Node-gyp powered dependencies are very seldom worth the hassle.
If you must depend on node-gyp, perhaps use dev containers so at least every developer in your team can work most of the time.
I don't even know what node-gyp is, but it appears regularly enough in error messages that I know it causes problems.
I don't even develop against Node, it has just crept into our front-end build toolchain.
It's the JS equivalent of allowing native bindings (like JNI in Java).
So I'm pretty uninformed about the guts of node-gyp and why it's used, but if people need to bring in dependencies from outside JavaScript... could WASM be a good fit there? They could store the binaries instead and ship those... and in theory (correct me if I'm wrong) that shouldn't be much of a security issue due to the security model of WASM modules... or at least equal to the risk of running arbitrary build commands on your machine from a random Node package.
In principle, yes. In practice, the problem is that getting some random native library or tool compile with wasm as a target is not always easy. E.g. anything that relied on pthreads was out until fairly recently.
In practice you're just kinda stuck with it because whatever NPM package you're using is using it under the hood. One of my projects depends on it because of Postgres DB bindings; there would be no easy way for me to get rid of it without either finding another binding (and that is the official one) or rebuilding it myself, which would just take too much time and effort for what it's worth.
Pardon my ignorance, but wouldn’t that rule out most image processing packages that depend on (and often build during install) imagemagick as the backend? A long time ago I tried to avoid it in a project but really couldn’t find any decent node image processing package that wouldn’t at some point depend on it. Maybe I just didn’t look far enough?
two of the most crucial packages that use node-gyp are bcrypt and argon2. Both are needed heavily for password hashing when implementing authentication, and while pure JS alternatives are available, they run terribly
Use bcryptjs https://www.npmjs.com/package/bcryptjs
i did mention "and while pure js alternatives are available, they run terribly"
Slow is much faster than it not working at all. If this is a project that you might not touch for months or years, perhaps having fast bcrypt is not that important.
That would be a good argument to not implement authentication again and go with a solid authentication and authorisation software like Keycloak, Zitadel, or Ory Kratos.
if only integrating keycloak was simple eh?
If you are dealing with argon2 and bcrypt, I think you could manage some JWT hell.
You know, I ran into something similar recently with a static site engine (Zola). Was moving to a new host and figured I'd just copy and run the binary, only to have it fail due to linking OpenSSL. I had customized the internals years ago and stupidly never committed it anywhere, and attempting to build it fresh ran into issues with yanked crates.
Since it's just a binary though, I wound up grabbing the OpenSSL from the old box and patching the binary to just point to that instead. Thing runs fine after that.
This is all, of course, still totally stupid - but I did find myself thinking how much worse comparable events in JS have been for me over the years. What would have been easily an entire afternoon ended up taking 15 minutes - and a chunk of that was just double checking commands I'd long forgotten.
That is nothing... Try building your Android project after leaving it idle for a week. Or better yet, try building your React Native project you left for 2 days.
OMG I feel this in my soul. Try looking at one of the gradle files wrong in a kotlin multiplatform app with shared ui.
Hold on, you had to do binary surgery using an OpenSSL version from an old box you had? I salute the dedication.
Looks to me like he just copied a shared library and changed the search path.
Also, if it was statically linked, he wouldn't have that one problem. (Could have others, but not that one.)
How exactly does one do that. Sounds exciting!
Not the OP, but what sometimes works is as easy as running `ldd your-binary` on the old host, copying everything it references into ./foo, and then starting it like so on the new host: `LD_LIBRARY_PATH=./foo ./your-binary`. (may include typos, from memory)
A great tool for this used to be https://github.com/intoli/exodus - not sure if it still works.
Disclaimer: Also please don't do this with network-facing services, security applies, etc.pp. but it's a good trick to know.
I used it for something early this year, it was working then.
A sewing needle, a spare magnet, and a very steady hand.
I think you copy the library file and add it to you load path
patchelf
but don't forget to make sure your new path is fewer characters than the original one so you don't overwrite any of the library
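For the curious, a hedged sketch of the patchelf route; the binary and library names are placeholders, not the ones from the comments above:

```sh
# copy the old shared library next to the binary, then point the binary at it
mkdir -p ./oldlibs && cp /mnt/oldbox/usr/lib/libssl.so.1.0.0 ./oldlibs/
patchelf --set-rpath '$ORIGIN/oldlibs' ./your-binary        # search ./oldlibs first
# or swap out a single NEEDED entry for a different soname:
patchelf --replace-needed libssl.so.3 libssl.so.1.0.0 ./your-binary
```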
What’s the issue with yanked crates? It should still build from your lockfile, even if it contains yanked crates.
Assuming you actually committed the lockfile...
Never underestimate the potential of past-you to have accidentally missed a tiny but essential step in a way that won't have made a noticeable difference at the time, yeah.
I think the other foot-gun here is that the official recommendation is that library crates not check-in their lockfile (I assume because that prevents downstream crates from freely taking updates to common dependencies).
This has the unfortunate side-effect that while downstream apps might still build, the library itself (and any examples in the library repo) may not compile after some time...
This is why Nix (with flakes), in a git repository, will refuse to use a lockfile that isn't being tracked by git.
Nix also breaks if you try to update it, and you'll eventually have to update for security issues.
I'm stuck with a defunct Nix project I can't update, because crane and fenix flakes made breaking changes, and nix is giving me incomprehensible errors. I've spent enough time googling the errors that I'd be quicker to start over with a nix-less VM.
Yes, that's an excellent idea, and will free up time to deal with the *other* stupid mistakes past-me made :D
All of the problems here ultimately came down to packages that used the native Node API. You don't need Python or C++ to run JavaScript.
Node is an active project. If you build against the native API and don't pin your version to avoid breaking changes between versions, this is what happens. In my experience, JS very rarely breaks between major Node versions, but almost every native package requires a new major update.
This isn't a Node specific problem. Go ahead and upgrade your Go or Python version.
I always try to remember to put the Node version in my package.json - but I do agree that the dependency chain on node-gyp has been a blight on Node packages for a while. I really wonder how that wart became such a critical tool used by so many packages.
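For what it's worth, a minimal sketch of that package.json field (the range is only an example); pairing it with `engine-strict=true` in the project's .npmrc turns a mismatch from a warning into a hard error at install time:

```json
{
  "engines": {
    "node": ">=12.2 <13"
  }
}
```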
node-gyp is a huge source of these issues for Node projects, especially older ones.
For those reading this who don't know much about node - node-gyp is how you pull in native code libraries to Node projects, typically for performance reasons. You get the same sorts of build issues with it that you can get whenever you start having binary, or source, dependencies, and you need the entire toolchain to be "Just Right(tm)".
I run into this issue with older Node projects on ARM Mac machines (Still!), but I run into similar issues with Python projects as well. Heck some days I still find older versions of native libraries that don't have working ARM builds for MacOS!
Node used to have a lot more native modules; in newer code you typically don't see as much of that, and accordingly this is much less of an issue nowadays.
> I always try to remember to put the node version in my package.json
This 100x over!
> For those reading this who don't know much about node
I would prefer to remain blissfully ignorant, thank you!
IMHO TypeScript is the best mainstream language to write code in right now. It is incredibly expressive and feature rich, and you can model in almost any paradigm you like. The ecosystem around it allows you to choose whatever blend of runtime vs compile time type safety you prefer. Lots of people just runtime type check at their endpoint boundaries, and use compile time for everything internal to a service, but again, the choice is yours.
The Node+Express backend ecosystem is also incredibly powerful. Node is light weight, the most naïve code can handle a thousand RPS on the cheapest of machines, and you can get an entire server up and running with CORS+Auth+JSON endpoints in just 5 or 6 lines of code, and none of that code has any DI magic or XML configuration files.
JS/TS is horrible for numeric stuff, but it is great for everything else.
Why did you click on "The tragedy of running an old Node project" then
>Really wonder how that wart became such a critical tool used by so many packages.
The original dream for Node was that it would simply be a glue wrapper around libuv that allowed for easy packaging/sharing of modules written in C++. But everyone just started writing everything in JS, and the ecosystem ended up as a mish-mash of native/non-native. Ryan Dahl stated this was indeed his biggest mistake/regret with Node, thus we have Deno now.
> But everyone just started writing everything in JS, and the ecosystem ended up as a mish-mash of native/non-native.
Because the native written stuff breaks all the darn time and it creates cross-plat nightmares.
My stress levels are inversely proportional to how many native packages I have to try to get building within a project, be that project in Python, Java, or JS.
JS+Node runs on everything. Prepackaged C++ libraries always seem to be missing at least one target platform that I need!
The CPAN 'Alien' infrastructure is great for this, you have pseudo-modules that you can depend on that use vendor packages if available and build the damn thing for you if not.
It's considered ... rude ... in most cases to write a module that needs to build against a native library without also having an Alien dist to handle making sure said library is there by the time it's trying to build against it.
Opinions on perl as a *language* ... vary, let us say ... but I wish people who didn't like writing perl would at least look at how our infrastructure works and steal more of the good parts to make dealing with their preferred language less painful.
Seamless native builds are quite doable, but the tooling needs to be very deliberately designed around that. For a good example of how far this can be taken, consider https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
I'm a huge Zig fan! Thank you for making native programming fun again! Zig is the exception to native build systems being painful.
But even a great build system doesn't help when old native libraries don't support newer hardware or OSs. At some point the high level -> native abstractions break and then builds break. :(
What is the Deno solution though? (I assume it's not sharing modules written in C++?)
Deno's solution is coming out years later when JS is fast enough that there is no need to involve C++ for most applications.
Good old node-gyp. I have absolutely no idea what it even is but it has been giving me errors for what feels like a decade. Mostly via front end build stuff from various projects I have worked on
Same. One day I'll find out what it is.
if you want to know, it's a fork of Google's GYP, which is a C/C++ project/build-system generator. I.e. it's a bit similar to CMake, a tool to describe native code projects and what needs to be built in order to make executables and DLLs.
It's a Python codebase, largely abandoned by Google. They used to use it for building Chrome.
i forgot all about node-gyp. The only memories I do have of it are the errors and thinking about gimps.
I'm pretty sanguine about languages and frameworks, but I draw the line at node. I have seen so many horrors visited by dependencies, often to do just one thing where 2 or 3 lines of code would do the job anyway.
When I was managing teams, whatever the language, I would ban any new dependencies which I didn't personally agree with. A lack of control just creates a nightmare.
Was that kind of control well-received by your teams? Out of context, it sounds like it would be pretty rough to be an engineer on a team where your manager had sole control over what tools you could use - I suppose it might make sense for junior devs or a very small codebase, but I would caution against taking that stance in a team where you want to facilitate mutual trust
Provided the manager only rarely exercises the power, and is open to being persuaded not to, having somebody able to veto risky dependencies can be really quite useful.
Normally when I'm the one with that power, we rapidly get to a general understanding of what's small enough that I (a) probably won't care about and (b) will take responsibility for tweaking the schedule to make time to get rid of it if I do.
And 'big' dependencies are generally best discussed amongst the entire team until consensus is reached before introducing one anyway.
Well back then there were fewer options, but the result was that completed products were easy to work with. Perhaps we live in different times.
Not defending node here (I had OP's experience almost verbatim), but I had a much worse experience with trying to compile PaulStretch (a C++ project). The dependencies were specified as a bunch of random third party URLs, half of which had gone offline. I ended up giving up after a few hours, and then finding a fork that Just Works.
This is one reason why whenever I build a new project, I build it inside of a Docker container.
That way, the project has just the dependencies it needs, and I know I can rebuild it at some point in the future and will be unlikely to run into problems when I do.
This works as long as your Dockerfile is reasonably reproducible and does its best to lock dependencies. However, this approach has failed me a couple of times in the past. For example, I rebuilt a container some weeks later, in the meantime a new version of clang had been released that just so happened to break my build due to a bug.
I personally use Nix these days, but the complexity is too high for me to recommend it to everyone for every software project.
Yeah, Nix pretty much solves this problem. The other day I wanted to try a really old version of spaCy for fun/historic interest. spaCy 1.8.2 installed freshly from the binary cache on NixOS-unstable as if it was still April 2017.
My first step now when trying to resurrect old projects is to create a Docker container for it - that way I can install any old versions of anything (like node or PHP) I need without having to worry about it polluting anything else on the system.
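A minimal sketch of that kind of container for an old Node project; the image tag and commands are assumptions about what the project needs, not a recipe:

```dockerfile
# pin the runtime the project was last known to work with
FROM node:12.22-buster
WORKDIR /app
COPY package*.json ./
RUN npm ci              # install exactly what the lockfile recorded
COPY . .
CMD ["npm", "run", "build"]
```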
This is literally every "hot new thing" since 2000.
It is systemic. Part of it is due to too many people creating systems on the fly with too little forethought, but also because there aren't enough "really smart people" working on long term solutions. Just hacks, done by hacks. What did you expect when the people writing the systems don't have long term experience?
The problem itself is old, but the extent to which it pervades different ecosystems varies. It's largely a cultural thing, and the problem with JS/Node ecosystem specifically is that most of the community (or, perhaps, rather most of the library/framework authors) accepts this kind of thing as normal.
I joined a node project that was stuck on 0.12 while 7.0 was being developed. It was a shit show to get us to 6. As I recall, 10 was a little tricky, 12 and 16 had a lot of head scratchers. I finished the 16 upgrade more than a year after the last person tried, and it was a dumb luck epiphany that kept it that short.
I had a similar experience with emberJS when it was still young. Every time I picked the project up I had one to two hours of upgrade work to get it to run again, and I just had a couple hours to work on it. So half my time went to maintenance and it wasn’t sustainable.
I’m trying a related idea now in elixir and José may be a saint. Though I fear a Java 5 moment in their future, where the levee breaks and a flood of changes come at once.
I’ve started to adopt Nix devShells to help keep a record of each project’s dependencies.
If Nix is too heavy, the learning curve for tools like asdf-vm and mise is much lower and offers similar benefits.
I really wish there was a good equivalent for Windows.
I had this exact problem with multiple Node blog engines in the past. Constant version breakage was incredibly frustrating. I eventually moved to Hugo. A single binary which I committed with the blog files. Zero issues even years later. I can build the blog on any new machine within seconds. Which was the other revelation of Hugo. 10 seconds to build an 800+ post blog vs minutes using Hexo or similar.
We run Node code that's 10 years old. No one dares to touch it; we just run it in Docker and hope nothing goes wrong.
First thing I would have done is upgrade the version of Gatsby to latest. Did the author try that?
If upgrading is difficult because of 4 years of breaking changes, blame Gatsby for not being backwards compatible. Also blame your original choice of going with a hokey framework.
Speaking of hokey framework: 167 dependencies and 3000 versions of Gatsby in npm.
The first thing I would have done is check the version of Node.
That's a quick fix; upgrading a framework is a guaranteed minimum of an hour of poking around before the system is even running.
I dunno why `engines` isn't in every `package.json` file, would certainly have saved me hours of nonsense.
blaming anything or anyone gets you exactly zero seconds closer to getting the job done.
You could perhaps reframe "blame" as identifying the source of the problem, and understand why it can be a useful exercise (also, none of us here are trying to solve the problem really, just wasting time on the internet). In this case Node and its attendant ecosystem are certainly a part of the problem, but I would agree that Gatsby is a bigger part of the issue, as they don't seem to have any interest in taming the Node dependency management beast. I've had to dig into Gatsby projects mere months old and it really was like opening a can of worms.
> In this case Node and it's atendant ecosystem are certainly a part of the problem but I would agree that Gatsby is a bigger part of the issue
I disagree completely. Regardless of what you think of Gatsby, Node versioning is a simple problem that affects nearly every javascript project. It should always be the first thing you check.
I made this dumb, obvious mistake again just last week; I'm looking for a little time to audit all my `package.json` files for `engines`.
It's not just personal blogs, as many of y'all know. This is a daily struggle for any SDE mainly working in web. I am constantly lamenting the fact that I spend like 70% of my time at my main job (large financial company) just trying to get my environment, or the application's environment, in a position to actually develop on the application itself. It's insane. I f'n love JavaScript... it's allowed a lot of us a doorway into software engineering, where a lot of us realize how... 'special' JS and related web development are ;) .. But man, it can make you really want to smash the computer some days.
I think many issues come down to bad versioning or, in some edge cases, to not vendoring dependencies. I've had good and bad experiences in multiple programming languages; some bad examples:
- bumping a patch or minor version of a React package, only to find the maintainer rewrote the entire project and broke a lot of things; following semver, it's reasonable to expect things not to break like that for such a version;
- another example: a Ruby gem is removed/yanked from rubygems.org and you have to find an available fork.
In the end, we need to follow the good practices from software engineering around tests and release management; the latter, by the way, is decades old.
node-sass is to blame for like 95% of these node-gyp issues in my experience, it's not that much grief to deal with but it's hard to grasp how it was allowed to hang around so terribly for so long
> it's hard to grasp how it was allowed to hang around so terribly for so long
It's not hard. The explanation is simply that the dart version of SASS (the sass npm package) is much slower than node-sass.
At one point it was like 20x slower and I believe right now it's about 2-3x slower.
https://github.com/sass/dart-sass/issues/1534
Made a site using nanogen ( https://github.com/doug2k1/nanogen ) about 7 years ago... Anytime I set up a new machine I do an npm install on whatever version of Node I end up on, and it..... just works.
Best SSG I've found, and all from a Medium or dev.to article on an SSG in 40 lines or less.
node-gyp was a mistake, building of native addons should have been an explicit separate step all along.
i could not tell from the article whether this was a site with a backend using node.js or if it was just a frontend depending on node.js for the build tools.
for the latter i get around the problem by avoiding build tools altogether. i use a frontend framework that i can load directly into the browser, and use without needing any tools to manage dependencies. the benefit from that is that it will ensure that my site will keep running for years to come, even if i leave it dormant for some time. the downside is that it is probably less optimized. but for smaller sites that aren't under continuous maintenance this is a reasonable tradeoff. i built all my recent sites that way using a prebuilt version of the aurelia framework.
incidentally just today i tried to research if i could build a site with svelte that way. well, it turns out that although it should theoretically be possible, i was unable to find a prebuilt version to do so after a few hours of searching. for vuejs i found one within minutes. i'll be learning vuejs now.
see this thread for a discussion on going buildless: https://news.ycombinator.com/item?id=41479365
I've been experimenting recently, with quite some success, with having a 'libs.js' file that pulls in and re-exports everything external I want, and providing a script that applies 'bun build' to just that.
I haven't yet decided if/how I want to include a prebuilt version of it in the repo. I *think* I may go with the approach of having a commit that modifies libs.js and/or the lockfile and then an immediately following one that commits an updated prebuild ... oh, huh, actually, I should probably also consider doing those two commits on a branch, then forcing a merge commit so they land on master atomically but it's easy to tease out the human changes and the regen changes by poking inside said merge commit ... yeah, like I say, still thinking about exactly how to do this, don't mind me.
Also for even simpler cases I've been using the preact-htm prebuild directly, since htm gives a lit-style html() tagged literal consuming function that can produce vnodes for preact so I can mess around without needing something that understands jsx between my editor and my browser window.
vue's component system is IIRC noticeably less nice to work with if you don't have a compile step, but it's still pretty nice even without that so please don't think I'm trying to dissuade you here :)
note that my websites/apps are completely free of anything to build. the source that is used for development which goes into the repo is what is being loaded into the browser without change.
saving a prebuilt version of code that needs building is of course also helpful, and much better than having to rely on keeping your build tools working. but when you want to make changes to the site you have to either deal with the build tools anyways or work with the prebuilt version which may not be as practical.
either way i would simply save the prebuilt version to a branch and, if any changes are made in that branch, cherry-pick them over to the dev branch if they can even be used, which i am not sure about. i'd probably rather avoid making changes to the prebuilt code in the first place
how does lack of a compile step affect the code? are there things i won't be able to do if i don't compile? i haven't started yet, and my website is not very complex so i think i'll manage either way, but i am curious. can you link to an example?
I understood what you were doing, yes.
However, you said you're using a prebuilt version of the aurelia framework - and I'm only using a prebuilt version of the frameworks I use.
Which seems pretty equivalent to me, except that your framework prebuild is created by somebody else running a script and my framework prebuild is created by me running a script. Either way the result is a single framework file to load that gets treated as basically a black box from then on.
My actual application sources are being direct loaded unchanged just like yours are.
The difference is basically syntax - vue recommends (and most example and/or real world vue apps I've seen use) their Single File Component syntax, and *that* requires a build step - see here for the difference in that and the build-free definition syntax: https://vuejs.org/guide/essentials/component-basics.html#def...
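(Roughly the difference: a build-free component is just an options object with a template string, registered on the app. The names below follow the linked docs example, and this assumes a Vue build that includes the runtime template compiler.)

    // build-free definition: plain object + template string, compiled in the
    // browser; the .vue Single File Component version of this needs a build step
    const ButtonCounter = {
      data() {
        return { count: 0 }
      },
      template: `
        <button @click="count++">
          You clicked me {{ count }} times.
        </button>`
    }

    const app = Vue.createApp({})
    app.component('ButtonCounter', ButtonCounter)
    app.mount('#app')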
As I hope I managed to make clear, not a big deal at all in the grander scheme of things, just something to be aware of.
"your framework prebuild is created by somebody else running a script and my framework prebuild is created by me running a script."
you are right, i didn't read it that way, but i get it now.
i forgot about that aspect of vue, but if i read that correctly the joke is on them because the single file component syntax is one reason i initially rejected vue. i felt that it would make editing harder because editors would need special support to handle the file format. although looking at it now, it doesn't look so bad. it's just html with inline javascript.
anyways it looks like instead of inline templates i can also reference external templates and that would let me structure the code the way i want.
you are right, it's not a big deal in the grand scheme of things, and even from someone being prejudiced against the component syntax, it is a tiny issue compared to all the other good or bad choices a framework can make.
thank you for the link.
Yeah, I can see how you read my initial comment, and given the combination of "I could probably have been clearer" and "my approach does not seem to be one people take very often" it makes sense for you to've read it how you did, hence me turning the verbosity up to 11 for my second attempt at explaining it :D
Given the javascript is still inside a <script> tag I would presume any editor that can handle a normal HTML page with some inline javascript wouldn't notice the difference, yeah. Hadn't honestly thought of that, since one of the first things I do in any editor is turn all the file format handling stuff off because I'm a curmudgeon who thinks in https://github.com/n-t-roff/heirloom-ex-vi
Yes re external templates; my usual approach to 'keeping the code and template close together' is to have them open in adjacent 80x24 xterms.
You might find mobx of interest - I tend to use that for state modeling no matter the framework doing the rendering - everybody seems to be getting very excited about 'reactive signals' these days and ... they basically all have their own implementation that feels, to me, like Yet Another NIH Of A Tiny Subset Of MobX ... except invariably missing at least one feature that I really wanted.
Which is how I ended up with my own libs.js on a current project, the sum total of which is
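(The snippet itself didn't survive into this thread; as a hypothetical reconstruction based on the libraries mentioned above, it's something along the lines of:)

    // libs.js - re-export everything external in one place, then point
    // `bun build` at just this file (hypothetical contents)
    export * from 'mobx'
    export { html, render, Component } from 'htm/preact'

    // built with something like:
    //   bun build ./src/libs.js --outfile ./bundle/libs.js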
and then on the fairly sporadic occasions that I need to adjust the exports, I have a shell script that does a tiny bit of bookkeeping and then runs the bun build command, so I run that and then go back to forgetting that the build process (and the node_modules directory it's sourcing those libraries from) exists. I also have about 20 lines' worth of custom dev server that will serve the bundle/ file preferentially over the src/ file if it exists, plus a couple of other minor things.
But this is, for me, all about keeping things as simple as possible barring some slight effort towards ergonomics, plus knowing that I understand every part of what's going on so I don't end up stuck in a "one of my abstractions is leaking and I've no -ing idea how or why" type situation (hence also my very minimalist choice of editor; any time I try a more clever one I fairly rapidly end up in a situation where the tab completion does the wrong thing so I just type everything out anyway, or where the syntax highlighting produces colours that give me a headache, or ... just colour me a curmudgeon who learned his chops on ancient BSDi and Solaris systems. I don't expect anybody else to want to use my dev environment, but it works for me).
Anyway. None of this is to try and convince you of anything much, I just thought you might find my setup vaguely interesting. I'll stop waffling now :D
> Given the javascript is still inside a <script> tag I would presume any editor that can handle a normal HTML page with some inline javascript wouldn't notice the difference
i agree. it's been a while since i looked at vuejs, and i don't know why i came away with a different impression before. must have not looked closely enough.
i haven't seen mobx before, but when i read the description i wonder why i need it. my preferred framework is aurelia and as far as i can tell aurelia already does what mobx claims to solve, in particular this part (from the mobx github page):
Trying to update a record field? Simply use a normal JavaScript assignment — the reactivity system will detect all your changes and propagate them out to where they are being used
it is actually the primary reason why i like aurelia. with aurelia i don't even have to mark properties as observable. it figures that out on its own based on the bindings i make in the html template.
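(For readers who haven't seen aurelia: roughly, the view-model is a plain class and the template's bindings are what tell the framework which properties to observe. A from-memory sketch of aurelia 1 conventions, not a verified example:)

    // app.js - plain class, nothing marked observable by hand
    export class App {
      query = ''
    }

    <!-- app.html - the value.bind and ${query} bindings are what cause
         aurelia to set up observation on `query` -->
    <template>
      <input value.bind="query">
      <p>you typed: ${query}</p>
    </template>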
though mobx may be interesting when i work with other frameworks that don't do that. i'll have to keep that in mind. (edit: it looks like it may come in handy when state management becomes complex: https://stackoverflow.com/questions/39454579/best-practice-u... )
> keeping things as simple as possible barring some slight effort towards ergonomics, plus knowing that I understand every part of what's going on
yeah, i like that too.
> I just thought you might find my setup vaguely interesting
i do indeed. thanks a lot for that. my own setup is actually also quite simplistic, but not deliberately so. it's mostly laziness. i simply don't want to be bothered to put a lot of effort into a better dev setup. i'd rather work on actual code. so i start with plain vim without any addons, and only slowly change stuff when i run into a problem that really bothers me. solaris, AIX, irix is where i started.
When I start with "plain" vim the first thing I do is drop in https://trout.me.uk/X11/vimrc to nerf it back to as close to ex-vi as I can get - because my muscle memory comes from classic vis (between the mostly-BSDi job and the mostly-Solaris job I did devbox sysadmin for a software house, so I got to handle Solaris, AIX, IRIX, Tru64, SCO OpenSewer, and a couple others all at the same time ;). My experience of ex-vi versus vim is "when I typo in ex-vi it beeps at me, when I typo in vim it activates yet another feature I didn't know existed and breaks my concentration."
I took a quick look at aurelia after you mentioned it and am clearly going to have to take a deeper look at some point, I'm curious how it's handling all that under the hood, and for cases it handles well it does indeed look really rather nice (though being me I'll need to take it apart before considering using it, automagic reactivity is really cool but only when I can reliably dry-run its path through the framework in my head as I'm writing the code that (ab)uses it).
(so, cheers for mentioning aurelia, all the best conversations involve both/all people involved coming away with extra things/ideas to poke at :)
Anyway. To mobx:
I think the best way to understand mobx before you've actually used it is roughly "it not only provides the simple reactivity stuff; when things start getting more complicated you'll find that the more powerful tools you wanted are already there, implemented, and will show you what they're doing in the devtools out of the box."
As an example, computed() is very handy (and the one mobx feature other than just "reacting to changes" that most frameworks' reactivity implementations *do* copy, although I have vague memories of them not always copying it as thoroughly as I'd like) - so this is a bit of a "too simple" example but we're in HN comments, so:
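(The example itself is missing here; below is a sketch consistent with the field names referenced in the next paragraph, assuming makeAutoObservable, which marks plain fields observable and getters computed.)

    import { makeAutoObservable } from 'mobx'

    class SomeData {
      sortBy = 'id'
      rawData = []

      constructor() {
        makeAutoObservable(this)
      }

      // computed: cached until rawData or sortBy changes
      get data() {
        return this.rawData.slice().sort((a, b) => a[this.sortBy] - b[this.sortBy])
      }
    }

    const someData = new SomeData()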
(sort function designed for numerics only, dumbass example is dumbass) and then, assuming your display component is tracking its dependencies somehow, a change to the `someData.sortBy` field will automatically expire the cached `data` element and notify the component so it can re-render, at which point `data` gets recalculated and re-cached (and all this will show up as events in the devtools if you've stuck them into your page somewhere).
Since aurelia is basically taking apart your bindings to figure out what to observe, I think it wouldn't be able to track that, and you'd instead have to make sure that when `sortBy` gets changed it fires a trigger to recalculate the `data` field (and then aurelia *would* notice that getting set to a new value).
(aurelia may be smarter than I think, but the only way I can think of for it to *be* the necessary amount of smarter would basically be to implement a subset of mobx's makeAutoObservable and I don't think it does that and I don't think given aurelia's (clearly consciously chosen) aesthetic it *should* do that)
I'm tending towards having viewstate kinda objects as a layer and then model objects behind them, so `rawData` would delegate to the model objects in the above example - then the code that modifies the data I'd be persisting to a backend doesn't have to think/know about how it's going to be displayed, but if you modify it - `addTodo` or an `editTodo` or similar - the change notifications will propagate outwards, expiring things in the viewstate, causing things in the view to notice and re-render themselves as required.
This is especially noticeable to me when e.g. I'm writing a data viewer onto a table of log entries (or something in that vague area, *handwaves vigorously*) and I want to have a 'Refresh' button - that triggers something in the model layer that does a `fetch()` call or whatever, and then sets the new model data to whatever the backend sends back ... and then everything re-renders, keeping all my display choices intact because the viewstate layer didn't get touched.
My experience of learning mobx included a number of "oh for crying out loud, I've implemented half of this feature the hard way in three projects before now" moments - and *that* was what made me fall in love with it.
You might also want to bookmark (and then mostly ignore for the moment) mobx-keystone, which is Even More Complicated under the hood, but provides fairly simple syntax for declaring a tree of reactive state classes that provides (typed if you're using typescript) constructors for you, tree snapshots if you want to be able to save your session state to the backend to resume later, and mutation event logging as JSON-able objects that also includes reverse versions so you can get full undo with a fairly minimal amount of effort (I never, ever want to write undo functionality by hand again, that always required a significant amount of bourbon). Not necessarily to actually *use*, mind you, but seeing what keystone is capable of was useful to my understanding of what mobx itself was capable of.
I ... there's definitely a "would rather be writing code than yak shaving my setup" aspect to my choices, but it's definitely also a question of having as few layers as possible between me and What Actually Happens so things reliably behave how I expect them to behave. Debugging is much more fun for me when I can be confident I'm debugging *my* code rather than there being a decent chance I'm actually debugging how I've misunderstood something else two layers down.
(also, if you're not doing anything particularly strenuous, I'd suggest having a look at bun as an alternative to node, all my 'real world' and/or work stuff is node and I'm not particularly offended by it, but bun is definitely a smoother and more DWIM experience for me, at least so far)
i get you on vi. i was there (3000 years ago ;-), although only at the tail end of it in the early 90s. i switched from emacs to vi because of my sysadmin job.
i am really curious what you make of aurelia once you take it apart. please ping me if it is not too much trouble. i discovered aurelia when angularjs 1 was being redeveloped into angular 2. rob eisenberg was invited to the angular team based on his work on durandal but when his ideas were not accepted he left again and built aurelia instead. aurelia is not perfect either, but it was way better than angular 2 or vue at the time.
on your mobx example, you are right, aurelia doesn't track everything. it may be able to handle some level of complexity but i certainly had situations where i had to explicitly send a signal to get aurelia to update values.
> node-gyp
We're in ... let's call it a transitional period at work. I've got something like a dozen versions of node being managed by asdf. And in half of the projects I work on regularly, I consistently get warnings about this particular project failing to build.
One day, I'll actually look up what it actually is, and what it does, and why it's being built, but is apparently optional.
It's basically a set of tools that makes building native modules easier; said modules then use it to handle their bindings to C/C++/etc. code.
Everybody complains about it, and understandably so, but if it didn't exist you'd probably instead have one set of potential similar problems *per* native module which has a good chance of not actually being better overall.
The counterargument is, I guess, "well, only people who can write their own high quality build setup in-tree should be writing things that bind to external code," and I do sometimes dream of that, but it's not hard to see the downsides of living in *that* world instead either.
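(Concretely, a native module ships a binding.gyp describing how to compile its C/C++ sources, and npm invokes node-gyp to build it at install time. A generic sketch, not taken from any particular package:)

    # binding.gyp - the per-module build recipe node-gyp consumes
    {
      "targets": [{
        "target_name": "addon",
        "sources": ["src/addon.cc"]
      }]
    }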
Anybody have experience with pulling/running a docker image after many years?
I'd like to see the consensus on the most durable software packaging method.
2020 is not "old".
"Not Invented Here" is whats going on. Developers of this age need to learn this.
A recent example is the RPI Foundation nullifying thousands of internet tutorials by renaming "/boot" to "/bootfs". Ask yourself a serious question: did that actually improve anything? No, it did not.
Dealing with node-gyp cost me at least 5 hours a month in the 2010s. I'm so very happy to not see those errors in my console anymore.
Native code in an npm module should be regarded as a massive red flag.
Cool that you managed to get it running after just 2 hours. The same thing applies to Python projects: a little note in the README saves so much time in the future. I always try to use virtual environments and specify a specific Python version so I can just nuke and reinstall everything.
OP was trying to install an old dep tree of gatsby on a different node target. These kinds of massive libraries break all the time. Look how big its dependency tree is: https://npmgraph.js.org/?q=gatsby
Fortunately this mindset has been changing in the node ecosystem with projects like https://hono.dev/ (koa/express successor) and https://github.com/porsager/postgres having zero deps.
Acknowledging that this is absolutely awful, and also noting that a project-level .nvmrc file is your friend!
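(In case it helps anyone: .nvmrc is a one-line file at the project root; the version below is just an example.)

    # the whole of .nvmrc is one line, e.g.:
    18.19.0

    # coming back to the project later:
    nvm install    # installs whatever version .nvmrc names
    nvm use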
...until the node version you locked can't be downloaded anymore, or hasn't ever existed for your CPU arch.
I see you too had to run node v14(? my memory fails me somewhat) on Apple Silicon hardware...
FWIW, I run Node 12 painlessly on Apple Silicon using fnm, so you might be thinking of a few versions before that.
shivers
I had to build some project that uses some Ruby package manager. I forgot already what the package manager is called. I got some error about "you don't have all the dev tools". So I installed what Google told me "dev tools" was. Then it still told me that I needed more dev tools. Stackoverflow had some question about this package manager. For Windows (Linux here). 20+ answers, mostly for Mac. All in the style of "this random thing worked for me". All with at least one upvote. Some answer about "I needed to symlink this system library".
Gave up.
Then I ran `devbox init` and installed whatever it told me was needed. `devbox shell`.
At first I thought it would be a decade old project, but 4 years isn't old by any standards is it?
Anyways, npm ci should have been the first attempt, not npm install, so that it installs the exact package versions recorded in package-lock.json. Then, as others have mentioned, pin your node versions. If you're afraid of this happening again, npm pack is your friend.
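(A sketch of what that looks like in practice; the version numbers are illustrative.)

    # install exactly what package-lock.json records, instead of re-resolving
    npm ci

    # in package.json, declare the runtime you tested against:
    #   "engines": { "node": "18.x" }
    # and in .npmrc, make a mismatch fatal rather than a warning:
    #   engine-strict=true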
In the end, op could have done a bit more. BUT I'll give it to him that when bindings are involved, these things take more time than they should
Node/JS seems particularly fragile in this regard thanks to the complicated maze of dependencies and sub-dependencies, and flavour-of-the-month framework syndrome.
I think this was the biggest mistake that Java made. Breaking backward compatibility after Java 8 means that thousands of organizations will never leave that version. There is an entire industry based around maintaining Java 8. Eventually there will be two versions of Java, just like Python 2 and 3. There will be Java 8 and Java 698.
This goes for both node and python: Avoid native extensions. For python this is less feasible due to its inherently poor performance, so limit yourself to the crucial ones like numpy. For node, there are few good reasons why you would need a native extension. Unless you have your node version pinned, it will try to find the binary for your node version, fail, then attempt to build it on your system, which will most likely fail as well.
As a rule of thumb if you are jumping into a legacy node project the first thing you should try is a sequentially older node version. Could've reduced his two hours to two minutes.
My take is that one should be especially wary of packages that depend on C libraries and need compilation. You end up extremely bound to what the OS distribution has to offer. If that's the case, docker is probably the safest solution.
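(Along the lines of the following sketch; the image tag and entry point are just placeholders.)

    # Dockerfile - pin the node version *and* the OS toolchain the
    # native deps get compiled against
    FROM node:16-bullseye
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci
    COPY . .
    CMD ["node", "index.js"]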
Node.js (or more accurately, the entire Javascript ecosystem) changes, but the tropes don't.
https://medium.com/hackernoon/how-it-feels-to-learn-javascri... (beware the green background, I recommend reader mode.)
This is the reason I ripped out Gatsby from every project where I could. Every six months I’d spend an entire evening fixing obscure problems that shouldn’t even exist just to get things running. And that’s not even considering actual breaking changes of which there were plenty.
I still have to maintain many old projects that are using Node 7, but many dependencies are no longer available. Every update means manually patching the Docker image from the last runnable one.
Lately I have been revisiting some older Go tools I wrote from before they introduced Go modules.
"go mod init" + identify a working dependency version was all I had to do on any of those 10+ year old projects (5 minute work tops)
Can't help but feel that this is a massive nothing-burger. You wouldn't generally expect your Java project to run if you use an incompatible version of the JVM, nor would you generally expect your C++ project to build if you swap one compiler for a different one. Etc, always specify what your project relies on, whether it's in the readme or in the dependency tree.
> nor would you generally expect your C++ project to build if you swap one compiler for a different one.
it's absolutely the expectation for C++ projects to support building on GCC, Clang and MSVC
Maybe these days this is the case, last I used C++ about a decade ago though, this was most definitely not guaranteed to be so. This thread [0] is a good quick overview of the type of issues you used to run into, and often you didn't even need anything fancy to hit those edge cases. If life has improved in the meanwhile, that's good of course.
[0] https://stackoverflow.com/q/51866968/1470607
People love to hate on frontend/Node
You should save your deps in your SCM! Microsoft is giving away ownership of existing packages if you tell them you will use it for a TypeScript project.
What? Does Microsoft control npm?
Yes. npm is a subsidiary of GitHub, a subsidiary of Microsoft
We use package-lock.json and a Docker image with a local folder bind mount to run legacy Node projects. Eg. docker run -v local:inner node:12 command
The tragedy of running an̶ ̶o̶l̶d̶ Node project.
I had to chuckle after I read your username. Kudos.
On the one hand, it's not that terrible and most *of* the terrible is from people making silly choices.
On the other hand, there's a reason I regularly get annoyed enough at it to call it nope.js.
On the gripping hand, I mostly write perl, which argues for a different but unique set of masochistic tendencies on my part.
(you just have to remember that what 'perl' *really* stands for is 'Perennially Eclectic Rubbish Lister' and then you will have appropriate expectations and can settle back and have fun ;)
For sure. This is the number one reason I am switching as many projects as I can to HTMX.
https://dubroy.com/blog/cold-blooded-software/
Sibling comments say in so many words, it's no big deal bro, just update. But it is a big deal over time if you have dozens of cold-blooded projects to deal with.
Gatsby and htmx have approximately zero overlap
The point is NPM. HTMX helps you avoid NPM.
OP is tired after 2 hours of work.
OP is tired after two hours of stupid bullshit that should not ever occur. Stuff like that's enough to make a man quit a job, or indeed a career.
Running a new Node project is nearly as problematic… The ecosystem is broken
This could equally be written about old Android projects.
FWIW, I've mostly maintained long term PHP projects, and I've had nearly unaltered codebases running for ~25 years since php3. No frameworks, just core PHP. People dump on PHP, but it's a very good tool if you're focused on maintainable output and pick the right functional APIs to cede to mature unix tools. You can expect decades of solid, maintainable output.
Experienced programmers will not pick up a "built on shifting sand" stack, because they can acutely perceive the pain and suffering before it happens, generally from past experience. With fast-crumbling stacks, you need to execute quickly and move on, and treat the whole codebase as an expiring entity. Stacks I personally try to avoid: anything node/javascripty, anything Androidy, anything iDevicey.
Those who don't understand Unix are condemned to reinvent it, poorly. - Henry Spencer
... via https://github.com/globalcitizen/taoup
How is that pretty hard?
You spent only two hours on this and you think it’s too much?
Also, do not run shit on a node version that is years out of date and out of service. Also, update your damn packages. I know I sound cranky, but running anything internet facing with god knows how many vulnerabilities in is an exceedingly bad idea.
Having CI would have avoided this problem.
What you mean here is actually deps pinning + vendoring. It's way out of scope of basic CI concept.
CI doesn't help when all the dependencies aren't locked, or where some deps disappear, or ...
Not for running the project locally
I think you're missing the point.
CI solves it because it proves that it can build in the pipeline, using a well defined environment.
No guessing at which node version you need or any other dependencies that may be required.
Have you tried DevContainers before?
yeah? now try running a 4-year-old React project, it's hell on earth.
next.js user? :D
DHH has said this experience is a big reason Rails is pursuing a no-build approach.
can you elaborate or link to an article that explains what this means?