This is great, especially being System 3, given the nice user experience Oberon eventually morphed into.
With the Gadgets framework, System 3 was already starting to feel like a proper mainstream OS, instead of the plain black-and-white, framework-less experience of the initial Project Oberon, even though that was already a technological achievement in its own right, with a memory-safe systems language.
I prefer the path taken by Active Oberon; however, that doesn't seem to get much love nowadays either, and it is much more complex to explore than System 3.
For those who don't know it, it already had something like OLE (inspired by how Xerox PARC did it with Cedar), an AOT/JIT compilation system (with slim binaries for portability), and everything in a memory-safe systems language.
Thanks. Preparations for the migration of AOS/Bluebottle are underway. Concerning memory safety: it's only memory safe if you don't use the SYSTEM module features, and Oberon System 3 (and later) heavily depends on those features.
Oh, this is something I'm going to have to try. Excellent work!
I have to ask, since people who'd know will probably be here: what's the "ten thousand foot view" of Oberon today? I'm aware of the lineage from Pascal/Modula, and that it was a full OS written entirely in Oberon, sort of akin to a Smalltalk or Lisp machine image. What confuses me is that the later work on Oberon seems to be something of a cross between a managed runtime like Java or .NET and the Inferno OS, in that it can run both hosted and "natively". Whenever I've skimmed the Wikipedia or web pages I've been a bit confused.
Thanks. In contrast to Smalltalk or Lisp, Oberon is originally a native language, and the Oberon System originally was conceived as the native operating system of the Ceres computer used for teaching in the nineties at ETH Zurich. So there is no image as in Lisp or Smalltalk. Oberon lives on today in the form of various dialects and derivatives (such as my Oberon+ or Micron languages, see https://github.com/rochus-keller/oberon and https://github.com/rochus-keller/micron). There are indeed Oberon implementations which run on Java or ECMA 335 runtimes, which is possible due to the very restricted pointer handling and memory management of Oberon.
Smalltalk too was originally a full OS running on bare metal back in the Xerox Alto days (1972-ish).
The "OS" (or rather "kernel") was actually the VM which was implemented in microcode and BCPL. The Smalltalk code within the image was completely abstracted away from the physical machine. In today's terms it was rather the "userland", not a full OS.
It's refreshing to see Oberon getting some love on the Pi. There’s a certain 'engineering elegance' in the Wirthian school of thought that we’ve largely lost in modern systems.
While working on a C++ vector engine optimized for 5M+ documents in very tight RAM (240MB), I often find myself looking back at how Oberon handled resource management. In an era where a 'hello world' app can pull in 100MB of dependencies, the idea of a full OS that is both human-readable and fits into a few megabytes is more relevant than ever.
Rochus, since you’ve worked on the IDE and the kernel: do you think the strictness of Oberon’s type system and its lean philosophy still offers a performance advantage for modern high-density data tasks, or is it primarily an educational 'ideal' at this point?
I don't know. Unfortunately we don't have an Oberon compiler doing similar optimizations to e.g. GCC, so we can only speculate. I did measurements some time ago comparing a typical Oberon compiler on x86 with GCC, and the performance was roughly equivalent to that of GCC without optimizations (see https://github.com/rochus-keller/Are-we-fast-yet/tree/main/O...). The C++ type system is also pretty strict, and on the other hand it's possible, and even unavoidable, in Oberon System 3 to do pointer arithmetic and other things common in C behind the compiler's back (via the SYSTEM module features, which are not even type safe). So the original Oberon syntax and semantics are likely not at the sweet spot of systems programming. With my Micron (i.e. Micro Oberon, see https://github.com/rochus-keller/micron/) language, currently in development, I try on the one hand to get closer to C in terms of features and performance, but with stricter type safety; on the other hand it also supports high-level applications, e.g. with a garbage collector. The availability of features is controlled via language levels which are selected at module level. This design can be regarded as a consequence of many years of studying and working with Wirth languages and the Oberon system.
There were a couple of PhD theses at ETH Zurich in the 90s on optimizations for Oberon, as well as SSA support. I haven't looked at your language yet, but depending on how advanced your compiler is, and how similar to Oberon, they might be worth looking up.
I'm only aware of Brandis's thesis, which did optimizations on a subset of Oberon for the PPC architecture. There was also a JIT compiler, but not a particularly optimizing one. OP2 was the prevalent compiler and continued to be extended and used for AOS, and it wasn't optimizing. To really assess whether a given language can achieve higher performance than other languages due to its special design features, we should actually implement it on the same optimizing infrastructure as the other languages (e.g. LLVM), so that both implementations have the same chance to get the maximum possible benefit. Otherwise there are always alternative explanations for performance differences.
It might have been Brandis' thesis I was primarily thinking about. Of the PhD theses at ETH Zurich on Oberon, I'm also a big fan of Michael Franz' thesis on Semantic Dictionary Encoding, but that only touched on optimization potential as a side note. I'm certain there was at least one other paper on optimization, but it might not have been a PhD thesis...
I get the motivation for wanting to use LLVM, but personally I don't like it (and have the luxury of ignoring it since I only do compilers as a hobby...) and prefer to aim for self-hosting whenever I work on a language. But LLVM is of course a perfectly fine choice if your goal doesn't include self-hosting - you get a lot for free.
I don’t like LLVM either, because its size and complexity are simply spiraling out of control, and especially because I consider the IR to be a total design failure. If I use LLVM at all, it would be version 4.0.1 or 3.4 at most. But it is the standard, especially if you want to run tests related to the question the fellow asked above. The alternative would be to build a frontend for GCC, but that is no less complex or time-consuming (and ultimately, you’re still dependent on binutils). However, C on LLVM or GCC should probably be considered the “upper bound” when it comes to how well a program can be optimized, and thus the benchmark for any performance measurement.
> However, C on LLVM or GCC should probably be considered the “upper bound” when it comes to how well a program can be optimized, and thus the benchmark for any performance measurement.
Is it? Isn't it rather the case that C is too low level to express intent and (hence) offer room to optimize? I would expect that a language in which, e.g. matrix multiplication can be natively expressed, could be compiled to more efficient code for such.
I would rather expect that, for compilers which don't optimize well, C is the easiest language to produce fairly efficient code for (well, perhaps BCPL would be even easier, but nobody wants to use that these days).
> I would expect that a language in which, e.g. matrix multiplication can be natively expressed, could be compiled to more efficient code for such.
That's exactly the question we would hope to answer with such an experiment. Given that your language received sufficient investment to implement an optimal LLVM adaptation (as C did), we would then expect your language to be significantly faster on a benchmark heavily dependent on matrix multiplication. If not, this would mean that the optimizer does equally well with any language, and that specific language design features have little impact on performance (so we can use them without performance worries).
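For concreteness: in a language without native matrix operations (Oberon here, but C looks much the same) matrix multiplication is spelled as three nested loops, and an optimizer has to rediscover the algebraic intent from that shape. An illustrative sketch, not from the thread:

```oberon
(* square matrix product c := a * b; the compiler sees only loops,
   not the algebraic operation, so vectorization/tiling must be inferred *)
PROCEDURE MatMul(VAR a, b, c: ARRAY OF ARRAY OF REAL; n: INTEGER);
  VAR i, j, k: INTEGER; s: REAL;
BEGIN
  FOR i := 0 TO n - 1 DO
    FOR j := 0 TO n - 1 DO
      s := 0.0;
      FOR k := 0 TO n - 1 DO s := s + a[i, k] * b[k, j] END;
      c[i, j] := s
    END
  END
END MatMul;
```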
When you call LLVM IR a design failure, do you mean its semantic model (e.g., memory/UB) or its role as a cross-language contract? Is there a specific IR property that prevents a clean mapping from Oberon?
Several historical design choices within the IR itself have created immense complexity, leading to unsound optimizations and severe compile-time bloat. It's not high-level enough that you e.g. don't have to care about ABI details, and it's not low-level enough to actually take care of those ABI details in a decent way. And it's a continuously moving target: you cannot implement something which then continues to work.
Is anyone attempting to implement Oberon on LLVM IR? Sounds like a fun project
There are at least two projects I'm aware of, but I don't think they are ready yet to make serious measurements or to make optimal use of LLVM (which is just too big and complex for most people).
You can also check out the XDS Modula-2/Oberon-2 programming system; it's an optimizing compiler: https://github.com/excelsior-oss/xds
The Oberon user interface inspired Acme on Plan 9.
Oberon is a very nice, fun and cozy system and environment for programming. I lived in it for a few months back around 2010 and it was a joy.
I often think this style of UI -- tiled text windows but with mouse and graphics interaction (similar to emacs actually) -- is what we should be using for the coding agents we're all using now.
I'd like to be able to dock panels of information, live-edit pieces of code instead of just "accept? Y/N", have side interactions, have real scroll bars and proper clipboards, even a live REPL alongside.
Instead we get Claude Code's janky "60fps TUI" full of bugs and barely interactive.
I was about five links deep before I figured out what Oberon actually was. A high-level explainer at the top of the readme would be really nice for folks who aren't already familiar with the Oberon ecosystem.
Does Oberon still require capitalized keywords? That always seemed to be emphasizing the wrong thing:
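For readers who haven't seen it, here is a minimal module in classic Oberon syntax showing the mandatory upper-case keywords (an illustrative sketch; `Out` is the usual Oakwood output module):

```oberon
MODULE Hello;  (* keywords such as MODULE, BEGIN, END must be upper case *)
  IMPORT Out;

  PROCEDURE Greet*;  (* the trailing asterisk marks an exported procedure *)
  BEGIN
    Out.String("Hello, Oberon"); Out.Ln
  END Greet;

END Hello.
```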
Yes, the original Oberon (which the system is based on) has upper-case keywords (and some other orthodoxies). If you are looking for something more modern, go to https://github.com/rochus-keller/oberon, https://github.com/rochus-keller/luon or https://github.com/rochus-keller/micron.
Yes, like Modula-2 as well.
However, people always forget that we don't program in Notepad, but rather in programmers' editors that can automatically capitalise keywords.
It is a non-problem, like discussions of parentheses or whitespace in programming languages that require them.
This is great! I remember running System 3 on a 386 back when MS-DOS was king.
Thanks. There is actually also an i386 version of the system in the repository, where I modified the kernel so it runs with Multiboot, making installation much easier. An essential achievement for both platforms was the set of stand-alone tools, i.e. I can compile and link the whole Oberon system on Linux or any other platform (see https://github.com/rochus-keller/op2/). I even implemented an IDE which I used for the development (see https://github.com/rochus-keller/activeoberon/).
Cool. Is macOS (Apple Silicon) also supported?
If not, well there's another reason to have a Linux VM ready :)
Technically yes, but since Apple locked down their OS completely, you might have to compile the tools yourself on your machine so the OS allows them to start at all.
Have always been fond of Oberon! I would love to have A2/ActiveOberon/BlueBottle or whatever the name of the day is on a small native machine as well.
Great Stuff!
Thanks. The A2 Fox compiler actually has an ARM backend, so I would be surprised if nobody has migrated it to the Raspi yet. The 2003 version of AOS/Bluebottle (not A2) is on my list of interesting systems, particularly because it supports multicore hardware.
I'm going to try and give it a go on a Zero 2 I have lying around. Thanks, this is exactly what I come to Hacker News for.
Cool, tell me whether it worked. Unfortunately my mini-HDMI adapter is broken and I have to wait for the new one to arrive. But I already soldered headers to the UART pins and watched the system start, which looked as it should.
This is lovely. And I bet it is very fast on that hardware, all things considered.
The system comes up extremely fast (compared to Linux), but then it takes pretty long to find the USB hub and the keyboard/mouse. Maybe I can still speed this up.
Thank you, I've never heard of the Oberon OS before.
Oberon is both a programming language and an operating system, used mostly for teaching, much like e.g. xv6 or Xinu. Similar to the latter, Wirth wrote textbooks about the system, some of which can be downloaded for free (see https://projectoberon.net/ for the PDF links).
So good to see Oberon this accessible! Mad props!
I still hope to see the world where Oberon is the future (and present) of OS and programming language design, and I know very little about it.
Thanks to your work, that's about to change.
Thank you times a thousand <3
> I still hope to see the world where Oberon is the future (and present) of OS and programming language design
I see you're into horror stories.
Oberon is absolutely a horrible language. It's an example of how you can screw up a good language by insisting on things that were important in the 1960s.
Like not allowing multiple returns (not multiple return _values_, but multiple return statements).
There's an argument (and I think a good one) that in structured programming there should be only one return per function. It's not that hard -- you just have a variable and you set it to what you want to return and the last line of the function returns that variable. I think that some things Wirth did with Oberon, particularly in the post Oberon-OS versions like Oberon-07, are a bit restrictive, but they are always in the service of making code easier to read, even if it makes it slightly harder to write.
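Oberon-07 in fact enforces this shape in the grammar: RETURN may appear only as the last clause of a procedure body. A minimal sketch of the result-variable style (illustrative, not from the thread):

```oberon
(* single exit point: accumulate the result, return once at the end *)
PROCEDURE Clamp(x, lo, hi: INTEGER): INTEGER;
  VAR res: INTEGER;
BEGIN
  res := x;
  IF x < lo THEN res := lo
  ELSIF x > hi THEN res := hi
  END;
  RETURN res
END Clamp;
```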
The problem is that pure structured programming just sucks. It doesn't have good answers for cleanup or error handling.
Structured programming was the answer to the earlier mess with unstructured gotos, but in the process of trying to improve it, structured programming became just as messy when taken dogmatically.
In real life, what matters is the mental load. Every ambient condition that you need to track adds mental load. Early returns/breaks/continues reduce it while in a "structured program" you have to keep track of them until the end of the function.
> It's not that hard -- you just have a variable and you set it to what you want to return and the last line of the function returns that variable.
And you also have a flag, "skip to return", to skip all the remaining conditions. Or you end up mutating arguments of the function. I know; I suffered through programming in Standard Pascal.
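The flag pattern described above looks roughly like this in practice (an illustrative sketch in Oberon; in Standard Pascal the shape is the same):

```oberon
(* linear search without an early return: a "found" flag has to ride
   along in the loop condition and be consulted until the single exit *)
PROCEDURE Find(VAR a: ARRAY OF INTEGER; x: INTEGER): INTEGER;
  VAR i, res: INTEGER; found: BOOLEAN;
BEGIN
  res := -1; found := FALSE; i := 0;
  WHILE (i < LEN(a)) & ~found DO
    IF a[i] = x THEN res := i; found := TRUE END;
    INC(i)
  END;
  RETURN res
END Find;
```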
It all boils down to the fact that ideas should be viewed as tools rather than dogmas, and famous people are neither infallible nor prophets simply because they had a few good ideas.
> Every ambient condition that you need to track adds mental load
Thus it's wise to limit the complexity of your code. If it starts getting difficult, it might be time to break it down in smaller, more understandable, pieces.
Apparently a school of thought widely embraced by Go scholars, nowadays responsible for our cloud infrastructure.
Show me significant concepts implemented in today's languages which cannot directly be traced back to "things that were important in 1960-s" or seventies ;-)
"Traced back" is fine. We can trace back the size of the Shuttle's boosters to the width of the roads in the Roman Empire.
Insisting that the problems of 1960 are the only thing that matters, and MUST be solved dogmatically is not.
Well, a lot of ideas (and I mean really a lot) from the sixties are still very relevant today, and indeed, there are also problems discovered in the sixties still waiting for a solution. We don't have to live in the past, but many "new" things aren't actually new, or are not better just because they are new.
Of course. The problems that existed in the '60s were very real. And structured programming was an improvement over messy gotos.
At the same time, software from the 1960s did not have to deal with a lot of error conditions. When all you have is infallible computation code, you tend to overlook handling cleanups and exceptions. It was also single-threaded, so there was no focus on locking/mutability.
And it turns out that dealing with both of these requires stepping away from pure structured programming with one nice happy path and a single return.