I compile a lot of C++ code from a lot of places, and the only time I run into code that somehow simply doesn't work on newer versions of C++ and where the developers aren't even sure if they will accept any patches to fix the issue as they claim it "isn't supported" to use a newer version of C++--even for the public headers of a library--is, you guessed it: code from Google.
Meanwhile, most of the C++ code from Google seems to be written in some mishmash of different ideas, always at some halfway point along a migration between something ancient and something passable... but never anything I would ever dare to call "modern", and thereby tends to be riddled with state machines and manual weak pointers that lead to memory corruption.
So... I really am not sure I buy the entire premise of this article? Honestly, I am extremely glad that Google is finally leaving the ecosystem, as I generally do not enjoy it when Google engineers try to force their ridiculous use cases down peoples' throats, as they seem to believe they simply know better than everyone else how to develop software.
Like... I honestly feel bad for the Rust people, as I do not think the increasing attention they are going to get from Google is going to be at all positive for that ecosystem, any more than I think the massive pressure Google has exerted on the web has been positive or any more than the pressure Google even exerted on Python was positive (not that Python caved to much of it, but the pressure was on and the fact that Python refused to play ball with Google was in no small part what caused Go to exist at all).
(FWIW, I do miss Microsoft's being in the space, but they honestly left years ago -- Herb's presence until recently being kind of a token consideration -- as they have been trying to figure out a tactical exit from C++ ever since Visual J++ and, arguably, Visual Basic, having largely managed to pivot to C# and TypeScript for SDKs long ago. That said... Sun kicking Microsoft out of Java might have been really smart, despite the ramifications?)
The issue with Microsoft, until recently, has been the power of WinDev, the part of the company responsible for anything C++ in the Microsoft dungeons.
Hence the failure of Longhorn, or any attempt coming out from Microsoft Research.
Ironically, given your Sun remark, Microsoft is back in the Java game, with their own distribution of OpenJDK, and Java is usually the only ecosystem that has day-one parity with the .NET SDKs for anything Azure puts out.
What is "WinDev"? A quick search didn't turn up much except a French Wikipedia article.
Windows Development, as opposed to DevDiv, the Developer Division.
Two quite common names in the Microsoft ecosystem.
For a moment there I thought you were referring to this trademark: https://pcsoft.fr/windev/index.html, which was at one time known for having young women in light clothing in its marketing material.
> Honestly, I am extremely glad that Google is finally leaving the ecosystem, as I generally do not enjoy it when Google engineers try to force their ridiculous use cases down peoples' throats, as they seem to believe they simply know better than everyone else how to develop software.
Well, you may be celebrating a bit prematurely then. Google still has a ton of C++ and they haven't stopped writing it. It's going to take ~forever until Google has left the C++ ecosystem. What did happen was that Google majorly scaled down their efforts in the committee.
When it comes to the current schism on how to improve the safety of C++ there are largely two factions:
* The Bjarne/Herb [1] side that focuses on minimal changes to the code. The idea here is to add different profiles to the language and then [draw the rest of the fucking owl]. The big issue here is that it's entirely unclear how they will achieve temporal and spatial memory safety.
* The other side is represented by Sean Baxter and his work on Safe C++ [2]. This is basically a wholesale adoption of Rust's semantics. The big issue here is that it's effectively introducing a new language that isn't C++.
Google decided to pursue Carbon and isn't a major player in either of the above efforts. Last time I checked, that language is not meant to be memory safe.
[1] https://github.com/BjarneStroustrup/profiles [2] https://safecpp.org/draft.html
Why is this bad? Normally, state machines are easy to reason about.
The set of developers who say "I want to implement this logic as a state machine" is MUCH larger than the set of developers who say "I should make sure I fully understand every possible state and edge case ahead of time before making a state machine!"
Couldn't this be said about any alternative solution? I fail to see how this is specific to state machines.
What do you suggest instead of a state machine?
Like properly model a domain in domain terms?
Please describe "normally". State machines can turn into nightmares, just like any design pattern used poorly.
State machines don't have syntax for "transition here when event is encountered no matter what state you are in" so the whole diagram becomes a spaghetti mess if you have a lot of those escape hatches.
State machines don't have a native syntax in C++ at all, so you can structure them however you want. It's easy to structure a state machine, if needed, so that all (or some) states can handle the same event in the same way.
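For illustration, here is a minimal sketch of that structure (all names invented, not from any particular codebase): a single dispatch function handles the "from any state" event once, before the per-state logic.

    enum class State { Idle, Connecting, Connected };
    enum class Event { Connect, Connected, Disconnect, Reset };

    class Machine {
    public:
        void dispatch(Event e) {
            // The "no matter what state you are in" case, handled exactly once.
            if (e == Event::Reset) { state_ = State::Idle; return; }
            switch (state_) {
                case State::Idle:       if (e == Event::Connect)    state_ = State::Connecting; break;
                case State::Connecting: if (e == Event::Connected)  state_ = State::Connected;  break;
                case State::Connected:  if (e == Event::Disconnect) state_ = State::Idle;       break;
            }
        }
        State state() const { return state_; }
    private:
        State state_ = State::Idle;
    };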
I believe HSMs can model this, but don't quote me. :)
Yes, of course in theory nested state machines should be able to model this. I feel like adding more complexity and bending the rules is a bit of a concession.
Back in the day we implemented HSM helper classes in about 500 LoC and generated them from Enterprise Architect. No need to write a GUI yourself, but it is better to have a visual for documentation and review. It worked very well until we replaced EA with docs-as-code; now I miss having a nice simulator and modeler for that workflow.
They can be. Or they can be... less easy.
Imagine you have an informally-specified, undocumented, at-least-somewhat-incomplete state machine. Imagine that it interacts with several other similar state machines. Still easy to reason about?
Now add multithreading. Still easy?
Now add locking. Still easy?
Cleanly-done state machines can be the cleanest way to describe a problem, and the simplest way to implement it. But badly-done state machines can be a total mess.
Alas, I think that the last time I waded in such waters, what I left behind was pretty much on the "mess" side of the scale. It worked, it worked mostly solidly, and it did so for more than a decade. But it was still rather messy.
> Imagine you have an informally-specified, undocumented, at-least-somewhat-incomplete state machine. Imagine that it interacts with several other similar state machines. Still easy to reason about?
You think that developers that wrote an informally-specified, undocumented, at-least-somewhat-incomplete state-machine would have written that logic as a non-state-machine in a formally-specified, documented and at-least-somewhat-complete codebase?
State-machines are exceptionally easy to reason about because you can at least reverse-engineer a state-diagram from the state-machine code.
Almost-a-state-machine-but-not-quite are exceptionally difficult to reason about because you can not easily reverse-engineer the state-diagram from the state-machine code.
But that is just true of any problem-solving/programming technique.
In general, state/event-machine transition tables and decision tables are easier-to-comprehend ways of structuring code than ad hoc approaches, and far easier than poorly understood pattern-based techniques.
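As a rough sketch of the transition-table style (invented example, not from any real codebase), the whole behaviour is readable row by row:

    #include <array>

    enum class State { Idle, Running, Done };
    enum class Event { Start, Finish, Abort };

    struct Transition { State from; Event on; State to; };

    // The entire control flow is visible in one place.
    constexpr std::array<Transition, 4> kTable{{
        {State::Idle,    Event::Start,  State::Running},
        {State::Running, Event::Finish, State::Done},
        {State::Running, Event::Abort,  State::Idle},
        {State::Done,    Event::Start,  State::Running},
    }};

    constexpr State step(State s, Event e) {
        for (const auto& t : kTable) {
            if (t.from == s && t.on == e) return t.to;
        }
        return s;  // no matching row: stay in the current state
    }

    static_assert(step(State::Idle, Event::Start) == State::Running);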
I think the article is pretty interesting. There are so many more interesting takes than just another boring Hacker News moan about Google.
Google has such proficient programmers that they had to invent a language for dummies (Go) for them not to get lost.
The farther they step from languages for professionals, the better!
Any other incompetent software developer downvoting the truth?
To the people who work on C++ standards: I approve of the current C++ trajectory and please ignore all of the online noise about "the future of C++." To anyone that disagrees severely with the C++ trajectory as stated, please just consider another language, e.g. Rust. I don't want static lifetime checking in C++ and if you want static lifetime checking, please use Rust. I am not a government contractor, if you are a government contractor who must meet bureaucratic risk-averse government requirements, please use Rust. I have an existing development process that works for me and my customers, I have no significant demand for lifetime checking. If your development process is shiny and new and necessitates lifetime checking, then please use Rust. To Rust advocates, you can have the US government and big tech. You can even have Linux. Just leave my existing C++ process alone. It works and the trade offs we have chosen efficiently accomplish our goals.
C++ has lifetime rules just like Rust. They're simply implicit in the code and not enforced by the compiler. Do you prefer the uncertainty of silent miscompilations and undefined behavior to upfront compiler errors?
You're already using a language with a strong type system, so it's confusing to me why you would choose to draw the line here.
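To make the "implicit rules" point concrete, here is a small invented example that mainstream C++ compilers accept without an error, even though it breaks the language's lifetime rules; the equivalent Rust is rejected at compile time by the borrow checker.

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        int& first = v[0];   // reference into the vector's current buffer
        v.push_back(4);      // may reallocate and invalidate `first`
        return first;        // possibly dangling reference: undefined behavior,
                             // yet the program compiles without complaint
    }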
> Do you prefer the uncertainty of silent miscompilations and undefined behavior to upfront compiler errors?
Yes because then I don't have to spend hours writing esoteric spaghetti code to prove something to the compiler that is trivially known to be true. Your error is assuming static lifetime checking is free. As an engineer, I use judgement to make context-dependent trade offs.
If you like playing the compiler olympics, or your employer forces you to, please use Rust.
I've found that often when I am writing esoteric spaghetti Rust code... I need to start thinking about what I am trying to do! Most of the time it's a bad idea :)
[deleted]
> As an engineer, I use judgement to make context-dependent trade offs.
Well said.
This is why I am firmly in the Stroustrup camp of backward compatibility/zero overhead/better-C/etc. goodness of "old C++". I need to extend/maintain/rewrite tons of such code and that needs to be as painless as possible. The current standards trajectory needs to be maintained.
The OP article is a rather poor one with no insights but mere hoopla over nothing.
Thank you for this. C++ should NOT try to be Rust. I find modern C++ really nice to program in, for the work I'm doing - 3D graphics. The combination of very powerful abstractions and excellent performance is what I'm looking for. I'm more than willing to endure the perceived lack of safety in the language.
While programming in Rust, I've never thought to myself, "man, this would be so much easier to express in C++". I've plenty of times thought the reverse while programming in C++ though.
Edit: except when interfacing with C APIs.
Then you must be avoiding situations that traditionally use OOP
Most kinds of OOP can be expressed idiomatically in Rust. The big exception is implementation inheritance, which is highly discouraged in modern code anyway due to its complex and unintuitive semantics. (Specifically, its reliance on "open recursion", and the related "fragile base class" problem)
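For readers unfamiliar with the jargon, here is a hedged C++ sketch (invented names) of the open-recursion / fragile-base-class problem being referred to:

    #include <vector>

    class Collection {
    public:
        virtual ~Collection() = default;
        virtual void add(int) { ++size_; }
        virtual void addAll(const std::vector<int>& xs) {
            for (int x : xs) add(x);   // open recursion: dispatches to the derived override
        }
        int size() const { return size_; }
    protected:
        int size_ = 0;
    };

    class CountingCollection : public Collection {
    public:
        void add(int x) override { ++added_; Collection::add(x); }
        void addAll(const std::vector<int>& xs) override {
            added_ += static_cast<int>(xs.size());  // assumes the base does its own bulk insert...
            Collection::addAll(xs);                 // ...but the base calls add() per element, so added_ double-counts
        }
        int added_ = 0;
    };

Whether CountingCollection is correct depends on an implementation detail of its base class, which is exactly the fragility being described.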
This is interesting because I'm writing quite a bit of embedded Rust, and I always run into limitations of the very barebones const generics. I always wish they had half the expressiveness of C++ constexpr and templates.
Win some, lose some though, as the overall development workflow is lightyears ahead of C++, mostly due to tooling
Rust generics are not intended as a one-to-one replacement for C++ templates. Most complex cases of template-level programming would be addressed with macros (possibly proc macros) in Rust.
The expressiveness of const generics (NTTPs) in C++ wouldn't go away if it adopted lifetime annotations and "safe" scopes. It's entirely orthogonal.
Rust decided to have more restrictive generic programming, with the benefit of early diagnostic of mistakes in generic code. C++ defers that detection to instantiation, which allows the generics to be more expressive, but it's a tradeoff. But this is an entirely different design decision to lifetime tracking.
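To illustrate what that extra expressiveness looks like on the C++ side (a toy sketch, not tied to any real embedded project): an arbitrary constexpr computation can feed a non-type template parameter.

    #include <array>
    #include <cstddef>

    constexpr std::size_t round_up_pow2(std::size_t n) {
        std::size_t p = 1;
        while (p < n) p *= 2;
        return p;
    }

    template <std::size_t N>
    struct RingBuffer {
        std::array<int, round_up_pow2(N)> storage{};  // size derived at compile time
    };

    RingBuffer<100> buf;  // storage holds 128 elements
    static_assert(sizeof(buf.storage) == 128 * sizeof(int));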
That's actually quite interesting because this is not an inherent limitation of Rust, and it is definitely planned to be improved. And AFAIK, today (as opposed to last years) it is even being actively worked on!
On the contrary, why would I not want these things in C++ if I'm developing every project with -fsanitize=address,undefined to catch these types of errors anyway?
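For anyone who hasn't used the sanitizers, here is a tiny invented example of the kind of bug they catch: the compiler accepts it silently, but AddressSanitizer aborts at runtime with a heap-buffer-overflow (or container-overflow) report.

    // Build roughly as: g++ -g -fsanitize=address,undefined buggy.cpp
    #include <cstddef>
    #include <vector>

    int main() {
        std::vector<int> v(3, 0);
        for (std::size_t i = 0; i <= v.size(); ++i) {  // off-by-one: writes one past the end
            v[i] = static_cast<int>(i);
        }
        return v[0];
    }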
Ok.
Please, just use your current C++ standard.
But we will go on using the new one, with all the features we want to use.
What I don't understand is why you demand that C++ evolution be halted in a clearly suboptimal position so you don't need to change your processes. Just use the version of C++ that meets your needs; you clearly don't want nor need new developments. You are fine with being locked into bad designs for hash maps and unique_ptr due to ABI stability (newly invented in 2011/13) being made inviolable, so you clearly need no new developments in usability and security.
So why not be honest and just use C++03, or 11, or whatever it is that works for you, and let the rest of the ecosystem actually evolve and keep the language we invested so much effort into as a viable alternative? There's zero benefit, except to MS who want to sell this year's Visual Studio to all the companies with 80's-era C++...
But why say so under a pseudonym and avoid clarifying what products the process is for? Even if it's open source and not productized, surely the interest goes beyond just the US president's executive order regarding memory safety.
I don't disagree with the implication that the order relies on oversimplification and a misunderstanding of the ask, but surely it and Rust don't mean, as a theme, that the C++ standards must not take anything from them, no?
Please don't shame people for using pseudonyms on here, regardless of whether you disagree with their concrete point. It's nice to have a place where people don't have to think about how their friends, family or colleagues will react before posting something.
> But why say so under a pseudonym
That's a rather odd complaint, coming from a pseudonym.
The parts of the government that think everything should be written in a memory-safe language (like Rust) are the same parts that already write everything in Java. Most of the high-end systems work is in C++, and that is the type of software where lifetimes and ownership are frequently unknowable at compile-time, obviating Rust's main selling point.
It's not a hard dichotomy. Almost all of the rules Rust imposes are also present in C++, enforcement is simply left up to the fallible human programmer. Frankly though, is it that big a deal whether we call it unique_ptr/shared_ptr or Box/Arc if a lifetime is truly unknowable?
Rust shines in the other 95% of code. I spend some time every morning cleaning up the sorts of issues Rust prevents that my coworkers have managed to commit despite tooling safeguards. I try for 3 a day, the list is growing, and I don't have to dig deep to find them. My coworkers aren't stupid people, they're intelligent people making simple mistakes because they aren't computers. It won't matter how often I tell them "you made X mistake on Y line, which violates Z rule" because the issue is not their knowledge, it's the inherent inability of humans to follow onerous technical rules without mistakes.
Something that Rust got _really_ right:
Editions. And not just that they exist, but that they are specified per module, and you can mix and match modules with different Editions within a bigger project. This lets a language make backwards incompatible changes, and projects can adopt the new features piecemeal.
If such a thing came to C++, there would obviously be limitations around module boundaries, when different modules used a different Edition. But perhaps this could be a way forward that could allow both camps to have their cake and eat it too.
Imagine a world where the main difference between Python 2 and 3 was the frontend syntax parser, and each module could specify which syntax ("Edition") it used...
But Editions can exist only because Rust intrinsically has the concept of a package, which naturally defines the boundary. C++ has nothing. How do you denote that a.cpp is of the cpp_2017 edition while b.cpp is cpp_2026? Some per-file comment line at the top of each file?
C++ is a mess in that it has too much historical baggage while trying to adapt to a fiercely changing landscape. Like the article says, it has to make drastic changes to keep up, but such changes will probably kill 80% of its target audience. I think putting C++ in maintenance mode and keeping it as a "legacy" language is the way to go. It is time to either switch to Rust, or pick one of its successor languages and put effort into it.
Rust doesn't have the concept of package. (Cargo does, but Cargo is a different thing from Rust, and it's entirely possible to use Rust without Cargo).
Rust has the concept of _crate_, which is very close to the concept of compilation unit in C++. You build a crate by invoking `rustc` with a particular set of arguments, just as you build a compilation unit by invoking `g++` or `clang++` with a particular set of arguments.
One of these arguments defines the edition, for Rust, just like it could for C++.
That only works for C++ code using C++20 modules (i.e. for approximately nothing).
With textual includes, you need to be able to switch back and forth the edition within a single compilation unit.
> I think putting C++ in maintenance mode and keep it as a "legacy" language is the way to go
I agree but also understand this is absolutely wishful thinking. There is so much inertia and natural resistance to change that C++ will be around for the next century barring nuclear armageddon.
I don't think even that would suffice. :)
Cobol's still around. Just because a language exists doesn't mean that we have to keep releasing updated specifications and compiler versions rather than moving all those resources to better languages.
COBOL's most recent standard was released in 2023, which rather ruins your point.
> And not just that they exist, but that they are specified per module
Nitpick: editions are specified per crate, not per module.
---
Also note that editions allow mostly syntactic changes (adding/removing syntax or changing the meaning of existing syntax); they are greatly limited in what can be changed in the standard library, because ultimately that is a crate dependency shared by all other crates.
There was a similar proposal for C++, using rust’s original names: epochs. It stalled out.
They should call them 'eras'. Then they can explain that epochs did not lead to a new era in the language, but eras will mark the next epoch of C++.
If you're comparing Herb Sutter and the Google people at the standard committee, there is one thing that was clear: Herb was good at getting people to agree on compromises that served everybody, while Google was mostly claiming they knew better than everybody else and pushing their own agenda.
One thing I cannot stand about C++ is the fractured nature of everything. Compilers, build tools, package management, etc... It feels like you need to be a wizard just to get a project compiling and start writing some code.
The worst part is when you want to bring along people who are not as much of a wizard as you are. I've been prototyping a multiplayer online video game with MMO-like sharding for a while now, mostly the backend and core stuff, and wanted to get two of my friends on the project to develop the gameplay logic, which is largely done through a dynamic scripting language. But some features (say, ones I did not foresee needing yet) require source changes to expose the APIs to the scripting language. Now, these guys are capable of making those changes, but the onboarding process for a single potential co-developer is such a pain. I basically have to explain to them how to download a compiler, a package manager like vcpkg (which wasn't even that usable for these types of things pre-versioning, and is still not working properly - i.e. trying to pin the LuaJIT version to 2.0.5 for VM bytecode compatibility will attempt to build LuaJIT with cl.exe on Linux), a build system like CMake, and so on, then guide them through all the steps to get the compiler, the build system, and the libraries working, and then hope that in the end it all actually works and doesn't force me to spend an entire day over remote desktop software trying to get them to become productive.
Include more of your dependencies in the repo and build them as part of the ordinary build process. Then a package manager does not need to get involved.
Manually copy-pasting source trees around sounds like such an outdated idea from decades ago for dependency management in a modern programming language. Not to mention that you then have to hook the dependencies up to the build system you are using, and not all of them will work out of the box with the one your project uses. Sure, if you are using CMake and your dependency uses CMake, you can add a subproject, but how do you deal with mixed-and-matched build systems, short of rewriting the builds for every dependency you pull in, or writing glue shell scripts to build them independently and put them into a directory? How do you then ensure those shell scripts work across different platforms? There are way too many issues with that approach that are solved in other languages through a standardized project management tool.
You don't have to actually copy-paste. You can use CMake and FetchContent/CPM. You can specify custom build commands, or inline-declare a project for anything small that you pull in that doesn't use CMake (you can call add_library with a glob on the folder FetchContent pulled in, for example - I've done so here https://github.com/basis-robotics/basis/blob/main/cpp/CMakeL... for a header-only lib). For large external dependencies that are either very slow to compile or for some reason aren't CMake, reach for the system package manager or similar. If you want to be really cross-platform and are supporting Qt/wxWidgets/etc, vcpkg+CMake+clang is a solid combo, if a bit slow and a bit disk-space heavy with the build cache.
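For concreteness, a minimal sketch of that FetchContent pattern (the dependency name and tag here are just placeholders, not taken from the linked project):

    include(FetchContent)

    # Pin a dependency by git tag and build it as part of the normal build.
    FetchContent_Declare(
      fmt
      GIT_REPOSITORY https://github.com/fmtlib/fmt.git
      GIT_TAG        10.2.1
    )
    FetchContent_MakeAvailable(fmt)

    add_executable(myapp main.cpp)
    target_link_libraries(myapp PRIVATE fmt::fmt)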
Have you taken a look at CPM? https://github.com/cpm-cmake/CPM.cmake It makes CMake project management easy - no need for a separate package manager tool.
And yet that's the right approach. It's not really copying but rather onboarding.
You don't want to depend on a third-party hosting the code, so you need to copy it, and pin it to a specific version. You might also need to patch it since you'll be using it and most likely will run into problems with it.
Using third-party code means taking ownership of all the problems one might encounter when trying to use it in your project, so you might as well just adapt it for it to work with your tools and processes.
If you use a modular system this is essentially just maintaining a fork.
Can't you just put that into a docker container?
This is more of a workaround than a solution; see my other comment in this thread.
you DO need to be a wizard to launch a large C++ project.
Yes, languages that are beginner friendly are ... friendlier. Yes, languages that stick to one or a small number of programming paradigms are friendlier. But if you want the "flexible efficiency and raw power of C" and "something higher level than C", C++ is your baby.
Maybe it would be better if we all used Java, Rust, and Go, but C++ sings its siren von Neumann song to the wizards, and there will always be wizard musicologists who steer their projects toward those rocks and, when they have just enough wax in their ears, they sail right past the rocks and come out the other side of the straits leading the rest of the fleet.
You can choose to follow them or not, for there's no shame in coming in 4th.
Even the wizards are moving to Rust/Zig since C++ stdlib performance is becoming terrible thanks to the ABI-frozen till heat-death of the universe decision. Even wizards don't want to build a stdlib of their own from scratch.
I feel the committee was smoking weed that day in la-la land. You can ignore all the safety stuff from Sean Baxter, but saying no to performance on the altar of permanent, unspecified ABI backward compatibility - when such was never stated as a design goal of C++ - means it's "Goodbye C++" for a long, long list of orgs and "wizards". The ABI was NEVER specified formally by the C++ standard - so why bother sacrificing the world for its immortal existence?
C++ is no longer the language of choice for greenfield native projects, and the committee takes the full blame.
Really looking forward to zig 1.0. I feel like C++ has become a language where professionals are fine with the historical grime but for hobbyist and people that need C++ occasionally there is just no motivation in the community to make this language more ergonomic.
So much for the theory; then there is the hard reality of how the standard library is implemented, the variations across implementations, and how the ongoing ABI drama is preventing any performance improvements to it.
“We must minimize the need to change existing code. For adoption in existing code, decades of experience has consistently shown that most customers with large code bases cannot and will not change even 1% of their lines of code in order to satisfy strictness rules, not even for safety reasons unless regulatory requirements compel them to do so.” – Herb Sutter
with large code bases cannot and will not change even 1% of their lines of code in order to satisfy strictness rules
Do people really say this? Voice this in committee? I have been at a few companies, including one fairly large one, and all are happy to upgrade to newer standards, look forward to it, and already spend a lot of time updating their build systems. Changing 1% of the code on top of that is probably not that much in comparison.
People just don't make mass changes to existing working code. Mostly they cannot. Even if the tooling was available, which it's not, it's also about reeducating their developers, who don't want to or can't change. And it's all cost with no benefit.
Except, allegedly, at Google. But is there any evidence they actually do this, e.g. in public code bases? Or is it just hype?
> Changing 1% of code on top of that is probably not really that much compared
Quite a few companies have millions and millions of lines of code. Changing 1% of it would mean changing more than 10K lines of code, perhaps even more than 100K. In much bigger code bases, where changing anything has a risk of breaking something — not just because you might make a mistake, but because your program is full of Undefined Behaviour, and changing anything might manifest latent bugs.
Given that, I'm not surprised people say that Sutter quote with a straight face.
Could be selection bias. Companies (or departments within companies) who are still actively developing their C++ code probably tend to hire more developers and consultants than companies who are doing minimal maintenance on their code base, and that might correlate well with the “two factions of C++” discussed here.
“Our code is an asset” ⇒ code kept up-to-date
“Our code is a burden, but we need it” ⇒ change averse
Are you referring to his book written 20 years ago or 25 years ago? "customers with large [C++] code bases" there aren't that many of these. Vendors, government. With code bases that have stewards, not programmers.
> Nimble, modern, highly capable tech corporations that understand that their code is an asset. (This isn’t strictly big tech. Any sane greenfield C++ startup will also fall into this category.)
Oh I see, this is a fantasy.
Keyword is "sane". You can probably count all "sane greenfield C++ startups" on one hand.
The Rust people pursue "solidarity" as a virtue. They don't understand that factions are a way of life, so any sufficiently impactful technology will be "fractured" to some extent. This is a good thing. Unity, solidarity, and homogeneous philosophies are not, but they will have to learn that the hard way like everybody else.
Python similarly has 2-3 factions in my experience: teams doing engineering in Python and using all the modern tooling, linting, packaging, types, testing, etc; teams doing data science and using modern but different tooling (i.e. Anaconda); and teams that don't get onboard in any of the language health initiatives and are on unsupported language versions with no packaging, tooling, linting, etc.
Javascript/Node/Typescript has even more identifiable factions.
I think developing factions around these things is unfortunately normal as languages grow up and get used in different ways. Rust has arguably tried to stay away from this, but the flip side is a higher learning curve because it just doesn't let certain factions exist. Go is probably the best attempt to prevent factions and gain wide adoption, but even then the generics crowd forced the language to adopt them.
When you put it this way, personas might be a better term than factions.
Yeah I think that's a much friendlier term. I do think language ecosystems have a hard time, because on the one hand they should be aiming to be as useful as possible, which means doing more, on the other hand they have to acknowledge that any given user will likely not use all the language and that the rest of it may hinder them, which means doing less.
C++ does a lot, but has a big disengaged crowd, for many reasons, and that crowd will suffer from the push forward. Python and Node are similar.
Profiles aren't a mess because they're intended for legacy codebases instead of big tech monorepos. They're a mess because they're not a serious effort. There's no actual vision of what problems they're trying to solve or what the use cases are, or even what kind of guarantee profiles are going to make.
Ports of massive legacy codebases are possible and they happen. They can be extremely difficult, they take will and effort, but they can get done. The idea that you have to slow down the development of the language standard for people who won't port to the new version is weird: those people won't be updating compilers anyway.
How do I know this? I migrated a codebase of about 20m lines of C++ at a major investment bank from pre-ansi compilers to ansi conformance across 3 platforms (Linux, Solaris and Windows). Not all the code ran on all 3 platforms (I'm looking at you, Solaris) but the vast majority did. Some of it was 20 years old before I touched it - we're talking pre-STL not even just pre ansi. The team was me + one other dude for Linux and Solaris and me + one other different dude for windows, and to give you an idea the target for gcc went from gcc 2.7[1] to gcc 4[2], so a pretty massive change. The build tooling was all CMake + a bunch of special custom shell we had developed to set env vars etc and a CI/CD pipeline that was all custom (and years ahead of its time). Version control was CVS. So, single central code repo and if there was a version conflict an expert (of which I was one but it gives me cold sweats) had to go in, edit the RCS files by hand and if they screwed up all version control for everyone was totally hosed until someone restored from backup and redid the fix successfully.
While we were doing the port to make things harder there was a community of 667 developers[3] actively developing features on this codebase and it had to get pushed out come hell or high water every 2 weeks. Also, this being the securities division of a major investment bank, if anything screwed up real money would be lost.
It was a lot of work, but it got done. I did all my work using vim and quickfix lists (not any fancy pants tooling) including on windows but my windows colleague used visual C++ for his work.[4]
[1] Released in 1995
[2] Released in 2005
[3] yes. The CTO once memorably described it to me as "The number of the beast plus Kirat". Referring to one particularly prolific developer who is somewhat of a legend on Wall Street.
[4] This was in the era of "debugging the error novel" so you're talking 70 pages of ascii sometimes for a single error message with a template backtrace, and of course when you're porting you're getting tens of thousands of these errors. I actually wrote FAQs (for myself as much as anything) about when you were supposed to change "class" to "typename", when you needed "typedef typename" and when you just needed "typedef" etc. So glad I don't do that any more.
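For the curious, this is roughly the kind of rule those FAQs covered (a toy reconstruction, not the original FAQ text): inside a template, dependent names that denote types need `typename`, and a member alias of a dependent type needs `typedef typename`.

    #include <vector>

    template <typename Container>   // "class" and "typename" are interchangeable here
    struct Wrapper {
        // Dependent type: without `typename` the compiler won't treat value_type as a type.
        typedef typename Container::value_type value_type;

        value_type front(const Container& c) {
            typename Container::const_iterator it = c.begin();  // plain `typename`, no typedef
            return *it;
        }
    };

    Wrapper<std::vector<int>> w;  // value_type is int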
The main problem with bad C++ tooling is often the same, it's having a modular system that relies on importing/exporting binaries, then tracking binary versions when combining applications.
You should only track source versions and build things from source as needed.
Languages should not have a package management system. They all have an "all the world is my language" blind spot and fail hard when you have anything else. Sometimes you can build plugins in a different language, but they still assume the one true language is all you want.
package management belongs to the os - or at least something else.
Don't get me wrong, package management is a real problem and needs to be solved. I'm arguing against a language package manager; we need a language-agnostic package manager.
I think C++ is living proof that not having standard tooling around a language makes it a complete pain in the ass to use. With any other language that ships standard package management/tooling out of the box, I can just pin the versions, commit a file to the repository, and on any computer I'm working on I issue a single command and everything is handled for me. Meanwhile, on one of the C++ projects I've been working on, it turned out that I cannot build it on my server, because one of the libraries I'm using only worked with clang17, which my desktop OS provides, but the Debian I'm using on my server ships clang16, and the library was not compatible with the earlier version of some C++ implementation detail; meanwhile Arch on my desktop updated to clang18, which also broke the library in some fashion. So now I'm sitting here with two systems, one where I want to deploy my software and one where I want to develop it, both of which are completely defunct and unable to build my project anymore. Now I have to figure out how to build the specific version of clang on both systems and ensure I override a bunch of environment variables when configuring the builds, and then do the same on every new computer I'm developing or deploying on - with a proper tool I could just tell the project file to "use this compiler, with this version, with this standard" and things would just work. Some people will tell you "yeah bro, just use Docker with this and that and you will have a reproducible build system everywhere", but the thing is, I do not want to learn a completely unrelated tool and spend hours writing scripts just to be able to continue working on my project, when in any other programming language (like Go, Rust, JS) I can just install the runtime, clone the repo, run a command, and everything is handled for me seamlessly, like it should be in 2024.
The problem for me is a "political" one, not a matter of convenience: When I choose a linux distro I implicitly trust the distro maintainers to not backdoor the liveCD, so I might as well trust them to maintain packages transparently. If something happens upstream, we expect the distro maintainers to patch out undesirable behavior, integrate changes into the system as a whole or warn us of changes. Most distros are the same in functionality: the choice of a certain distro is mostly a choice of which political institution (such as a business or non-profit) that we trust to maintain the interoperability of the OS.
Languages need to be more agnostic than a package manager requires because I should not have to rope another organization into my trust model.
Cargo already goes too far in encouraging a single repository (crates.io) for everything through its default behavior. Who maintains crates.io? Where is the transparency? This is the most important information the user should know when deciding to use crates.io, which is whether or not they can trust the maintainers not to backdoor code, and it is rarely discussed or even mentioned!
The default cargo crate (template?) encourages people to use permissive licensing for their code. So that is an example where you are already making implicit political decisions on behalf of the ecosystem and developers. That is alarming and should not be for the language maintainers to decide at all.
In C/C++ you have a separation of the standard from the implementation. This is really what makes C/C++ code long-lived, because you do not have to worry about the standard being hijacked by a single group. You have a standard and multiple competing implementations, like the WWW. I cannot encourage the use of Rust while there is only a single widely-accepted implementation.
The problem with that is that no Linux distro maintainer will ever put effort into maintaining every version of every library and compiler perpetually for one specific, seemingly random, programming language (or at least, reasonably, within a few major versions including all the minor releases in between). With a tool that versions dependencies and allows for, say, a git-based upstream with tag-versioned releases, you can pick any specific version and expect things to just work. Managing library code for a specific programming language, whatever the language, does not seem like the responsibility of an operating system; if anything, the package manager from your OS should just supply the tool that manages the said language (as you currently can with npm, cargo or go). That also doesn't touch the topic of making things work across different platforms: sure, maybe you found a way to solve this issue in your imaginary Linux distro, but how do you solve the problem for a co-developer who uses Windows, or macOS?
Additionally, you do not have to necessarily enforce these things on the language level, the standard and the tooling could live as two independent projects coming from the same entity. You could still use the compiler and the libraries from your OS, and build the code like that, or you could just reach out to an optional standardized tool that serves as a glue for all the external tools in a standardized way.
Yes, there are a lot of valid concerns with this approach as well, but personally for me, as a frustrated C++ developer who is most likely going to keep using the language for a decade to come, all the other languages I mentioned in my previous post have addressed what is my biggest point of frustration with C++, so it's definitely an issue that could be solved. Many have tried to do it independently, but due to personal differences, no funding, and different ideas of what the scope of such tooling should be, we ended up with a very fragmented ecosystem of tools, none of which has to date been able to fully address an issue that other languages have solved.
I'd recommend using upstream apt llvm repos if you are using Debian or debian-derivatives like Ubuntu, to make sure you have the same compiler everywhere.
Specifications for package interchange are absolutely essential, which is distinct from language endorsed package managers.
Python doesn't have a language package manager, you're free to use pip or poetry or uv or whatever, but it does have PEP 517/518, which allow all Python package managers to interact with a common package ecosystem which encompasses polyglot codebases.
C++ is only starting to address this problem with efforts like CPS. We have a plethora of packaging formats, Debian, pkg-config, conan, CMake configs, but they cannot speak fluently to one another so the package ecosystem is fractured, presenting an immense obstacle to any integration effort.
> Python doesn't have a language package manager, you're free to use pip or poetry or uv or whatever, but it does have PEP 517/518, which allow all Python package managers to interact with a common package ecosystem which encompasses polyglot codebases.
This is a long-standing pain point. LWN has a series of reports covering this, one of which is: https://lwn.net/Articles/920832/
But like…why?
Let’s say we make a “thing” which contains packages for all participating languages.
98% of the time, aren’t users just going to go “filter down to my language” and just continue what they’re doing, except with a somewhat worse overall experience, depending on whatever the “lowest common denominator” API + semantics we use for this shared package management solution.
Multi-language build systems already exist, which happily serve the needs to those projects which find themselves needing cross-language (+distributed) builds. Could there be some easier versions of these? Sure, but I don’t feel like “throw everyone in the same big box” is the solution here.
Interesting point, and I'm inclined to agree with your main point. I don't think the OS level is preferable, however:
Point 1: I do not want my program to only run on only one OS, or to require custom code to make it multi-platform.
Point 2: What if there's no OS?
>Point 1: I do not want my program to only run on only one OS, or to require custom code to make it multi-platform.
To run on only one OS at build time? I usually just set up cross-compilers from linux if I am making cross-platform C/C++ code.
>Point 2: What if there's no OS?
You can use a system like bitbake I think.
I think the term "package management system" is a bit over broad a term to talk about.
If we are talking about global shared dependencies, sure it may belong in the OS.
If we are talking about directly shared code, it may as well belong in the language layer.
If we are talking about combining independent opaque libraries, then it might belong in a different "pseudo os" level like NPM.
> package management belongs to the os
It clearly doesn't except if you're a fan of dll hell and outdated packages.
the solution to DLL hell is to patch the applications to all use the same version of the library.
Oh but of course!
The solution to…a problem created directly by a specific approach is to…do even more work ourselves to try and untangle ourselves? And just cross our fingers and just _hope_ that every app/library is fully amenable to being patched this way?
Alternatively, we could realise that this isn’t really feasible at the scale that the ecosystem operates at now, and that instead of taking an approach that requires us to “do extra work to untangle ourselves” we should try and…not have that problem in the first place.
> The solution to…a problem created directly by a specific approach is to…do even more work ourselves to try and untangle ourselves?
The solution is to be more professional. DLL hell comes from libraries that break compatibility: serious libraries should not break compatibility, or at least not often. Then when they do and you happen to have the issue, it's totally fair to go patch the library you depend on that depends on the breaking lib. Even in proprietary software.
The modern way is to use ZeroVer [1] and language package managers that pull hundreds of dependencies in the blink of an eye. Then asking that people compile everything themselves or use the one system deemed worthy of support (usually Windows and the very latest Ubuntu). And of course not caring about security one bit.
I don't think it's unreasonable to have a system where every program uses the same version of a library.
>And just cross our fingers and just _hope_ that every app/library is fully amenable to being patched this way?
It requires some foresight in designing the application, and whether or not you even choose to use that application in the first place. We should strive to decrease the complexity of the system as a whole. The fact that packages are using different versions of the same library in the first place is a canary and the system should disincentivize that use case to some extent. Using static libraries or a chroot or a sandbox for everything is sweeping the problems under the carpet.
>taking an approach that requires us to “do extra work to untangle ourselves” we should try and…not have that problem in the first place.
I would prefer a system that allows you to link every application to the same library as a default, but also allows for some per-application override, perhaps by using symlinks. That would cover the majority of use cases. But I do not think that dynamic linking is generally in vain.
In my own projects, I try to rely on static linking as much as possible, so I understand your perspective as a developer. But as a user I do not want programs to have their own dependencies separate from the rest of the system.
> I don't think it's unreasonable to have a system where every program uses the same version of a library.
I really think it is. Even at the scale of a single app it may sometimes make sense to have multiple versions of a same library, if for instance it implements a given algorithm in two different ways and both ways have useful properties
I have seen this (linking with multiple versions of the same library) for maintaining backwards compatibility, for example to support parsing a data file from a previous version, but never for selecting different algorithms.
Then shouldn't these APIs be exposed as different libraries?
Could a more streamlined “conception” of something like Gentoo fix this?
Applications ship their lock files + version constraints. Gets merged into a user/os level set of packages. You update one package, OS can figure out what it has to rebuild and goes off and does that.
Still shit-out-of-luck for anything proprietary, and it’s still super possible for users to end up looking at compile failures, but technically fits the bill?
This sets up an untenable N*M explosion:
Will the GhostBSD maintainers pin the right version of Haskell's aeson package?
Will the Fedora Asahi devs stay on top of the latest Ocaml TLS developments?
Will MS package PureScript's code for DOM manipulation?
Agreed. At least, languages should not require its own package management system to be used. There should be a way to invoke the compiler or interpreter without involving that language's own package management system, so that something else (like Bazel) can build on top. Fortunately, most common languages are all like that. You can invoke rustc without cargo. You can use python without pip. You can use javac without maven.
Disagree completely. OS package managers are one of the biggest sources of problems.
Basically, once you have an OS level package manager, you have issues of versioning and ABI. You have people writing to the lowest common denominator - see for example being limited to the compiler and libraries available on an old Red Hat version. This need to maintain ABI compatibility has been one of the hugest issues with evolving C++.
The OS package manager ends up being a Procrustean bed forcing everything into its mold whether or not it actually fits.
Also, this doesn't even have the issue of multiple operating systems and even distros which have different package managers.
Rust and Go having their own package managers has helped greatly with real world usage and evolution.
This is a weird opinion, but I think that the OS package manager's complexity is largely owing to the Unix directory structure, which just dumps all binaries in /bin, all configuration files in /etc, and all libraries in /lib. It comes from a time when everything on the OS was developed by the same group of people.
By dumping all the same file types in massive top-level directories, you need a separate program (the package manager) to keep track of which files belong to which packages and dealing with their versions and ABI and stuff. Each package represents code developed by a specific group with a certain model of the system's interoperability.
GoboLinux has an interesting play on the problem by changing the directory structure so that the filesystem does most of the heavy lifting.
[deleted]
Honestly I don’t know why more languages don’t just adopt e.g. npm, maven or NuGet. They’re largely language independent at the binary level anyway.
npm, maven, and NuGet have caused me far more problems in trying to reproduce builds than the OS package manager ever will.
The author doesn’t appear to consider the use of binary-only (typically commercial licensed) libraries. There’s still a need for an ABI and “modern tooling” isn’t a magic wand.
I'd guess that the majority of such binary-only libraries use C ABI interfaces. The entire Windows ecosystem of COM modules works on top of C ABIs.
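A hedged sketch of what such a C-ABI surface usually looks like (invented names, not any particular vendor's SDK): only extern "C" functions and opaque handles cross the boundary, so no C++ or boost types leak into the interface.

    // thing_sdk.h -- illustrative only, not a real vendor header.
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct thing* thing_handle;   // opaque handle: layout never exposed

    thing_handle thing_create(const char* config);
    int          thing_send(thing_handle h, const void* data, unsigned long len);
    void         thing_destroy(thing_handle h);

    #ifdef __cplusplus
    }  /* extern "C" */
    #endif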
Until the moment when you are forced to use a third-party SDK with std:: and boost:: (yeah, WTF?) types in the interface.
Oh, and you can't avoid that, say, you are working on a trading bot and that's the only "supported" way to connect to an exchange.
In the end people usually just reverse engineer and reimplement to get rid of such a cursed blob. Fortunately, it works - the vendor can't effectively push all clients to update their SDK anyway, so all wire protocols are infinitely backward compatible.
I think he has this about right. The project I contribute to (and no, I'm not a massive contributor) is LibreOffice and it is a C++ codebase. It has a decent build system that is easy for anyone to run out of the box. It uses modern C++17+ code, and though it has a lot of legacy code, it is being constantly modified by people like Noel Grandin via clang plugins (along with a lot of manual effort).
This code was originally developed in the late 1980s.
A good packaging tool would have helped a lot.
I'm stoked to hear they're on C++17 now.
When I contributed to LibreOffice (GSoC 2012) they were still on C++03 !
Well, can't really blame them in 2012, especially since C++11 did bring an ABI break. Looks like they keep it fresh, although C++17 is getting a bit dated. Migration from C++17 to 20 or even 23 is probably a breeze, though, compared to migrating 03 to 11.
[flagged]
Every study on security vulnerabilities has shown that "just don't screw up bro" doesn't scale.
Even if we ignore the absolute clown move of having no bounds checks by default (and std::span doesn't have them at all), it's very easy to get into trouble with anything involving C++ iterators and references.
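To make the std::span point concrete, a small invented example: `operator[]` on a span performs no bounds check, so an out-of-range index is silent undefined behavior rather than an error.

    #include <array>
    #include <span>   // C++20

    int read_past_end(std::span<const int> s) {
        return s[s.size()];   // one past the end: no exception, no check, just UB
    }

    int main() {
        std::array<int, 3> a{1, 2, 3};
        return read_past_end(a);   // compiles cleanly on current compilers
    }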
The idea that RAII covers "99% of your ass" is "the low-IQ level statement".
Temporal safety is the primary hard problem from a memory safety standpoint, and RAII does nothing to solve it, at least once a memory allocation crosses abstraction boundaries.
What's an example? I'm just a hobbyist when it comes to c++
No, if you have temporal safety issues you didn't understand RAII. That is pretty much the whole point of RAII.
If you want anyone to believe you, you're going to have to give more than just a blank assertion. Can you give at least a sketch of your reason for your claim?
Reasoning is, if your objects outlive the scope of your class, then they most likely belong to a class that's higher in the hierarchy (they already do, de facto).
Please explain how you would solve the iterator invalidation problem using only C++ and RAII. Thanks.
This whole thread is about temporal safety so you're out of luck.
RAII only helps with 1 of 4 primary cases of safety. RAII deals (badly) with temporal safety, but not spatial safety (bounds errors etc.), safe initialization (use before initialization), or undefined behavior (overflow/underflow, aliasing, etc.).
Use-after-free (or reference/iterator invalidation in general) is the main issue. RAII doesn't help there at all. RAII helps with deterministically cleaning up resources, which is important, but barely related to safety.
How does RAII not help with safe initialization? It's right in the name.
>RAII deals (badly) with temporal safety
>safe initialization (use before initialization)
These two are solved by proper use of RAII.
But you have a point with UB. That's always been an issue, though, it's part of the idiosyncrasies of C/C++; all languages have their equivalent of UB.
As a relative newcomer to C++, I have found RAII to be fine for writing in object-oriented style. But I would only subject myself to the suffering and complexity of C++ if I really wanted excellent performance, and while RAII does not generally have a runtime cost by itself, engineering for full performance tends to exclude the things that RAII makes easy. If you manage memory via arenas, you want to make your types trivially destructible. If you don't use exceptions, then RAII is not necessary to ensure cleanup. In addition, use of destructors tends towards template APIs that generate lots of duplicate code, when an API that used a C-style function pointer for generic disposal would have produced much smaller code.
And C++'s object model can add additional complexity and sources of UB. In C++20 previously valid code that reads a trivially destructible thread_local after it has been destroyed became UB, even though nothing has actually happened to the backing storage yet.
As an old-timer, I think you have some serious misconception about how RAII works, and what it does for you.
> Arena management
There's nothing that stops you from using arena allocators in C++. (See pmr allocators in C++17 for handling complex non-POD types).
> The cost of RAII
You're going to have to clean up one way or another. RAII can be zero-overhead, and usually generates less code than the C idiom of "goto Cleanup".
> Use of destructors leads toward template APIs.
Not getting that. Use of destructors leads to use of destructors. Not much else.
> If you don't use exceptions....
Why on earth would you not use exceptions? Proper error handling in C is a complete nightmare.
But even if you don't, lifetime management is a huge problem in C. Not at all trivial to clean things up when you're done. Debugging memory leaks in C code was always a nightmare. The only thing worse was debugging wild memory writes. C++ RAII: very difficult to leak things (impossible, if you're doing it right, which isn't hard), and if it ever does happen almost always related to using C apis that should have been properly wrapped with RAII in the first place.
Granted, wrapping C handles in RAII was a bit tedious in C++98; but C++17 now allows you to write a really tidy AutoClose template for doing RAII close/free of C library pointers. Not in the standard library, but really easy to roll your own:
    // call snd_pcm_close when the variable goes out of scope.
    using snd_pcm_T = pipedal::AutoClose<snd_pcm_t*,snd_pcm_close>;
    snd_pcm_T pcm_handle = snd_pcm_open(....);
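For anyone curious what such a wrapper might look like, here is a rough C++17 sketch (a guess at the shape; the actual pipedal::AutoClose may well differ):

    template <typename Handle, auto CloseFn>
    class AutoClose {
    public:
        AutoClose() = default;
        AutoClose(Handle h) : h_(h) {}   // implicit, so `AutoClose h = open_fn(...)` reads naturally
        AutoClose(const AutoClose&) = delete;
        AutoClose& operator=(const AutoClose&) = delete;
        AutoClose(AutoClose&& other) noexcept : h_(other.h_) { other.h_ = Handle{}; }
        AutoClose& operator=(AutoClose&& other) noexcept {
            if (this != &other) { reset(); h_ = other.h_; other.h_ = Handle{}; }
            return *this;
        }
        ~AutoClose() { reset(); }

        Handle get() const { return h_; }
        void reset() {
            if (h_ != Handle{}) { CloseFn(h_); h_ = Handle{}; }
        }

    private:
        Handle h_{};
    };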
> C++ 20 undefined behavior of a read-after-free problem.
That's not UB; that's a serious bug. And C's behavior would also be "UB" if you read after freeing a pointer.
> equivalent to people unable to grasp type coercion on JS and thus blaming the language for it (literally just use '===' and stop bitching about it).
They're not even remotely equivalent. A single eslint rule has immediately and permanently fixed this in every Javascript project I've worked on, both for me and my coworkers' code. RAII helps, but in C++, no amount of linters and language features can fully protect you.
[deleted]
Just give up, they’ll never get it.
What's to get? It's an unsupported claim with substantial counterexamples in the form of every large C++ project. If everyone in the world gets RAII wrong then it doesn't matter what it's theoretically capable of.
When it comes to the current schism on how to improve the safety of C++ there are largely two factions:
* The Bjarne/Herb [1] side that focuses on minimal changes to the code. The idea here is to add different profiles to the language and then [draw the rest of the fucking owl]. The big issue here is that it's entirely unclear how they will achieve temporal and spatial memory safety.
* The other side is represented by Sean Baxter and his work on Safe C++ [2]. This is basically a wholesale adoption of Rust's semantics. The big issue here is that it's effectively introducing a new language that isn't C++.
Google decided to pursue Carbon and isn't a major player in either of the above efforts. Last time I checked, that language is not meant to be memory safe.
[1] https://github.com/BjarneStroustrup/profiles [2] https://safecpp.org/draft.html
The set of developers who say "I want to implement this logic as a state machine" is MUCH larger than the set of developers who say "I should make sure I fully understand every possible state and edge case ahead of time before making a state machine!"
Couldn't this be said about any alternative solution? I fail to see how this is specific to state machines.
What do you suggest instead of a state machine?
Like properly model a domain in domain terms?
Please describe "normally". State machines can turn into nightmares, just like any design pattern used poorly.
State machines don't have syntax for "transition here when event is encountered no matter what state you are in" so the whole diagram becomes a spaghetti mess if you have a lot of those escape hatches.
State machines don't have a native syntax in C++ at all, so you can structure them however you want. It's easy to structure a state machine, if needed, so that all (or some) states can handle the same event in the same way.
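A rough sketch of one such structure (state and event names are made up for illustration): a base state provides the "same way in every state" handling, and concrete states override only what differs.
#include <initializer_list>
#include <iostream>
#include <memory>

enum class Event { Tick, Abort };

struct State {
    virtual ~State() = default;
    // Default handling shared by all states: Abort always returns to Idle.
    virtual std::unique_ptr<State> onEvent(Event e);
    static std::unique_ptr<State> makeIdle();
};

struct Idle : State {};

struct Running : State {
    std::unique_ptr<State> onEvent(Event e) override {
        if (e == Event::Tick) { std::cout << "tick while running\n"; return nullptr; }
        return State::onEvent(e);  // fall back to the shared handling
    }
};

std::unique_ptr<State> State::onEvent(Event e) {
    if (e == Event::Abort) { std::cout << "abort -> Idle\n"; return makeIdle(); }
    return nullptr;  // nullptr means "stay in the current state"
}

std::unique_ptr<State> State::makeIdle() { return std::make_unique<Idle>(); }

int main() {
    std::unique_ptr<State> current = std::make_unique<Running>();
    for (Event e : {Event::Tick, Event::Abort})
        if (auto next = current->onEvent(e)) current = std::move(next);
}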
I believe HSMs can model this, but don't quote me. :)
Yes, of course in theory nested state machines should be able to model this. I feel like adding more complexity and bending the rules is a bit of a concession.
Back in the day we implemented HSM helper classes in about 500 LoC and generated them from Enterprise Architect. No need to write a GUI yourself, but better to have a visual for documentation and review. It worked very well until we replaced EA with docs-as-code; now I miss having a nice simulator and modeler for that workflow.
They can be. Or they can be... less easy.
Imagine you have an informally-specified, undocumented, at-least-somewhat-incomplete state machine. Imagine that it interacts with several other similar state machines. Still easy to reason about?
Now add multithreading. Still easy?
Now add locking. Still easy?
Cleanly-done state machines can be the cleanest way to describe a problem, and the simplest way to implement it. But badly-done state machines can be a total mess.
Alas, I think that the last time I waded in such waters, what I left behind was pretty much on the "mess" side of the scale. It worked, it worked mostly solidly, and it did so for more than a decade. But it was still rather messy.
> Imagine you have an informally-specified, undocumented, at-least-somewhat-incomplete state machine. Imagine that it interacts with several other similar state machines. Still easy to reason about?
You think that developers that wrote an informally-specified, undocumented, at-least-somewhat-incomplete state-machine would have written that logic as a non-state-machine in a formally-specified, documented and at-least-somewhat-complete codebase?
State-machines are exceptionally easy to reason about because you can at least reverse-engineer a state-diagram from the state-machine code.
Almost-a-state-machine-but-not-quite are exceptionally difficult to reason about because you can not easily reverse-engineer the state-diagram from the state-machine code.
But that is just true of any problem-solving/programming technique.
In general, state/event-machine transition tables and decision tables are easier to comprehend as a way of structuring code than ad hoc approaches, and far easier than poorly understood pattern-based techniques.
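As a toy illustration of the transition-table style (the states and events here are invented for the example):
#include <array>

enum class St { Idle, Running, Error };
enum class Ev { Start, Stop, Fault };

struct Row { St from; Ev ev; St to; };

// The whole machine in one readable table: (state, event) -> next state.
constexpr std::array<Row, 4> kTable{{
    {St::Idle,    Ev::Start, St::Running},
    {St::Running, Ev::Stop,  St::Idle},
    {St::Running, Ev::Fault, St::Error},
    {St::Error,   Ev::Stop,  St::Idle},
}};

constexpr St step(St s, Ev e) {
    for (const Row& r : kTable)
        if (r.from == s && r.ev == e) return r.to;
    return s;  // unlisted (state, event) pairs leave the state unchanged
}

static_assert(step(St::Idle, Ev::Start) == St::Running);
static_assert(step(St::Error, Ev::Fault) == St::Error);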
I think the article is pretty interesting. There are so many more interesting takes than just another boring Hacker News moan about Google.
Google has such proficient programmers that they had to invent a language for dummies (Go) for them not to get lost.
The farther they step from languages for professionals, the better!
Any other incompetent software developer downvoting the truth?
To the people who work on C++ standards: I approve of the current C++ trajectory and please ignore all of the online noise about "the future of C++." To anyone that disagrees severely with the C++ trajectory as stated, please just consider another language, e.g. Rust. I don't want static lifetime checking in C++ and if you want static lifetime checking, please use Rust. I am not a government contractor, if you are a government contractor who must meet bureaucratic risk-averse government requirements, please use Rust. I have an existing development process that works for me and my customers, I have no significant demand for lifetime checking. If your development process is shiny and new and necessitates lifetime checking, then please use Rust. To Rust advocates, you can have the US government and big tech. You can even have Linux. Just leave my existing C++ process alone. It works and the trade offs we have chosen efficiently accomplish our goals.
C++ has lifetime rules just like Rust. They're simply implicit in the code and not enforced by the compiler. Do you prefer the uncertainty of silent miscompilations and undefined behavior to upfront compiler errors?
You're already using a language with a strong type system, so it's confusing to me why you would choose to draw the line here.
> Do you prefer the uncertainty of silent miscompilations and undefined behavior to upfront compiler errors?
Yes because then I don't have to spend hours writing esoteric spaghetti code to prove something to the compiler that is trivially known to be true. Your error is assuming static lifetime checking is free. As an engineer, I use judgement to make context-dependent trade offs.
If you like playing the compiler olympics, or your employer forces you to, please use Rust.
I've found that often when I am writing esoteric spaghetti Rust code... I need to start thinking about what I am trying to do! Most of the time it's a bad idea :)
> As an engineer, I use judgement to make context-dependent trade offs.
Well said.
This is why I am firmly in the Stroustrup camp of backward compatibility/zero overhead/better-C/etc. goodness of "old C++". I need to extend/maintain/rewrite tons of such codebases and that needs to be as painless as possible. The current standards trajectory needs to be maintained.
The OP article is a rather poor one with no insights but mere hoopla over nothing.
Thank you for this. C++ should NOT try to be Rust. I find modern C++ really nice to program in, for the work I'm doing - 3D graphics. The combination of very powerful abstractions and excellent performance is what I'm looking for. I'm more than willing to endure the perceived lack of safety in the language.
While programming in Rust, I've never thought to myself, "man, this would be so much easier to express in C++". I've plenty of times thought the reverse while programming in C++ though.
Edit: except when interfacing with C APIs.
Then you must be avoiding situations that traditionally use OOP
Most kinds of OOP can be expressed idiomatically in Rust. The big exception is implementation inheritance, which is highly discouraged in modern code anyway due to its complex and unintuitive semantics. (Specifically, its reliance on "open recursion", and the related "fragile base class" problem)
This is interesting because I'm writing quite a bit of embedded Rust, and I always run into the limitations of the very barebones const generics. I always wish they had half the expressiveness of C++ constexpr and templates.
Win some, lose some though, as the overall development workflow is lightyears ahead of C++, mostly due to tooling
Rust generics are not intended as a one-to-one replacement for C++ templates. Most complex cases of template-level programming would be addressed with macros (possibly proc macros) in Rust.
The expressiveness of const generics (NTTPs) in C++ wouldn't go away if it adopted lifetime annotations and "safe" scopes. It's entirely orthogonal.
Rust decided to have more restrictive generic programming, with the benefit of early diagnostic of mistakes in generic code. C++ defers that detection to instantiation, which allows the generics to be more expressive, but it's a tradeoff. But this is an entirely different design decision to lifetime tracking.
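A tiny example of that tradeoff (hypothetical code): the body of a C++ template is only checked when someone instantiates it.
// C++ checks a template body only at instantiation time.
template <typename T>
T add(T a, T b) { return a + b; }   // no stated constraint that T has operator+

struct NoPlus {};

int main() {
    add(1, 2);                       // fine
    // add(NoPlus{}, NoPlus{});      // the error would only appear here, at instantiation
}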
That's actually quite interesting because this is not an inherent limitation of Rust, and it is definitely planned to be improved. And AFAIK, today (as opposed to last years) it is even being actively worked on!
On the contrary, why would I not want these things in C++ if I'm developing every project with -fsanitize=address,undefined to catch these types of errors anyway?
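For anyone unfamiliar, both GCC and Clang accept these flags; a typical invocation looks something like this (file names are placeholders):
g++ -std=c++17 -g -fsanitize=address,undefined -o app main.cpp
./app    # ASan/UBSan then report out-of-bounds accesses, use-after-free, and UB at runtime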
OK. Please, just keep using your current C++ standard. But we will go on to use the new one, with all the features we want to use.
What I don't understand is why you demand that C++ evolution be halted in a clearly suboptimal position so you don't need to change your processes. Just use the version of C++ that meets your needs; you clearly neither want nor need new developments. You are fine with being locked into bad designs for hash maps and unique_ptr due to the (newly invented, in 2011/13) ABI stability being made inviolable, you clearly need no new developments in usability and security.
So why not be honest and just use C++03, or 11, or whatever it is that works for you, and let the rest of the ecosystem actually evolve and keep the language we invested so much effort into as a viable alternative? There's zero benefit, except to MS who want to sell this year's Visual Studio to all the companies with 80's-era C++...
But why say so under a pseudonym, and avoid clarifying what products the process is for? Even if it's open source and not productized, surely the interest goes beyond the US president's executive order regarding memory safety.
I don't disagree with the implication that the order relies on oversimplification and misunderstanding of the ask, but surely it and Rust don't mean, as a theme, that the C++ standards must not take anything from them, no?
Please don't shame people for using pseudonyms on here, regardless of whether you disagree with their concrete point. It's nice to have a place where people don't have to think about how their friends, family or colleagues will react before posting something.
> But why say so under a pseudonym
That's a rather odd complaint, coming from a pseudonym.
The parts of the government that think everything should be written in a memory-safe language (like Rust) are the same parts that already write everything in Java. Most of the high-end systems work is in C++, and that is the type of software where lifetimes and ownership are frequently unknowable at compile-time, obviating Rust's main selling point.
It's not a hard dichotomy. Almost all of the rules Rust imposes are also present in C++, enforcement is simply left up to the fallible human programmer. Frankly though, is it that big a deal whether we call it unique_ptr/shared_ptr or Box/Arc if a lifetime is truly unknowable?
Rust shines in the other 95% of code. I spend some time every morning cleaning up the sorts of issues Rust prevents that my coworkers have managed to commit despite tooling safeguards. I try for 3 a day, the list is growing, and I don't have to dig deep to find them. My coworkers aren't stupid people, they're intelligent people making simple mistakes because they aren't computers. It won't matter how often I tell them "you made X mistake on Y line, which violates Z rule" because the issue is not their knowledge, it's the inherent inability of humans to follow onerous technical rules without mistakes.
Something that Rust got _really_ right: Editions. And not just that they exist, but that they are specified per module, and you can mix and match modules with different Editions within a bigger project. This lets a language make backwards incompatible changes, and projects can adopt the new features piecemeal.
If such a thing came to C++, there would obviously be limitations around module boundaries, when different modules used a different Edition. But perhaps this could be a way forward that could allow both camps to have their cake and eat it too.
Imagine a world where the main difference between Python 2 and 3 was the frontend syntax parser, and each module could specify which syntax ("Edition") it used...
But Editions can exist only because Rust intrinsically has the concept of a package, which naturally defines the boundary. C++ has nothing. How do you denote that a.cpp is of the cpp_2017 edition while b.cpp is cpp_2026? Some per-file comment line at the top of each file?
C++ is a mess in that it has too much historic baggage while trying to adapt to a fiercely changing landscape. Like the article says, it has to make drastic changes to keep up, but such changes will probably kill 80% of its target audiences. I think putting C++ in maintenance mode and keep it as a "legacy" language is the way to go. It is time to either switch to Rust, or pick one of its successor languages and put effort into it.
Rust doesn't have the concept of package. (Cargo does, but Cargo is a different thing from Rust, and it's entirely possible to use Rust without Cargo).
Rust has the concept of _crate_, which is very close to the concept of compilation unit in C++. You build a crate by invoking `rustc` with a particular set of arguments, just as you build a compilation unit by invoking `g++` or `clang++` with a particular set of arguments.
One of these arguments defines the edition, for Rust, just like it could for C++.
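Concretely, the analogy would be something like the following illustrative invocations:
rustc --edition=2021 --crate-type=lib foo.rs     # edition chosen per rustc invocation
g++ -std=c++20 -c bar.cpp                        # standard chosen per translation unit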
That only works for C++ code using C++20 modules (i.e. for approximately nothing). With textual includes, you need to be able to switch back and forth the edition within a single compilation unit.
> I think putting C++ in maintenance mode and keep it as a "legacy" language is the way to go
I agree but also understand this is absolutely wishful thinking. There is so much inertia and natural resistance to change that C++ will be around for the next century barring nuclear armageddon.
I don't think even that would suffice. :)
Cobol's still around. Just because a language exists doesn't mean that we have to keep releasing updated specifications and compiler versions rather than moving all those resources to better languages.
COBOL's most recent standard was released in 2023, which rather ruins your point.
> And not just that they exist, but that they are specified per module
Nitpick: editions are specified per crate, not per module.
---
Also note that editions allow mostly syntactic changes (adding/removing syntax or changing the meaning of existing syntax); however, they are greatly limited in what can be changed in the standard library, because ultimately that is a crate dependency shared by all other crates.
There was a similar proposal for C++, using rust’s original names: epochs. It stalled out.
They should call them 'eras'. Then they can explain that epochs did not lead to a new era in the language, but eras will mark the next epoch of C++.
If you're comparing Herb Sutter and the Google people at the standard committee, there is one thing that was clear: Herb was good at getting people to agree on compromises that served everybody, while Google was mostly claiming they knew better than everybody else and pushing their own agenda.
One thing I cannot stand about C++ is the fractured nature of everything. Compilers, build tools, package management, etc... It feels like you need to be a wizard just to get a project compiling and start writing some code.
The worst part is when you want to bring along people who are not as much of a wizard as you are. I've been prototyping a multiplayer online video game with MMO-like sharding for a while now, mostly the backend and core stuff, and wanted to get two of my friends onto the project to develop the gameplay logic, which is largely done through a dynamic scripting language. But some features (say, ones I did not foresee needing yet) require source changes to expose the APIs to the scripting language. Now, these guys are capable of making those changes, but the onboarding process for a single potential co-developer is such a pain. I basically have to explain to them how to download a compiler, a package manager like vcpkg (which wasn't even that usable for these types of things pre-versioning, and is still not working properly - e.g. trying to pin the LuaJIT version to 2.0.5 for VM bytecode compatibility will attempt to build LuaJIT with cl.exe on Linux), a build system like CMake, and so on. Then I have to guide them through all the steps to get the compiler, the build system, and the libraries working, and then hope that in the end they will actually work and not force you to spend an entire day over remote desktop software trying to get them to become productive.
Include more of your dependencies in the repo and build them as part of the ordinary build process. Now a package manager does not need to get involved.
Manually copy-pasting source trees around sounds like such an outdated idea from decades ago on how to approach dependency management in a modern programming language. Not to mention that you then have to hook them up to the build system you are using, and not all of them will work out of the box with the one you are using for your project. Sure, if you are using CMake and your dependency uses CMake, you can add a subproject, but how do you deal with it when they're mixed-and-matched, aside from rewriting the builds for every dependency you're pulling in, or manually writing glue shell scripts to build them independently and put them into a directory? How do you then ensure the said shell script works across different platforms? There are way too many issues with that approach that are solved in other languages through a standardized project management tool.
You don't have to actually copypaste. You can use CMake and FetchContent/CPM. You can specify custom build commands or inline declare a project for anything small that you pull in that doesn't use CMake (you can call add_library with a glob on the folder FetchContent pulled in, for example - I've done so here https://github.com/basis-robotics/basis/blob/main/cpp/CMakeL... for a header only lib). For large external dependencies that are either very slow to compile or for some reason aren't CMake, reach for the system package manager or similar. If you want to be really cross platform and are supporting Qt/wxwidgets/etc, vcpkg+CMake+clang is solid combo, if a bit slow and a bit disk space heavy with the build cache.
Have you taken a look at CPM ? https://github.com/cpm-cmake/CPM.cmake . It makes CMake project management easy - no need for separate package manager tool.
And yet that's the right approach. It's not really copying but rather onboarding.
You don't want to depend on a third-party hosting the code, so you need to copy it, and pin it to a specific version. You might also need to patch it since you'll be using it and most likely will run into problems with it.
Using third-party code means taking ownership of all the problems one might encounter when trying to use it in your project, so you might as well just adapt it for it to work with your tools and processes.
If you use a modular system this is essentially just maintaining a fork.
Can't you just put that into a docker container?
This is more of a workaround than a solution; see my other comment in this thread.
you DO need to be a wizard to launch a large C++ project.
Yes, languages that are beginner friendly are ... friendlier. Yes, languages that stick to one or a small number of programming paradigms are friendlier. But if you want the "flexible efficiency and raw power of C" and "something higher level than C", C++ is your baby.
Maybe it would be better if we all used Java, Rust, and Go, but C++ sings its siren von Neumann song to the wizards, and there will always be wizard musicologists who steer their projects toward those rocks and, when they have just enough wax in their ears, they sail right past the rocks and come out the other side of the straits leading the rest of the fleet.
You can choose to follow them or not, for there's no shame in coming in 4th.
Even the wizards are moving to Rust/Zig since C++ stdlib performance is becoming terrible thanks to the ABI-frozen-till-heat-death-of-the-universe decision. Even wizards don't want to build a stdlib of their own from scratch.
Feels like the committee was smoking weed that day in la-la land. You can ignore all the safety stuff from Sean Baxter, but saying no to performance on the altar of permanent, unspecified ABI backward compatibility - when such was never mentioned as a design goal of C++ - means it's "Goodbye C++" for a long, long list of orgs and "wizards". The ABI was NEVER specified formally by the C++ standard - so why bother sacrificing the world for its immortal existence?
C++ is NO longer the choice of language for greenfield native projects and the committee takes the full blame.
Really looking forward to Zig 1.0. I feel like C++ has become a language where professionals are fine with the historical grime, but for hobbyists and people who need C++ occasionally there is just no motivation in the community to make the language more ergonomic.
So much for the theory; then there is the hard reality of how the standard library is implemented, the variations across implementations, and how the ongoing ABI drama is preventing any performance improvements to it.
People just don't make mass changes to existing working code. Mostly they cannot. Even if the tooling was available, which it's not, it's also about reeducating their developers, who don't want to or can't change. And it's all cost with no benefit.
Except, allegedly, at Google. But is there any evidence they actually do this, e.g. in public code bases? Or is it just hype?
> Changing 1% of code on top of that is probably not really that much compared
Quite a few companies have millions and millions of lines of code. Changing 1% of it would mean changing more than 10K lines of code, perhaps even more than 100K. In much bigger code bases, where changing anything has a risk of breaking something — not just because you might make a mistake, but because your program is full of Undefined Behaviour, and changing anything might manifest latent bugs.
Given that, I'm not surprised people say that Sutter quote with a straight face.
Could be selection bias. Companies (or departments within companies) who are still actively developing their C++ code probably tend to hire more developers and consultants than companies who are doing minimal maintenance on their code base, and that might correlate well with the “two factions of C++” discussed here.
“Our code is an asset” ⇒ code kept up-to-date
“Our code is a burden, but we need it” ⇒ change averse
Are you referring to his book written 20 years ago or 25 years ago? "Customers with large [C++] code bases": there aren't that many of these. Vendors, government. With code bases that have stewards, not programmers.
> Nimble, modern, highly capable tech corporations that understand that their code is an asset. (This isn’t strictly big tech. Any sane greenfield C++ startup will also fall into this category.)
Oh I see, this is a fantasy.
Keyword is "sane". You can probably count all "sane greenfield C++ startups" on one hand.
The Rust people pursue "solidarity" as a virtue. They don't understand that factions are a way of life, so any sufficiently impactful technology will be "fractured" to some extent. This is a good thing. Unitarity, solidarity, and homogeneous philosophies are not, but they will have to learn that the hard way like everybody else.
Python similarly has 2-3 factions in my experience: teams doing engineering in Python and using all the modern tooling, linting, packaging, types, testing, etc; teams doing data science and using modern but different tooling (i.e. Anaconda); and teams that don't get onboard in any of the language health initiatives and are on unsupported language versions with no packaging, tooling, linting, etc.
Javascript/Node/Typescript has even more identifiable factions.
I think developing factions around these things is unfortunately normal as languages grow up and get used in different ways. Rust has arguably tried to stay away from this, but the flip side is a higher learning curve because it just doesn't let certain factions exist. Go is probably the best attempt to prevent factions and gain wide adoption, but even then the generics crowd forced the language to adopt them.
When you put it this way, personas might be a better term than factions.
Yeah I think that's a much friendlier term. I do think language ecosystems have a hard time, because on the one hand they should be aiming to be as useful as possible, which means doing more, on the other hand they have to acknowledge that any given user will likely not use all the language and that the rest of it may hinder them, which means doing less.
C++ does a lot, but has a big disengaged crowd, for many reasons, and that crowd will suffer from the push forward. Python and Node are similar.
Profiles aren't a mess because they're intended for legacy codebases instead of big tech monorepos. They're a mess because they're not a serious effort. There's no actual vision of what problems they're trying to solve or what the use cases are, or even what kind of guarantee profiles are going to make.
Ports of massive legacy codebases are possible and they happen. They can be extremely difficult, they take will and effort, but they can get done. The idea that you have to slow down the development of the language standard for people who won't port to the new version is weird: those people won't be updating compilers anyway.
How do I know this? I migrated a codebase of about 20m lines of C++ at a major investment bank from pre-ansi compilers to ansi conformance across 3 platforms (Linux, Solaris and Windows). Not all the code ran on all 3 platforms (I'm looking at you, Solaris) but the vast majority did. Some of it was 20 years old before I touched it - we're talking pre-STL not even just pre ansi. The team was me + one other dude for Linux and Solaris and me + one other different dude for windows, and to give you an idea the target for gcc went from gcc 2.7[1] to gcc 4[2], so a pretty massive change. The build tooling was all CMake + a bunch of special custom shell we had developed to set env vars etc and a CI/CD pipeline that was all custom (and years ahead of its time). Version control was CVS. So, single central code repo and if there was a version conflict an expert (of which I was one but it gives me cold sweats) had to go in, edit the RCS files by hand and if they screwed up all version control for everyone was totally hosed until someone restored from backup and redid the fix successfully.
While we were doing the port to make things harder there was a community of 667 developers[3] actively developing features on this codebase and it had to get pushed out come hell or high water every 2 weeks. Also, this being the securities division of a major investment bank, if anything screwed up real money would be lost.
It was a lot of work, but it got done. I did all my work using vim and quickfix lists (not any fancy pants tooling) including on windows but my windows colleague used visual C++ for his work.[4]
[1] Released in 1995
[2] Released in 2005
[3] yes. The CTO once memorably described it to me as "The number of the beast plus Kirat". Referring to one particularly prolific developer who is somewhat of a legend on Wall Street.
[4] This was in the era of "debugging the error novel" so you're talking 70 pages of ascii sometimes for a single error message with a template backtrace, and of course when you're porting you're getting tens of thousands of these errors. I actually wrote FAQs (for myself as much as anything) about when you were supposed to change "class" to "typename", when you needed "typedef typename" and when you just needed "typedef" etc. So glad I don't do that any more.
The main problem with bad C++ tooling is often the same: a modular system that relies on importing/exporting binaries, and then tracking binary versions when combining applications.
You should only track source versions and build things from source as needed.
Languages should not have a package management system. They all have an "all the world is my language" blind spot and fail hard when you have anything else. Sometimes you can build plugins in a different language, but they still assume the one true language is all you want.
package management belongs to the os - or at least something else.
Don't get me wrong, package management is a real problem and needs to be solved. I'm arguing against a language package manager; we need a language-agnostic package manager.
I think C++ is living proof that not having standard tooling around the language makes it a complete pain in the ass to use. With any other language that ships standard package management/tooling out of the box, I can just pin the versions, commit a file to the repository, and on any computer I'm working on I issue a single command and everything is handled for me. Meanwhile, on one of the C++ projects I've been working on, it turned out that I cannot build it on my server because one of the libraries I'm using only worked with clang17, which my desktop OS provides, but the Debian I'm using on my server ships clang16, and the library was not compatible with the earlier version of some C++ implementation. Meanwhile Arch on my desktop updated to clang18, which also broke the library in some fashion. So now I'm sitting here with two systems, one where I want to deploy my software and one where I want to develop it, both of which are completely defunct and unable to build my project anymore. Now I have to figure out how to build the specific version of clang on both systems and ensure I override a bunch of environment variables when configuring the builds, and then do the same on every new computer I'm developing/deploying on - with a proper tool I could just tell the project file to "use this compiler with this version with this standard" and things would just work. Some people will tell you "yeah bro just use docker with this and that and you will have a reproducible build system everywhere", but the thing is - I do not want to learn a completely unrelated tool and spend hours writing scripts just to be able to continue working on my project, when in any other programming language (like Go, Rust, JS) I can just install the runtime, clone the repo, run a command, and everything is handled for me seamlessly, like it should be in 2024.
The problem for me is a "political" one, not a matter of convenience: When I choose a linux distro I implicitly trust the distro maintainers to not backdoor the liveCD, so I might as well trust them to maintain packages transparently. If something happens upstream, we expect the distro maintainers to patch out undesirable behavior, integrate changes into the system as a whole or warn us of changes. Most distros are the same in functionality: the choice of a certain distro is mostly a choice of which political institution (such as a business or non-profit) that we trust to maintain the interoperability of the OS.
Languages need to be more agnostic than a package manager requires because I should not have to rope another organization into my trust model.
Cargo already goes too far in encouraging a single repository (crates.io) for everything through its default behavior. Who maintains crates.io? Where is the transparency? This is the most important information the user should know when deciding to use crates.io, which is whether or not they can trust the maintainers not to backdoor code, and it is rarely discussed or even mentioned!
The default cargo crate (template?) encourages people to use permissive licensing for their code. So that is an example where you are already making implicit political decisions on behalf of the ecosystem and developers. That is alarming and should not be for the language maintainers to decide at all.
In C/C++ you have a separation of the standard from the implementation. This is really what makes C/C++ code long-lived, because you do not have to worry about the standard being hijacked by a single group. You have a standard and multiple competing implementations, like the WWW. I cannot encourage the use of Rust while there is only a single widely-accepted implementation.
The problem with that is that no Linux distro maintainer will ever put effort into maintaining every version of every library and compiler perpetually for a specific, seemingly random programming language (or at least, reasonably, within a few major versions including all minor releases in between). With a tool that versions dependencies and allows for, say, a git-based upstream with tag-versioned releases, you can pick any specific version and expect things to just work. Managing library code for a specific programming language, whatever the language, does not seem like the responsibility of an operating system; if anything, the package manager from your OS should just be able to supply the tool that manages the said language (as you currently can with npm, cargo or go). And that does not even touch the topic of making things work across different platforms. Sure, maybe you found a way to solve this issue in your imaginary Linux distro, but how do you solve the problem for a co-developer who uses Windows, or macOS?
Additionally, you do not have to necessarily enforce these things on the language level, the standard and the tooling could live as two independent projects coming from the same entity. You could still use the compiler and the libraries from your OS, and build the code like that, or you could just reach out to an optional standardized tool that serves as a glue for all the external tools in a standardized way.
Yes, there are a lot of valid concerns with this approach as well, but personally for me, as a frustrated C++ developer, who is most likely going to still use the language for a decade to come, I feel like all the other languages I had mentioned in my previous post had addressed what is my biggest point of frustration with C++, so it's definitely an issue that could be solved. Many tried to do it independently, but due to personal differences, no funding, and different ideas of what should be the scope of such tooling, we ended up with a very fragmented ecosystem of tools, none of which have yet to date been able to fully address an issue that other languages solved.
I'd recommend using upstream apt llvm repos if you are using Debian or debian-derivatives like Ubuntu, to make sure you have the same compiler everywhere.
Specifications for package interchange are absolutely essential, which is distinct from language endorsed package managers.
Python doesn't have a language package manager, you're free to use pip or poetry or uv or whatever, but it does have PEP 517/518, which allow all Python package managers to interact with a common package ecosystem which encompasses polyglot codebases.
C++ is only starting to address this problem with efforts like CPS. We have a plethora of packaging formats, Debian, pkg-config, conan, CMake configs, but they cannot speak fluently to one another so the package ecosystem is fractured, presenting an immense obstacle to any integration effort.
> Python doesn't have a language package manager, you're free to use pip or poetry or uv or whatever, but it does have PEP 517/518, which allow all Python package managers to interact with a common package ecosystem which encompasses polyglot codebases.
This is a long-standing pain point. LWN has a series of reports covering this, one of which is: https://lwn.net/Articles/920832/
But like…why?
Let’s say we make a “thing” which contains packages for all participating languages.
98% of the time, aren't users just going to go "filter down to my language" and just continue what they're doing, except with a somewhat worse overall experience, depending on whatever "lowest common denominator" API + semantics we use for this shared package management solution?
Multi-language build systems already exist, which happily serve the needs to those projects which find themselves needing cross-language (+distributed) builds. Could there be some easier versions of these? Sure, but I don’t feel like “throw everyone in the same big box” is the solution here.
Interesting point, and I'm inclined to agree with your main point. I don't think the OS level is preferable, however:
Point 1: I do not want my program to only run on only one OS, or to require custom code to make it multi-platform.
Point 2: What if there's no OS?
>Point 1: I do not want my program to only run on only one OS, or to require custom code to make it multi-platform.
To run on only one OS at build time? I usually just set up cross-compilers from linux if I am making cross-platform C/C++ code.
>Point 2: What if there's no OS?
You can use a system like bitbake I think.
I think the term "package management system" is a bit too broad to talk about.
If we are talking about global shared dependencies, sure it may belong in the OS.
If we are talking about directly shared code, it may as well belong in the language layer.
If we are talking about combining independent opaque libraries, then it might belong in a different "pseudo os" level like NPM.
> package management belongs to the os
It clearly doesn't except if you're a fan of dll hell and outdated packages.
the solution to DLL hell is to patch the applications to all use the same version of the library.
Oh but of course!
The solution to…a problem created directly by a specific approach is to…do even more work ourselves to try and untangle ourselves? And just cross our fingers and just _hope_ that every app/library is fully amenable to being patched this way?
Alternatively, we could realise that this isn’t really feasible at the scale that the ecosystem operates at now, and that instead of taking an approach that requires us to “do extra work to untangle ourselves” we should try and…not have that problem in the first place.
> The solution to…a problem created directly by a specific approach is to…do even more work ourselves to try and untangle ourselves?
The solution is to be more professional. DLL hell comes from libraries that break compatibility: serious libraries should not break compatibility, or at least not often. Then when they do and you happen to have the issue, it's totally fair to go patch the library you depend on that depends on the breaking lib. Even in proprietary software.
The modern way is to use ZeroVer [1] and language package managers that pull hundreds of dependencies in the blink of an eye. Then asking that people compile everything themselves or use the one system deemed worthy of support (usually Windows and the very latest Ubuntu). And of course not caring about security one bit.
[1]: https://0ver.org/
I don't think it's unreasonable to have a system where every program uses the same version of a library.
>And just cross our fingers and just _hope_ that every app/library is fully amenable to being patched this way?
It requires some foresight in designing the application, and whether or not you even choose to use that application in the first place. We should strive to decrease the complexity of the system as a whole. The fact that packages are using different versions of the same library in the first place is a canary and the system should disincentivize that use case to some extent. Using static libraries or a chroot or a sandbox for everything is sweeping the problems under the carpet.
>taking an approach that requires us to “do extra work to untangle ourselves” we should try and…not have that problem in the first place.
I would prefer a system that allows you to link every application to the same library as a default, but also allows for some per-application override, perhaps by using symlinks. That would cover the majority of use cases. But I do not think that dynamic linking is generally in vain.
In my own projects, I try to rely on static linking as much as possible, so I understand your perspective as a developer. But as a user I do not want programs to have their own dependencies separate from the rest of the system.
> I don't think it's unreasonable to have a system where every program uses the same version of a library.
I really think it is. Even at the scale of a single app it may sometimes make sense to have multiple versions of the same library, if for instance it implements a given algorithm in two different ways and both ways have useful properties.
I have seen this (linking with multiple versions of the same library) for maintaining backwards compatibility, for example to support parsing a data file from a previous version, but never for selecting different algorithms.
Then shouldn't these APIs be exposed as different libraries?
Could a more streamlined “conception” of something like Gentoo fix this?
Applications ship their lock files + version constraints. Gets merged into a user/os level set of packages. You update one package, OS can figure out what it has to rebuild and goes off and does that.
Still shit-out-of-luck for anything proprietary, and it’s still super possible for users to end up looking at compile failures, but technically fits the bill?
This sets up an untenable N*M explosion:
Will the GhostBSD maintainers pin the right version of Haskell's aeson package?
Will the Fedora Asahi devs stay on top of the latest Ocaml TLS developments?
Will MS package PureScript's code for DOM manipulation?
Agreed. At least, languages should not require their own package management system to be used. There should be a way to invoke the compiler or interpreter without involving that language's own package management system, so that something else (like Bazel) can build on top. Fortunately, most common languages are all like that. You can invoke rustc without cargo. You can use python without pip. You can use javac without maven.
Disagree completely. OS package managers are one of the biggest sources of problems.
Basically, once you have an OS level package manager, you have issues of versioning and ABI. You have people writing to the lowest common denominator - see for example being limited to the compiler and libraries available on an old Red Hat version. This need to maintain ABI compatibility has been one of the hugest issues with evolving C++.
The OS package manager ends up being a Procrustean bed forcing everything into its mold whether or not it actually fits.
Also, this doesn't even get into the issue of multiple operating systems, and even distros, which have different package managers.
Rust and Go having their own package managers has helped greatly with real world usage and evolution.
This is a weird opinion, but I think that the OS package manager's complexity is largely owing to the unix directory structure, which just dumps all binaries in /bin, all configuration files in /etc, and all libraries in /lib. It comes from a time when everything on the OS was developed by the same group of people.
By dumping all the same file types in massive top-level directories, you need a separate program (the package manager) to keep track of which files belong to which packages and dealing with their versions and ABI and stuff. Each package represents code developed by a specific group with a certain model of the system's interoperability.
GoboLinux has an interesting play on the problem by changing the directory structure so that the filesystem does most of the heavy lifting.
Honestly I don’t know why more languages don’t just adopt e.g. npm, maven or NuGet. They’re largely language independent at the binary level anyway.
npm, maven, and NuGet have caused me far more problems in trying to reproduce builds than the OS package manager ever will.
The author doesn’t appear to consider the use of binary-only (typically commercial licensed) libraries. There’s still a need for an ABI and “modern tooling” isn’t a magic wand.
I'd guess that the majority of such binary-only libraries use C ABI interfaces. The entire Windows ecosystem of COM modules works on top of the C ABI.
Until the moment when you are forced to use a third-party SDK with std:: and boost:: (yeah, WTF?) types in the interface.
Oh, and you can't avoid that, say, you are working on a trading bot and that's the only "supported" way to connect to an exchange.
In the end people usually just reverse engineer and reimplement to get rid of such cursed blob. Fortunately, it works - the vendor can't effectively push all clients to update their SDK too, so all wire protocols are infinitely backward compatible.
Any mirrors/archives? DNS not resolving for me
EDIT: found one on wayback: https://web.archive.org/web/20241124225457/https://herecomes...
Weird, it works for me.
I think he has this about right. The project I contribute to (and no, I'm not a massive contributor) is LibreOffice and it is a C++ codebase. It has a decent build system that is easy for anyone to run out of the box. It uses modern C++17+ code, and though it has a lot of legacy code, it is being constantly modified by people like Noel Grandin via clang plugins (along with a lot of manual effort).
This code was originally developed in the late 1980s.
A good packaging tool would have helped a lot.
I'm stoked to hear they're on C++17 now.
When I contributed to LibreOffice (GSoC 2012) they were still on C++03 !
Well, can't really blame them in 2012, especially since C++11 did bring an ABI break. Looks like they keep it fresh, although C++17 is getting a bit dated. Migration from C++17 to 20 or even 23 is probably a breeze, though, compared to migrating 03 to 11.
[flagged]
Every study on security vulnerabilities has shown that "just don't screw up bro" doesn't scale.
Even if we ignore the absolute clown move of having no bounds checks by default (and std::span doesn't have them at all), it's very easy to get into trouble with anything involving C++ iterators and references.
The idea that RAII covers "99% of your ass" is "the low-IQ level statement".
Temporal safety is the primary hard problem from a memory safety standpoint, and RAII does nothing to solve it at least the moment a memory allocation crosses abstraction boundaries.
What's an example? I'm just a hobbyist when it comes to c++
No, if you have temporal safety issues you didn't understand RAII. That is pretty much the whole point of RAII.
If you want anyone to believe you, you're going to have to give more than just a blank assertion. Can you give at least a sketch of your reason for your claim?
Reasoning is, if your objects outlive the scope of your class, then they most likely belong to a class that's higher in the hierarchy (they already do, de facto).
Please explain how you would solve the iterator invalidation problem using only C++ and RAII. Thanks.
This whole thread is about temporal safety so you're out of luck.
RAII only helps with 1 of 4 primary cases of safety. RAII deals (badly) with temporal safety, but not spatial safety (bounds errors etc.), safe initialization (use before initialization), or undefined behavior (overflow/underflow, aliasing, etc.).
Use-after-free (or reference/iterator invalidation in general) is the main issue. RAII doesn't help there at all. RAII helps with deterministically cleaning up resources, which is important, but barely related to safety.
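A classic minimal example of the kind of invalidation RAII doesn't catch (everything here is RAII-managed, yet the program still has UB):
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    int& first = v[0];     // reference into the vector's current buffer
    v.push_back(4);        // may reallocate; the buffer the reference points into is freed
    std::cout << first;    // use-after-free: undefined behavior, no compiler error
}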
How does RAII not help with safe initialization? It's right in the name.
>RAII deals (badly) with temporal safety
>safe initialization (use before initialization)
These two are solved by proper use of RAII.
But you have a point with UB. That's always been an issue, though, it's part of the idiosyncrasies of C/C++; all languages have their equivalent of UB.
As a relative newcomer to C++, I have found RAII to be fine for writing in object-oriented style. But I would only subject myself to the suffering and complexity of C++ if I really wanted excellent performance, and while RAII does not generally have a runtime cost by itself, engineering for full performance tends to exclude the things that RAII makes easy. If you manage memory via arenas, you want to make your types trivially destructible. If you don't use exceptions, then RAII is not necessary to ensure cleanup. In addition, use of destructors tends towards template APIs that generate lots of duplicate code, when an API that used a C-style function pointer for generic disposal would have produced much smaller code.