Super cool. Also, I love reading high-quality Linux patches. I think many, myself previously included, are afraid to even read the kernel source, assuming it must be hopelessly complex. Of course some parts really are, but the code is honestly of very high quality. I also highly value that feeling of realizing something once thought 'arcane' was actually only made by other humans, and that it is legal to go read it and learn from it.
Clever! I know some will say it's like closing the barn door after the horse left, but having this in place to mitigate future vulnerabilities will be handy.
OK, but what kind of nefarious use could it enable if it's accessible to a malicious actor?
I may be wrong, but on a correctly-configured system, one would have to have root access to act nefariously. Since this is intended to prevent exploitation of vulnerabilities that enable privilege escalation, it feels like a net win.
I guess it could disable the killswitch.
Besides that?
Could something like this also be done via BPF?
That’s how this[0] project mitigates e.g. CopyFail.
BPF LSM if you want to return -EPERM.
Or a kprobe that kills the process via bpf_send_signal() if BPF LSM isn’t enabled.
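The two approaches above can be sketched together in one libbpf-style BPF object. This is a minimal illustration, not the project's actual mitigation: the hook choice (`file_open`), the program names, and the placeholder `some_vulnerable_func` are all assumptions for the sake of the example.

```c
// Sketch only. Assumes CONFIG_BPF_LSM=y and "bpf" listed in the
// lsm= kernel command line for the LSM program; the kprobe fallback
// works without BPF LSM. some_vulnerable_func is a placeholder.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define EPERM 1 /* defined locally to avoid mixing errno.h with vmlinux.h */

char LICENSE[] SEC("license") = "GPL";

/* An LSM hook can veto the operation by returning a negative errno.
 * A real mitigation would match only the vulnerable code path here. */
SEC("lsm/file_open")
int BPF_PROG(killswitch_file_open, struct file *file)
{
    return -EPERM;
}

/* Fallback when BPF LSM isn't enabled: a tracing kprobe can't change
 * the return value, but it can kill the offending task. */
SEC("kprobe/some_vulnerable_func")
int kill_on_entry(struct pt_regs *ctx)
{
    bpf_send_signal(9 /* SIGKILL */);
    return 0;
}
```

The asymmetry is the point the parent comments make: only LSM hooks are allowed to return an error to the caller, which is why the kprobe variant has to resort to signaling the process instead.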
Is there any library that does this safely in user mode and is currently used in production?
Better tooling for kpatch would be nice, though.
IIRC Canonical makes patches for official Ubuntu kernels, but acts like a Chinese restaurant (closed kitchen, orders come in through a small hatch behind the counter).
If I'm a malicious actor that gets root, can I killswitch the killswitch?
Once you’ve got root, you don’t need to exploit compromised code to do whatever you want.
LSMs say otherwise
you're on the other side of the secure door already
killswitch is to prevent you from gaining root
Or malloc(), or open()... They are sort of discussing in the thread how to prevent this from a malicious actor (or from footgunning yourself), but my understanding is that it's not all that plain and simple...
This sounds simple, but not running a function doesn't on its own guarantee safe behavior if the calling code wasn't written with this novel potential refusal as an outcome in mind.
Still, I believe this is the right direction.
What about inlined functions?
As addressed in the article ("Choosing the right target"):
> Pick the *highest-level* entry point that contains the bug
> Assisted-by: Claude:claude-opus-4-7
The author has an @kernel.org address, and has been a regular Linux contributor for over a decade. I'll give him the benefit of the doubt. It's easy to write high quality AI-assisted code (harder than writing vibe-coded slop, but still easier than writing every line of code yourself).
Edit: This also reminds me that I've begun to judge projects by whether the developer has public code from before AI. I'm more likely to trust their new code. Which causes me concern about the pipeline for new developers... how am I supposed to know a new user on GitHub has enough understanding of how software works to be shipping software? It used to be that if a new project by a new dev had good docs, some kind of test coverage, and a coherent git history, I could infer some level of quality. That's not true now, so I'll probably move on and look for something from someone with a pre-AI track record.
[0] https://github.com/cozystack/copy-fail-blocker#how-it-works
As an AI critic myself: so what?