I followed the link to the Pixel 9 bug/exploit and saw this:
"Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user"
Haven't we learned our lesson on this? Don't read and act on my sms messages without me asking you to!
> Haven't we learned our lesson on this?
What is the purported lesson we should have learned? Users choose phones with rich messaging features. This was a major selling point first for iPhone, with iMessage, and later for Android, until iOS caught up with RCS.
One of the things Apple's Lockdown mode does is disable previews of images or links that are sent to you.
It seems like the lesson is that you shouldn't be processing data sent to the device by random strangers without the user explicitly choosing to open the file or follow the link.
> What is the purported lesson we should have learned?
Not to automatically execute things within data that we have been sent.
I think it's "don't use parsers written in unsafe languages".
> Don't read and act on my sms messages without me asking you to!
Somewhere there's an NSA agent reading this and laughing like a gin addict on payday.
I don't know if that is the right lesson. It's kind of like "don't click on links"... Err, no. You should be able to click any link without getting hacked.
Sure, in an ideal world different from this one. You should be able to do anything on any device and never worry about security.
Unfortunately, since we don't live in that world, we need to not open links, emails, text messages, etc, if they are sketchy.
A better solution may someday exist, but as of yet has not been found.
How are they going to make trillions of dollars if not!?
"move fast and break things"
"But the users never know what they want to do! We have to shove suggestions and recommendations at them at every! waking! moment!"
"This is notably fast given that this is the first time that an Android driver bug I reported was patched within 90 days of the vendor first learning about the vulnerability."
This makes me feel better about Google, but also makes me kind of frightened of the rest of Android. I wonder what Apple's response time is?
Android vendors have been notorious about updates for a long time. Part of that is supposedly because all of the phone companies want to distinguish themselves from each other, and so they all want to fork the default Android UI so they can offer some psychedelic UI vision with some brand-specific features. But that means that when an update to stock Android comes out, it's a lot of work to migrate.
I don't think Android UI customization is the main issue. Many vendors are not even able to keep device firmware and Linux kernels in sync. Qualcomm and others are doing monthly bulletins:
https://docs.qualcomm.com/securitybulletin/may-2026-bulletin...
Since a lot of vendors are months or even years behind, their phones are full of known holes.
When it comes to security, basically: GrapheneOS > iOS > PixelOS >> Samsung OneUI >>>>>>>> everybody else.
Sadly, Samsung lets anyone who pays enough push bloatware and analytics on their phones. E.g. AppCloud from an Israeli company, Meta services that stay even when you remove Meta apps (only removable with ADB/UAD), etc. So there are only three somewhat serious options (and for two of them, you still give a lot of analytics to Apple or Google).
I've reported security bugs to Apple before. It was a couple of years back, but I remember it taking around 6 months to patch (there were a couple of back-and-forths for me to get a more reliable POC). Maybe 2 months from when I submitted a POC with 100% reproducibility.
At least in the past there have been instances where Apple sat on security bugs for years until they were fixed, one example: https://jonbottarini.com/2021/12/09/dont-reply-a-clever-phis...
I've heard they cleaned up their program recently to respond much quicker nowadays.
Not sure how much it helps, but I just run all my Apple devices in "Lockdown mode", don't install apps (use Safari), and try to mostly use Safari in private sandboxed mode.
This makes sense if you’re a human-rights journalist working in a dangerous country, with the threat of state-level actors looking to compromise you.
If you’re not then this seems quite paranoid, bordering on LARPing.
LARPing is imagining that Lockdown mode protects you from state-level actors. It is frankly baffling why an industry that has been laughing for literal decades at even the possibility of stopping state-level actors just turns around and uncritically believes Apple's marketing team, with literally zero support, evidence, or proof except for a long track record of failure. You would think that extraordinary claims would demand extraordinary evidence.
We have seen multiple software hacks resulting in >10 million dollar payouts. Apple's bug bounty program only pays out 4 million dollars (2x the 2 million dollar non-Lockdown payout) for a zero-click total compromise that can trivially worm to take down hundreds of millions of iPhones simultaneously. Even at the low end of that cyberattack payout range that is still a >2x ROI if your successful cyberattack depends on an iPhone zero-click, with many publicly known attacks being in the 10x ROI range. Lockdown mode, at best, raises the bar slightly for commercial profit-motivated attackers and reduces their profit margin from wildly profitable to slightly less, but still, wildly profitable.
And of course I am using the Apple bug bounty program as merely an available metric with at least some semblance of objective support. There are zero certifications, audits, or analyses that Apple has even attempted that would confirm any claim of protection against state-level actors.
I strongly disagree that there is no evidence that Lockdown mode is effective; there have been numerous exposed, active iOS exploitation campaigns, none of which have worked against Lockdown mode. When we're trying to prove a negative, that's actually some of the strongest evidence we can get.
The economics of the device exploitation industry are completely orthogonal to bug bounty payouts; the markets only overlap at the _extreme_ fringes. Trying to use one as a proxy for the other is meaningless.
> We have seen multiple software hacks resulting in >10 million dollar payouts
This sets a nice price bar for exploitation. Is someone willing to pay 10+ million dollars to get access to your phone?
The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode.
> click total compromise that can trivially worm to take down hundreds of millions of iPhones simultaneously
Where is the profit motive in doing this? Possibility is one thing, but a realistic threat is another.
> Is someone willing to pay 10+ million dollars to get access to your phone?
Usually not yours specifically, but there is a lot of money in a general tool that law enforcement can use to read out phones. Of course, most of them focus on physical access. In the few Cellebrite reports/presentations that have leaked, iPhones would fall after a relatively short time (IIRC a few months), but did better than most Android phones (except GrapheneOS).
Also, sometimes you do not need the 10M exploit; you can buy many cheaper exploits and make a chain yourself.
> The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode
If they hit you with a metal pipe, it's likely that you won't survive even if you give up your passcode. So most likely you are protecting something or someone else. Set up a duress PIN so that you have options in that case.
... really? Zero-click RCEs can be used on arbitrarily many phones until they are discovered which usually takes on the order of months. You do not need to burn them on every individual target.
As an example of how they might be used in that fashion for profit, NSO Group had a revenue of 240 million dollars in 2020. Many of their customers were governments who wanted to spy on activists and journalists. NSO Group was in the business of economies of scale: democratizing access to journalists' devices by reusing a small stockpile of exploits across many targets, with enough revenue to assure a steady stream of new exploits as fast as they were burned.
You’re right, I misstated. It’s not 10 million per exploitation; instead it limits the pool of people who can exploit you to those willing and able to spend 10 million+ on an exploit.
That is still quite a small pool, and there are other network effects preventing any Joe Bloggs with that much capital from launching an exploitation campaign.
Again, no. You do not need to spend 10 million on an exploit if you are working with a company like NSO Group, which sells white-glove access to targeted individuals as a service. The cost lower bound is going to be on the order of ((cost of exploit) / (number of times exploit can be used)), and the denominator there is going to easily be in the hundreds to thousands. Of course prices are likely to be higher than the minimum due to profit margins.
To, once again, use the same example of NSO Group as it is infamous and well-documented [1]. In 2016 it was $500,000 upfront and $650,000/year for 10 devices. That article claims Saudi Arabia was monitoring 15,000 phones at an average cost of $10,000/phone. In [2] it was $7 million for 15 devices, but the upfront versus marginal cost per device is not broken down. And this was a relatively "above-board" company in the sense that they were a legitimate business entity with government deals, which commands a premium relative to a random unknown blackhat organization with no reputation.
And again, my original comment was discussing commercial profit-motivated attackers, for which $1 million is easily within reach and just a cost of doing business to unlock greater amounts of profit. That is less than the cost of setting up a McDonald's. There is a vast, vast gap spanning factors of millions between Joe Schmo and commercial actors, and an even vaster gap to state actors. There is no evidence that Lockdown mode is adequate against even commercial actors, let alone the vastly more capable state actors.
[1] https://prodefence.io/news/pegasus-spyware-operating-costs-c...
[2] https://www.reuters.com/business/media-telecom/meta-suit-aga...
"If you’re not then this seems quite paranoid, bordering on LARPing."
There are sooooooo many other situations where such device lockdown is warranted. Government intrusion, sensitive industry, journalism, anything ITAR/EAR covered, and more. Your reduction to a single issue is absurd.
I thought it was common knowledge that all kinds of Americans (not to mention other nations) are routinely compromised with zero-clicks, mostly developed in the US and Israel.
This is the kind of assertion without evidence that just muddies the waters. “All kinds” of people is so vague as to be an almost entirely vacuous category and “routine” means almost nothing without an actual quantification of how prevalent and frequent the problem is.
It’s undeniable that the proverbial guns for hire make it easy (if not cheap) to target basically anyone — but just because the vibes are bad doesn’t mean we can just say “it’s common knowledge that …”
The fact is mitigations are costly in terms of convenience and ease of use. Helping people make informed choices about whether to enable mitigations and bear that cost requires more than platitudes imo
Are you at an above average risk of being targeted by a state level threat actor?
No, I just keep the usual tax/financial/health data on my devices.
I consider Anthropic's Mythros security bug finder mostly marketing, but other things make me worry that there might be a global hack contagion: for example, a few months ago I saw in the news that an executive at a US security company was caught selling information to a hacking group.
Except for disabled Javascript compilation possibly slowing down web sites, not getting some attachments in messages, and some graphics not showing up on some web sites, having Lockdown mode set doesn't seem to affect anything I do. For dev I use VPSs with SSH configured to ensure agent forwarding is strictly disabled, as are reverse tunnels.
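On the VPS side that's roughly the following (a sketch, assuming OpenSSH; the options are from sshd_config(5)):

```
# /etc/ssh/sshd_config on the VPS: refuse agent forwarding and
# reverse (-R) tunnels even if a client asks for them.
AllowAgentForwarding no
# "local" still permits -L forwards but blocks -R reverse tunnels;
# use "no" to block forwarding entirely.
AllowTcpForwarding local
GatewayPorts no
```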
It seems like doing little things like this makes sense because it is such a tiny hassle to be a little safer.
Given that 42% of Android devices are unpatched as of now [1], it's an interesting decision on their part to release their research and make them all vulnerable.
[1] https://gs.statcounter.com/android-version-market-share [2] https://www.cybersecurity-insiders.com/survey-reveals-over-1...
That's perennially the case. A big portion of the world buys bargain-basement Android devices that are unsupported right out of the box.
Search "android phone" on AliExpress and there are top-selling phones on the first page running Android 8, Android 10, etc. They're not getting security updates of any sort, let alone driver updates.
On brand-name Android devices you can count on getting OS security updates. The first-party vendor can build and push these themselves. Driver and firmware security updates are a maybe. These often have to come from an upstream vendor, who may or may not care to fix the issues.
Smaller brands often ship budget Android devices and never update them.
Semi-related: has the rate of published exploits picked up as of late, or is it simply the fact that there’s hype around AI as a security tool (offense or defense), so it’s simply in the news more often?
Feels like there’s something new every other day - Linux, Windows, mobile, various commonplace tools used by everybody, the list goes on.
I just did some analysis on this last weekend: in 2024 there were roughly 100 CVEs published every day. In April we hit approximately 200 per day.
Going backwards from 2023, the doubling interval for published CVEs was approximately 4 to 4.5 years. Since then it’s approximately two years.
There has definitely been a rapid uptick.
Published CVEs seem like a bad metric to use for this, unless we assume that the ratio of really nasty vulns to not-too-bad vulns is consistent.
Also, the question remains whether more CVE-laden code was produced in the first place, or whether automated detection simply improved.
It's easier to find a needle in the haystack if the haystack is 50% needles.
have the AI vibe code crappy apps so the related AI vuln finder can fix them
just doubled the value and use cases of your AI solution!
Another reason published CVEs aren't a great metric is that one of the largest contributors to the increase over the past couple of years has been the Linux kernel, which now files almost all bugs as CVEs; that wasn't the case before.
I wouldn't look at the numbers. There used to be a lot of "scam" CVEs before LLMs that weren't actual vulns. Nowadays it's more popular to collect CVEs, and there are a lot of people scanning with LLMs and reporting without checking (as in the case of cURL). These CVEs are often not verified by anyone.
There probably are more vulnerabilities being found, but the number of CVEs is not a good metric.
Did you publish this anywhere? Would love to read more.
The rules around CVE reporting changed recently, so it would be expected that a lot more are accepted.
If one reads between the lines in part 1, the code in question was introduced due to AI features and the exploit was found by humans:
https://projectzero.google/2026/01/pixel-0-click-part-1.html
So AI usage increases bugs and humans have to weed them out!
There are reports from people who manage security bugs in OSS that there has been a big uptick in reports: initially low quality ones that were mostly bogus, but now many more legitimate ones as well.
This is pure guesswork (I am not a security researcher), but my guess would be that AI is increasing the amount of low-quality exploitable attack surface available, while simultaneously providing security researchers with an accelerant for their work. Which is to say, it's great if you use it well and really bad if you use it poorly.
Not low quality if it works!
But it is low quality if it's vulnerable to exploits. And if that's the case, I wouldn't say it really "works".
The low quality refers to the features with security holes. So no, it didn't work (in this hypothetical).
only until it's ransomware'd
https://lwn.net/Articles/1065620/
I've reported a few very serious issues to vendors of widely used tools in recent weeks, and it's been even more difficult than usual to get them acknowledged - the teams that respond are reportedly swamped.
There definitely is hype around AI as a security tool right now. Someone else pointed out that the rate of CVEs has gone up, but that doesn't tell us why.
This article doesn't mention AI helping find this bug. Seems like humans can still do that on their own.
A bit of both (it finds new things, and the news gets hyped/blown up), and a third factor is that more people are trying to find things. The authors might have been able to do this already, because you still need a decent understanding to get useful work out of it and to verify the results, but the shiny-new-toy and FOMO factors make people spend hours on this that they'd otherwise have spent doing something else.
I've seen quite a few people saying they were inspired by the previous report, which is presented as "the model pointed us to it", and you get FOMO about missing out if you don't snatch bugs now as well.
I think AI helped researchers navigate the codebase better; it's not necessarily that the AI succeeded at exploitation itself.
> This is rendered even easier by the fact that the kernel is always at the same physical address on Pixel
OpenBSD fixed this back in 2017.
Do we have any evidence on how AI has affected NSO et al.'s businesses? Does it render them obsolete? Or are they now superpowered?
Hmmm... I'd like someone to double check my thinking here. I posted this exact prompt to gpt 5.5 xhigh:
```
does this look right to you? don't do any searches or check memory, just think through first principles
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
        unsigned long pfn;
        struct vpu_core *core = container_of(fp->f_inode->i_cdev, struct vpu_core, cdev);

        vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
        /* This is a CSRs mapping, use pgprot_device */
        vm->vm_page_prot = pgprot_device(vm->vm_page_prot);
        pfn = core->paddr >> PAGE_SHIFT;
        return remap_pfn_range(vm, vm->vm_start, pfn, vm->vm_end - vm->vm_start,
                               vm->vm_page_prot) ? -EAGAIN : 0;
}
```
And it correctly identified the issue at hand, without web searches. I'd love to try something more comprehensive, e.g. shoving whole chunks of the codebase into the prompt instead of just the specific function, but it seems the latent ability to catch security exploits is there.
So then.... I wonder how this got out in the first place. I know I'm using a toy example but would love to learn more!
That's not really a fair test, because you're leading the model pretty hard even if the prompt doesn't specifically say there's a bug to be found. These are basically the same objections that people raised in the thread where someone claimed current models are just as good as Mythros.
right exactly, but clearly it's possible to elicit the behavior we want in the model, which means the capabilities are there!
The more interesting question is, how many issues will this prompt report to you in random code that is perfectly fine?
As an anecdote, I provided fragnesia.c and the subsequently proposed patch to fix the issue, and while it was not able to discover an entirely new vulnerability, I think it was able to find 2 new ways of exploiting the same underlying bug.
This is quite impressive considering I’m just a dumbass with a Claude subscription.
How do you know it didn't search the web?
no tool calls!
I pasted the code into Claude Opus 4.7 with no internet access and asked it to just tell me what the function did, and it explained it and also called out the bug. I did not tell it to look for bugs:
> Observations & Potential Issues
A few things worth flagging:
1. No bounds checking on the mapping size. Userspace controls vm_end - vm_start and vm->vm_pgoff. Here vm_pgoff is ignored entirely and the size is trusted blindly. If the VPU's register block is, say, 64KB but userspace requests a 1MB mapping, the driver will happily map 1MB of physical address space starting at core->paddr — potentially exposing whatever hardware happens to live at adjacent physical addresses. A defensive check would be:
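A minimal sketch of such a check (core->csr_size is a hypothetical field standing in for however the driver records its real register block length):

```
/* Sketch only: csr_size is a hypothetical field holding the real
 * length of the VPU register block. */
size_t size = vm->vm_end - vm->vm_start;

if (vm->vm_pgoff != 0 || size > core->csr_size)
        return -EINVAL;
```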
---
70-day release cycles are very quickly not going to be fast enough to stop widespread use of exploits when you have bots able to scan every PR on every open source project as it comes out.
It's the usual problem of having no consequences for the person who wrote catastrophic code like this and the company who released it. If the person who wrote this were to be imprisoned for the rest of their life, for instance, or if the company were to be fined $1 million per user put at risk (which would probably mean a $1-10 trillion fine for Google, enough to trigger bankruptcy), then things would be very different.
If this rule were implemented, would you be walking free right now? Think it over.
Pretty sure the million dollars was not meant seriously. There are plenty of regulated fields in which people still participate, despite various risks of liability. Professional engineers, doctors, every Uber driver in the US, who could potentially be punished for negligent driving while on the job. The point, I think, is that the current level of responsibility for writing bad code is essentially zero, but should probably be higher for some applications.
> the current level of responsibility for writing bad code is essentially zero, but should probably be higher for some applications
I agree that e.g. working on an OS should require guild-type credentials. But I don't know if most SWEs understand the professional-standards requirements such organisations are empowered to enforce on their members.
I most definitely don't agree with him, and I find the idea absolutely repugnant. Devil's advocate though: I would be much more careful with the code that I write if regulators passed some dogshit law like that.
I would stop writing code for money.
I understand that, though I wouldn't stop. I'd just go much slower and radically change my methodology. Failures in other engineering domains come with massive legal consequences, and they have for a very long time. In Mesopotamia, if a house collapsed and killed someone inside, the builder was put to death. People still built houses in the hundreds of thousands.
It really just introduces a legal burden to prove competence and work in good faith, and it gives you immense power to throw out ridiculous deadlines. Your managers are legally responsible too, and if they push beyond what's reasonable you have just cause to bring them to court in a way that you currently don't. To re-emphasize, I don't think this is a better world, but it's not unlivable.
Sure, but home builders today very rarely get put to death, and it takes a particular kind of intentional fuck-up to have a plumber, or a drywaller, or electrician placed in prison.
If I were personally liable for damages, and there was an insurance program of some sort - similar to how doctors & dentists practice - sure, I'd probably still write code, very carefully. But if there was a decent chance of me spending the rest of my life in prison because of something I wrote on a Friday at 4pm under some amount of stress? No thanks. I can re-train as a plumber and stand knee-deep in shit all day.
Well, one scenario would be that everybody who writes code would do so for money.
Take my friend who is a property lawyer. The firm she works for buys her insurance, because it would be insane to operate without insurance, but the only available insurance is personal insurance, it insures a specific person to do property law. So, although her day job is helping that $100Bn farm equipment company buy a $10M new factory from a $100Bn construction firm, at the weekend she is covered by that same insurance when she represents her friend buying a $500k cottage. AIUI this is a completely normal arrangement.
If that was the situation for programming, the company is going to buy your $100M exploit insurance because they need a programmer, but it's personal insurance so you could work on your Game jam game using the same insurance, and it'd be crazy to just "Go commando" if you don't have employment and thus insurance, in case somehow your "Galaga but also Blue Prince and somehow a visual novel" Game jam entry causes a $10M damages payment.
Or it becomes standardized to have exclusions - pilots for example often have extensive insurance that covers the company when they’re flying for hire, but covers nothing if puttering around in a Cessna on the weekend.
Insurance companies are very, very good at figuring out how to identify and price risk, once motivated to do so.
Sure, as you'd expect, lawyers are better at cutting a good deal for themselves than other professions, but I wanted to cite an example where it does work out.
Also from what I've seen there are way too many GA accidents involving airline pilots for the insurers to eat that loss. They almost invariably have superior skills, but some of them more than compensate with risk taking.
It's not about lawyers cutting a good deal for themselves. Liability issues get complicated along multiple dimensions when they involve licensed professionals, even when their day-to-day working relationship is indistinguishable from any other employee's. And lawyers, even more than doctors, are at the furthest extreme of this complication spectrum. Even were software engineering to become a mandatorily licensed profession like some other engineering disciplines, there's little reason to believe insurance products would mirror those in the legal profession. I seriously doubt we'd end up in a place where employers commonly, let alone routinely, pay to cover liability for work outside the scope of employment.
It's because it's simpler to insure "everything real estate" or whatever than to try to cut out exclusions for (relatively) cheap properties.
But if they noticed that they were paying out more than expected on these $500k deals, the insurance would change quite quickly.
The same thing happened with GA insurance - there was an assumption that airline pilots would be safer but it didn't really turn out as expected, because a 747 has a heck of a lot more "keep you safe" doohickeys and doesn't fly low to the ground much.
We should roll this out for everything.
Someone T-bones you in parking lot, chef causes food poisoning, plumber's leak floods your bathroom, personal trainer pushes to injury, mislabeled allergen on food, movers break your armoire, roofer leaves a leak -- I bet we'd see a lot less of all that if a $1MM fine + life in jail loomed over everyone.
Nobody would want to do business, but boy would we be in a golden age.
A golden age of prisons and debt collectors.
> If the person who wrote this were to be imprisoned for the rest of their life [...] then things would be very different
Yes, they certainly would. You wouldn't have smartphones, for instance.
I can't tell if this is satirical or not. But there are so many takes like this recently (hold the website liable for user content, hold the corporate developer liable for zero days in a project they happened to touch) that would all result in the same outcome (no more product at all) that I can't help but wonder if there's some luddite psy-op trying desperately to bring us back to a pre-Internet era in any way they can...
Yes...no one would write any code.
Code would be written over TOR and passed around on unmarked USB sticks in back alleys.
Pssst - hey, kid, want some GNU?
This is a great bug report! I am not a kernel expert by any means even though I have read some about it... 10+ years ago. And I was able to follow along and see what was going on.
It does make me scared for what other dangers lurk since this was a really bad one and it was so little work to find.
Also of note: so many security issues lately have been found using AI. This report makes me think two things:
1. Expertise is still immensely valuable, the more niche, the more valuable.
2. There are lots of niches still where AI doesn't dominate...
There have been some V4L2 enhancements to support hardware video decoding pending a merge for a long time; they do seem to be in the mainline kernel now. I guess people didn't want to wait that long.
Project Zero has to report bugs to Android through the front door, and deal with Android VRP severity classification? I always assumed they could just walk over to the Android office and advocate for their bugs, face to face.
If they felt it was too painful to do it the "normal" way, then that would probably be the next thing for Project Zero to try to get fixed.
hm. surprised there aren't idioms like copy_(to|from)_user for these kinds of kernel-to-userspace mappings for custom device nodes that ensure bounds are supplied...
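Something in that spirit might look like this (entirely hypothetical; the name and signature are invented, but it is composed only of existing kernel calls):

```
/* Hypothetical wrapper in the spirit of copy_to_user(): a
 * remap_pfn_range() that refuses to map beyond a stated resource
 * size, so a driver can't forget the bounds check. Not a real API. */
static inline int remap_device_range_checked(struct vm_area_struct *vma,
                                             phys_addr_t base,
                                             size_t res_size)
{
        size_t size = vma->vm_end - vma->vm_start;

        if (vma->vm_pgoff != 0 || size > res_size)
                return -EINVAL;

        return remap_pfn_range(vma, vma->vm_start, base >> PAGE_SHIFT,
                               size, vma->vm_page_prot);
}
```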
fascinating how GrapheneOS achieves a high security level on the same hardware where Google failed to even randomize Android's kernel location
Randomizing the kernel location is of marginal utility at best. There are so many info leaks that KASLR ends up being only a small speed bump on the way to exploitation.
On selling ads? Or what do you mean by the focus they used to have and lost? I'm not at all negative about the paid features they've been offering over time, from Workspace to YouTube to hardware. I'm still very conflicted about giving Google of all places my custom, but for e.g. phones it's hard to avoid, and second-hand the prices are really quite competitive for a tangible hardware product (not a software subscription that you're stuck on). Not bad to shift focus to making these Pixel devices imo, so long as they remain open, that is.
KASLR isn't an effective mitigation against anything, and to me this is part of GrapheneOS's catalog of superficial but meaningless claims.
I've not seen someone refer to a portion of GrapheneOS's mitigations as superficial and meaningless before. What might an OS with significant improvements to usable attack surface reduction and exploit mitigations look like to you? What sort of things (given a team of less than a dozen contending with OS updates, upgrades and device support) would you have liked to see implemented?
And that is against a device whose BSP is actually open source and available for research!
Now imagine the dark horrors hiding in the BSPs of other Android devices... or embedded devices in general.
Frankly, it should be a requirement of Google's certification process that everything regarding drivers gets upstreamed into the Linux kernel. Yes, even if this adds quite a time delay to the usual hardware development process.
I read about the Pixel 9 Dolby decoder bug, and it is based on an integer overflow. It was a mistake to allow the "+" operator to overflow, and this must be fixed in new languages like Rust, but it is not.
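To make the failure mode concrete, the classic shape of such a bug (an illustrative sketch, not the actual Dolby code):

```
#include <stdint.h>
#include <string.h>

/* Illustrative only: a 32-bit size sum wraps to a small value, the
 * bounds check passes on the wrapped value, and the copies then run
 * past the end of dst. */
int copy_frame(uint8_t *dst, uint32_t dst_len, const uint8_t *src,
               uint32_t hdr_len, uint32_t payload_len)
{
        uint32_t total = hdr_len + payload_len; /* can wrap */

        if (total > dst_len)    /* wrapped total sails through */
                return -1;

        memcpy(dst, src, hdr_len);
        memcpy(dst + hdr_len, src + hdr_len, payload_len); /* OOB write */
        return 0;
}
```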
In Rust, the decision about whether to pay for overflow checks or just wrap (because all modern hardware will just wrap if you don't check, and that's cheaper) is a choice you can make when compiling software. By default you get checks except in release builds, but you can choose checks everywhere, even in release builds, or no checks even in debug.
By definition in Rust it's incorrect to overflow the non-overflowing integer types, so if you intend, say, wrapping, you should use the explicit wrapping operations such as wrapping_add or the Wrapping<T> types, in which the default operators do wrap - but if you turn off checks then it's still safe to be wrong, just as if you'd called the wrapping operations by hand instead of the non-wrapping operations.
That Dolby overflow code looks awkward enough that I can't imagine writing it in Rust even if checking was off - but I wasn't there. However, the reason it's on Project Zero is that it resulted in a bounds miss, and Rust would have prevented that anyway.
> is a choice you can make when compiling software
That is not a solution, because it means the code can behave differently and expose a vulnerability if the wrong compilation settings are chosen.
Functions like "wrapping_add" have such long names that nobody wants to use them, and they make the code ugly. Instead, "+" should be used for addition with exceptions, and something like "wrap+" or "<+>" or "[+]" used for wrapping addition.
That's how people work: they will choose the laziest path (the simplest function name), and this is why you should use "+" for the safer, non-wrapping addition and make the symbol for wrapping addition long and unattractive. Make writing unsafe code harder. This is just basic psychology.
C has the same problem: it has functions checking for overflow, but they also have long and ugly names that discourage their use.
> modern hardware will just wrap if you don't check and that's cheaper
So you suggest that because x86 is a poorly designed architecture, we should adapt programming languages to its poor design? x86 will be gone sooner or later anyway.
Also, there are languages like JS, Python, and Swift which chose the right path; it is only C and Rust developers who seem to be backwards.
__builtin_add_overflow exists and is basically free on most CPUs out there.
> __builtin_add_overflow exists and is basically free on most CPUs out there.
This is a very C-flavoured "solution". For those who haven't seen it this involves a pointer (!) and we're going to compute the addition, write the result to the pointed-at integer and then if that didn't fit and so it overflowed we'll return true otherwise false.
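Concretely, a minimal sketch of the usage (this is a GCC/Clang builtin):

```
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t a = UINT32_MAX, b = 1, sum;

        /* Computes a + b, writes the (possibly wrapped) result through
         * the pointer, and returns true iff the true result didn't fit. */
        if (__builtin_add_overflow(a, b, &sum))
                printf("overflowed; wrapped result = %u\n", sum);
        else
                printf("sum = %u\n", sum);
        return 0;
}
```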
The closest Rust analogy would be T::carrying_add which returns a pair to achieve a similar result.
And yeah, checking is "basically free" unless it isn't; that's no different here. If you haven't measured, you don't know, same as in every programming language.
It's never been true that you can't write correct software in C or C++; the problem is that in practice you won't do so.
I've been using this as a touchstone for whether or not we are actually going to take security seriously for a long time.
We've moved slightly closer to this, but in a world where we're still arguing over whether memory safety is necessary, we've probably still got a ways to go before we notice that addition silently overflowing is a top-10 security issue. It's the silent top-10 security issue, I guess.
Isn't it often combined with poor bounds checks to be exploitable?
It's not as if Rust or VM-based languages don't help a lot with this.
It isn't, because no ISA implements add like that, so there's always performance on the table if you check every time, and people would probably endlessly moan about how Rust is 20% slower than C on this add-heavy microbenchmark.
That said you can enable overflow checks in Rust's release mode. It's literally two lines:
[profile.release]
overflow-checks = true
I wonder if it would make sense for ISAs to have trapping versions of add and subtract. RISC-V's justification for not doing that is that it's only a couple more instructions to check afterwards. It would be interesting to see the performance difference of `overflow-checks = true` on high-performance RISC-V chips once they are available.
I think it is 3 extra instructions on RISC-V if you add signed numbers, so 1 addition (the most popular operation) turns into 4 instructions. What are those people thinking? I generally like RISC-V, but this part, in my opinion, is wrong. They should just have added an "overflow enabled" bit to the add instruction.
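For reference, a C sketch of one detection trick such a check can lower to on ISAs without trapping adds (the two xors, the and, and the branch are where the extra instructions go):

```
#include <limits.h>
#include <stdio.h>

static int checked_add(int a, int b, int *out)
{
        /* Do the add in unsigned so the wrap itself is well-defined. */
        int sum = (int)((unsigned)a + (unsigned)b);

        /* Signed overflow iff a and b share a sign and sum does not:
         * then the sign bit of (a ^ sum) & (b ^ sum) is set. */
        if (((a ^ sum) & (b ^ sum)) < 0)
                return 0;   /* overflowed */
        *out = sum;
        return 1;
}

int main(void)
{
        int r;
        printf("%s\n", checked_add(INT_MAX, 1, &r) ? "ok" : "overflow");
        return 0;
}
```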
It does seem like "What if we offer checked integer arithmetic operations?" is a cheaper experiment than CHERI's "What if we mechanically reify extent-based provenance?"
> It isn't because no ISA implements add like that
I followed the link to the Pixel 9 bug/exploit and saw this:
"Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user"
Haven't we learned our lesson on this? Don't read and act on my sms messages without me asking you to!
> Haven't we learned our lesson on this?
What is the purported lesson we should have learned? Users choose phones with rich messaging features. This was a major selling point for iPhone, first, with iMessage, and later with Android until iOS caught up with RCS.
One of the things Apple's Lockdown mode does is disable previews of images or links that are sent to you.
It seems like the lesson is that you shouldn't be processing data sent to the device by random strangers without the user explicitly choosing to open the file or follow the link.
> What is the purported lesson we should have learned?
Not to automatically execute things within data that we have been sent.
I think it's "don't use parsers written in unsafe languages".
> Don't read and act on my sms messages without me asking you to!
Somewhere there's an NSA agent reading this and laughing like a gin addict on payday.
I don't know if that is the right lesson. It's kind of like "don't click on links"... Err, no. You should be able to click any link without getting hacked.
Sure, in an ideal world different from this one. You should be able to do anything on any device and never worry about security.
Unfortunately, since we don't live in that world, we need to not open links, emails, text messages, etc, if they are sketchy.
A better solution may someday exist, but as of yet has not been found.
How are they going to make trillions of dollars if not!?
"move fast and break things"
"But the users never know what they want to do! We have to shove suggestions and recommendations at them at every! waking! moment!"
"This is notably fast given that this is the first time that an Android driver bug I reported was patched within 90 days of the vendor first learning about the vulnerability."
This makes me feel better about Google, but also makes me kind of frightened of the rest of Android. I wonder what Apple's response time is?
Android vendors have been notorious about updates for a long time. Part of that is supposedly because all of the phone companies want to distinguish themselves from each other, and so they all want to fork the default Android UI so they can offer some psychedelic UI vision with some brand-specific features. But that means that when an update to stock Android comes out, it's a lot of work to migrate.
I don't think Android UI customization is the main issue. Many vendors are not even able to keep device firmware and Linux kernels in sync. Qualcomm and others are doing monthly bulletins:
https://docs.qualcomm.com/securitybulletin/may-2026-bulletin...
Since a lot of vendors are months or even years behind, their phones are full of known holes.
When it comes to security, basically: GrapheneOS > iOS > PixelOS >> Samsung OneUI >>>>>>>> everybody else.
Sadly, Samsung lets anyone who pays enough push bloatware and analytics on their phones. E.g. AppCloud from an Isreali company, Meta services that stay even when you remove Meta apps (only removable with ADB/UAD), etc. So there are only three somewhat serious options (and for two of them, you still give a lot of analytics to Apple or Google).
I've reported security bugs to Apple before. Was a couple years back but I remember it taking around 6 months to patch (there was a couple back and forth for me to get a more reliable POC). Maybe 2 months from when I submitted a POC with 100% reproducibility
At least in the past there has been instances where Apple sat on security bugs for years until they were fixed, one example: https://jonbottarini.com/2021/12/09/dont-reply-a-clever-phis...
I've heard they cleaned up their program recently to respond much quicker nowadays
Not sure how much it helps, but I just run all my Apple devices in "Lockdown mode", don't install apps (use Safari), and try to mostly use Safari in private sandboxed mode.
This makes sense if you’re a human-rights journalist working in a dangerous country, with the threat of state-level actors looking to compromise you.
If you’re not then this seems quite paranoid, bordering on LARPing.
LARPing is imagining that Lockdown mode protects you from state-level actors. It is frankly baffling why a industry that has been laughing for literal decades at even the possibility of stopping state-level actors just turns around and uncritically believes Apple's marketing team with literally zero support, evidence or proof except for a long track record of failure. You would think that extraordinary claims would demand extraordinary evidence.
We have seen multiple software hacks resulting in >10 million dollar payouts. Apple's bug bounty program only pays out 4 million dollars (2 million dollars (2x) more than non-Lockdown) for a zero-click total compromise that can trivially worm to take down hundreds of millions of iPhones simultaneously. Even at the low end of that cyberattack payout range that is still a >2x ROI if your successful cyberattack depends on a iPhone zero-click, with many publicly known attacks being in the 10x ROI range. Lockdown mode, at best, raises the bar slightly for commercial profit-motivated attackers and reduces their profit margin from wildly profitable to slightly less, but still, wildly profitable.
And of course I am using the Apple bug bounty program as merely a available metric with at least some semblance of objective support. There are zero certifications, audits, or analysis that Apple has even attempted that would confirm any claim of protection against state level actors.
I strongly disagree that there is no evidence that Lockdown mode is effective; there have been numerous exposed, active iOS exploitation campaigns of which none have worked against Lockdown mode. When we're trying to prove a negative, that's actually some of the strongest evidence we can get.
The economics of the device exploitation industry are completely orthogonal from bug bounty payouts; the markets only overlap at the _extreme_ fringes. Trying to use one as a proxy for the other is meaningless.
> We have seen multiple software hacks resulting in >10 million dollar payouts
This sets a nice price bar for exploitation. Is someone willing to pay 10+ million dollars to get access to your phone?
The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode.
> click total compromise that can trivially worm to take down hundreds of millions of iPhones simultaneously
Where is the profit motive in doing this? Possibility is one thing, but a realistic threat is another.
Is someone willing to pay 10+ million dollars to get access to your phone?
Not yours specifically usually, but there is a lot of money in a general tool that law enforcement can use to read out phones. Of course, most of them focus on physical access. In the few Cellebrite reports/presentations that have leaked, iPhones would fall after a relatively short time (IIRC a few months), but did better than most Android phones (except GrapheneOS).
Also, sometimes you do not need the 10M exploit, you can buy many cheaper exploits and make a chain yourself.
The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode
If they hit you with a metal pipe, it's likely that you won't survive even if you give up your passcode. So most likely you are protecting something or someone else. Set up a duress PIN so that you have options in that case.
... really? Zero-click RCEs can be used on arbitrarily many phones until they are discovered which usually takes on the order of months. You do not need to burn them on every individual target.
As a example of how they might be used in that fashion for profit, NSO group had a revenue of 240 million dollars in 2020. Many of their customers were governments who wanted to spy on activists and journalists. NSO group was in the business of economies of scale to democratize access to journalist devices by reusing a small stockpile of exploits across many targets with enough revenue to assure a steady stream of new exploits as fast as they were burned.
You’re right, I misstated. It’s not 10 million per exploitation, it instead limits the pool of people who can exploit you to those willing and have the ability to spend 10 million+ on an exploit.
That is still quite a small pool, and there are other network effects preventing any Joe blogs with that much capital from launching an exploitation campaign.
Again, no. You do not need to spend 10 million on a exploit if you are working with a company like NSO Group who sells white-glove access to target individual as a service. The cost lower bound is going to be on the order of ((cost of exploit) / (number of times exploit can be used)) and the denominator there is going to easily be in the hundreds to thousands. Of course prices are likely to be higher than the minimum due to profit margins.
To, once again, use the same example of NSO Group as it is infamous and well-documented [1]. In 2016 it was 500,000 $ upfront and 650,000 $/year for 10 devices. That article claims Saudi Arabia was monitoring 15,000 phones at a average cost of 10,000 $/phone. In [2] it was 7 million $ for 15 devices, but the upfront versus marginal cost per device is not broken down. And this was a relatively "above-board" company in the sense that they were a legitimate business entity with government deals which commands a premium relative to random unknown blackhat organization with no reputation.
And again, my original comment was discussing commercial profit-motivated attackers for which 1 million $ is easily within reach and just a cost of doing business to unlock greater amounts of profit. That is less than the cost of setting up a McDonalds. There is a vast, vast gap spanning factors of millions between Joe Schmo and commercial actors and a even vaster gap to state actors. There is no evidence that Lockdown mode is adequate against even commercial actors, let alone the vastly more capable state actors.
[1] https://prodefence.io/news/pegasus-spyware-operating-costs-c...
[2] https://www.reuters.com/business/media-telecom/meta-suit-aga...
"If you’re not then this seems quite paranoid, bordering on LARPing."
There are sooooooo many other situations where such device lockdown is warranted. Government intrusion, sensitive industry, journalism, anything ITAR/EAR covered, and more. Your reduction to a single issue is absurd.
I thought it was common knowledge that all kinds of Americans (not to mention other nations) are routinely compromised with zero-clicks, mostly developed in the US and Israel.
This is the kind of assertion without evidence that just muddies the waters. “All kinds” of people is so vague as to be an almost entirely vacuous category and “routine” means almost nothing without an actual quantification of how prevalent and frequent the problem is.
It’s undeniable that the proverbial guns for hire make it easy (if not cheap) to target basically anyone — but just because the vibes are bad doesn’t mean we can just say “it’s common knowledge that …”
The fact is mitigations are costly in terms of convenience and ease of use. Helping people make informed choices about whether to enable mitigations and bear that cost requires more than platitudes imo
Are you at an above average risk of being targeted by a state level threat actor?
No, just keep the usual tax/finacial/health data on my devices.
I consider Anthropic's Mythros security bug finder mostly marketing, but other things worry me that there might be a global hack contagion: for example, a few months ago I saw in the news that an executive at a US security company was caught selling information to a hacking group.
Except for disabled Javascript compilation possibly slowing down web sites, not getting some attachments in messages, and some graphics not showing up on some web sites, having Lockdown mode set doesn't seem to affect anything I do. For dev I use VPSs with ssh set for ensuring SSH agent forwarding is strictly disabled, as are reverse tunnels.
It seems like doing little things like this make sense because it is such a tiny hassle to be a little safer.
Given that 42% of Android devices are unpatched as of now [1] it's an interesting decision on their part to release their research and make them all vulnerable
[1] https://gs.statcounter.com/android-version-market-share [2] https://www.cybersecurity-insiders.com/survey-reveals-over-1...
That's perennially the case. A big portion of the world buys bargain-basement android devices that are unsupported right out of the box.
Search "android phone" on aliexpress and there's top selling phones on the first page running android 8, android 10, etc. They're not getting security updates of any sort, let alone driver updates.
On brand-name android devices you can count on getting OS security updates. The first-party vendor can build and push these themselves. Driver and firmware security updates are a maybe. These often have to come from an upstream vendor, who may or may not care to fix the issues.
Smaller brands often ship budget android devices and never update them.
Semi-related: has the rate of published exploits picked up as if late, or is it simply the fact that there’s hype around ai as security tool (offense or defense) so it’s simply in the news more often?
Feels like there’s something new every other day - linux, windows, mobile, various commonplace tools used by everybody, the list goes on
I just did some analysis on this last weekend, in 2024 there were roughly 100 CVEs published every day. In April we hit approximately 200 per day.
Going backwards from 2023, the doubling interval for published CVEs was approximately 4 to 4 1/2 years. Since then it’s approximately two years.
There has definitely been a rapid uptick.
Published CVEs seems a bad metric to use for this- unless we assume that the ratio of really nasty vulns/not-too-bad vulns is consistent.
Also the question remains if more CVE laden code was produced in the first place, instead of automated detection improvements.
It's easier to find a needle in the haystack if the haystack is 50% needles.
have the AI vibe code crappy apps so the related AI vuln finder can fix them
just doubled the value and use cases of your AI solution!
Another reason published CVEs isn't a great metric is that one of the largest contributors to the number of CVEs significantly increasing in the past couple years has been that the Linux kernel now submits almost all bugs as CVEs which wasn't the case before.
I wouldn't look at the numbers. There used to be a lot of "scam" CVEs before LLMs, that weren't actual vulns. Nowadays its more popular to collect CVEs, and there is a lot of people scanning with LLMs and reporting without checking (like it was in case of cURL). These CVEs are often not verified by anyone.
There probably is more vulnerabilities found, but the amount of CVEs is not a good metric.
Did you publish this anywhere? Would love to read more.
The rules around CVE reporting changed recently and it would be expected a lot more are accepted.
If one reads between the lines in part 1, the code in question was introduced due to AI features and the exploit was found by humans:
https://projectzero.google/2026/01/pixel-0-click-part-1.html
So AI usage increases bugs and humans have to weed them out!
There are reports from people who manage security bugs in OSS that there has been a big uptick in reports: initially low quality ones that were mostly bogus, but now many more legitimate ones as well.
This is pure guesswork, I am not a security researcher, but my guess would be that AI is increasing the amount of low quality exploitable attack surface available, while simultaneously providing security researchers with an accelerant for their work. Which is to say, its great if you use it well and really bad if you use it poorly.
Not low quality if it works!
But it is low quality if it's vulnerable to exploits. And if that's the case, I wouldn't say it really "works".
The low quality refers to the features with security holes. So no, it didn't work (in this hypothetical).
only until it's ransomware'd
https://lwn.net/Articles/1065620/
I've reported a few very serious issues to vendors of widely used tools in recent weeks, and it's been even more difficult than usual to get them to be acknowledged - the teams that respond are reportedly swamped.
There definitely is hype around AI as a security tool right now. Someone else pointed out that the rate of CVEs has gone up, but that doesn't tell is why.
This article doesn't mention AI helping find this bug. Seems like humans can still do that on their own.
A bit of both (it finds new things and news is hyped/blown up), and a third factor is that more people are trying to find things. The authors might have been able to do this already, because you still need to have a decent understanding to get useful work out of it and verify the results, but the shiny new toy and FOMO factors make people spend more hours on it that they'd have spent doing something else otherwise
I've seen quite a few saying that they were inspired by the previous report that is presented as "the model pointed us to it" and you get FOMO about missing out if you don't snatch bugs now as well
I think AI helped researchers navigate better in the codebase, not necessarily the AI is succeeding in exploiting.
[dead]
[dead]
> This is rendered even easier by the fact that the kernel is always at the same physical address on Pixel
OpenBSD fixed this back in 2017.
Do we have any evidence on how AI has affected NSO et als’ businesses? Does it render them obsolete? Or are they now superpowered?
[dead]
Hmmm... I'd like someone to double check my thinking here. I posted this exact prompt for gpt 5.5 xhigh:
```
does this look right to you? don't do any searches or check memory, just think through first principles
static int vpu_mmap(struct file fp, struct vm_area_struct vm) { unsigned long pfn; struct vpu_core core = container_of(fp->f_inode->i_cdev, struct vpu_core, cdev); vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP); / This is a CSRs mapping, use pgprot_device */ vm->vm_page_prot = pgprot_device(vm->vm_page_prot); pfn = core->paddr >> PAGE_SHIFT; return remap_pfn_range(vm, vm->vm_start, pfn, vm->vm_end-vm->vm_start, vm->vm_page_prot) ? -EAGAIN : 0; }
```
And it correctly identified the issue at hand, without web searches. I'd love to try something more comprehensive, e.g. shoving whole chunks of the codebase into the prompt instead of just the specific function, but it seems the latent ability to catch security exploits is there.
So then.... I wonder how this got out in the first place. I know I'm using a toy example but would love to learn more!
That's not really a fair test because you're leading the model pretty hard, even if the prompt doesn't specifically say there's a bug to be found. It's basically the same objections that people raised in the thread where someone claimed current models are just as good as mythos.
right exactly, but clearly it's possible to elicit the behavior we want in the model, which means the capabilities are there!
The more interesting question is, how many issues will this prompt report to you in random code that is perfectly fine?
As an anecdote, I provided fragnesia.c and the subsequent proposed patch to fix the issue and while it was not able to discover an entirely new vulnerability, I think it was able to find 2 new ways of exploiting the same underlying bug.
This is quite impressive considering I’m just a dumbass with a Claude subscription.
How do you know it didn't search the web?
no tool calls!
I pasted the code into claude Opus 4.7 with no internet access and just asked it to just tell me what the function did, and it explained it and also called out the bug. I did not tell it to look for bugs:
> Observations & Potential Issues A few things worth flagging: 1. No bounds checking on the mapping size. Userspace controls vm_end - vm_start and vm->vm_pgoff. Here vm_pgoff is ignored entirely and the size is trusted blindly. If the VPU's register block is, say, 64KB but userspace requests a 1MB mapping, the driver will happily map 1MB of physical address space starting at core->paddr — potentially exposing whatever hardware happens to live at adjacent physical addresses. A defensive check would be:
---
70 day release cycles are very quickly not going to be fast enough to stop widespread use of exploits when you have bots able to scan every PR on every open source project as it comes out.
It's the usual problem of having no consequences for the person who wrote catastrophic code like this and the company who released it. If the person who wrote this were to be imprisoned for the rest of their life, for instance, or if the company were to be fined $1 million per user put at risk (which would probably mean a $1-10 trillion fine for Google -enough to trigger bankruptcy), then things would be very different
If this rule were implemented, would you be walking free right now? Think it over.
Pretty sure the million dollars was not meant seriously. There are plenty of regulated fields in which people still participate despite various risks of liability: professional engineers, doctors, every Uber driver in the US who could potentially be punished for negligent driving while on the job. The point, I think, is that the current level of responsibility for writing bad code is essentially zero, but should probably be higher for some applications.
> the current level of responsibility for writing bad code is essentially zero, but should probably be higher for some applications
I agree that e.g. working on an OS should require guild-type credentials. But I don't know if most SWEs understand the professional-standards requirements such organisations are empowered to enforce on their members.
I most definitely don't agree with him, and I find the idea absolutely repugnant. Playing devil's advocate, though: I would be much more careful with the code I write if regulators passed some dogshit law like that.
I would stop writing code for money.
I understand that, though I wouldn't stop. I'd just go much slower and radically change my methodology. Failures in other engineering domains come with massive legal consequences, and they have for a very long time. In Mesopotamia, if a house collapsed and killed someone inside, the builder was put to death. People still built houses in the hundreds of thousands.
It really just introduces a legal burden to prove competence and good-faith work, and it gives you immense power to push back on ridiculous deadlines. Your managers are legally responsible too, and if they push beyond what's reasonable you have just cause to bring them to court in a way that you currently don't. To re-emphasize: I don't think this is a better world, but it's not unlivable.
Sure, but home builders today very rarely get put to death, and it takes a particular kind of intentional fuck-up to land a plumber, a drywaller, or an electrician in prison.
If I were personally liable for damages, and there was an insurance program of some sort, similar to how doctors and dentists practice, sure, I'd probably still write code, very carefully. But if there was a decent chance of me spending the rest of my life in prison because of something I wrote on a Friday at 4pm under some amount of stress? No thanks. I can re-train as a plumber and stand knee-deep in shit all day.
Well, one scenario would be that everybody who writes code ends up doing so for money.
Take my friend, who is a property lawyer. The firm she works for buys her insurance, because it would be insane to operate without insurance, but the only insurance available is personal insurance: it insures a specific person to do property law. So, although her day job is helping that $100Bn farm-equipment company buy a $10M new factory from a $100Bn construction firm, at the weekend she is covered by that same insurance when she represents her friend buying a $500k cottage. AIUI this is a completely normal arrangement.
If that were the situation for programming, the company would buy your $100M exploit insurance because they need a programmer, but since it's personal insurance you could work on your game-jam game under the same cover. And it'd be crazy to just go commando if you don't have employment and thus insurance, in case your "Galaga but also Blue Prince and somehow a visual novel" game-jam entry somehow causes a $10M damages payment.
Or it becomes standard to have exclusions: pilots, for example, often have extensive insurance that covers the company when they're flying for hire, but covers nothing when puttering around in a Cessna on the weekend.
Insurance companies are very, very good at figuring out how to identify and price risk, once motivated to do so.
Sure, as you'd expect, lawyers are better at cutting a good deal for themselves than other professions are, but I wanted to cite an example where it does work out.
Also, from what I've seen, there are way too many GA accidents involving airline pilots for the insurers to eat that loss. They almost invariably have superior skills, but some of them more than compensate with risk-taking.
It's not about lawyers cutting a good deal for themselves. Liability issues get complicated along multiple dimensions when they involve licensed professionals, even when their day-to-day working relationship is indistinguishable from any other employee's. And lawyers, even more than doctors, are at the furthest extreme of this complication spectrum. Even were software engineering to become a mandatorily licensed profession like some other engineering disciplines, there's little reason to believe insurance products would mirror those in the legal profession. I seriously doubt we'd end up in a place where employers commonly, let alone routinely, pay to cover liability for work outside the scope of employment.
It's because it's simpler to insure "everything real estate" or whatever than to try to cut out exclusions for (relatively) cheap properties.
But if they noticed that they were paying out more than expected on these $500k deals, the insurance would change quite quickly.
The same thing happened with GA insurance: there was an assumption that airline pilots would be safer, but it didn't really turn out as expected, because a 747 has a heck of a lot more "keep you safe" doohickeys and doesn't fly low to the ground much.
We should roll this out for everything.
Someone T-bones you in a parking lot, a chef causes food poisoning, a plumber's leak floods your bathroom, a personal trainer pushes you to injury, a mislabeled allergen on food, movers break your armoire, a roofer leaves a leak: I bet we'd see a lot less of all that if a $1MM fine plus life in jail loomed over everyone.
Nobody would want to do business, but boy would we be in a golden age.
A golden age of prisons and debt collectors.
> If the person who wrote this were to be imprisoned for the rest of their life [...] then things would be very different
Yes, they certainly would. You wouldn't have smartphones, for instance.
I can't tell if this is satirical or not. But there have been so many takes like this recently (hold the website liable for user content, hold the corporate developer liable for zero-days in a project they happened to touch) that would all result in the same outcome (no more product at all) that I can't help but wonder if there's some Luddite psy-op trying desperately to bring us back to a pre-Internet era in any way it can...
Yes...no one would write any code.
Code would be written over TOR and passed around on unmarked USB sticks in back alleys.
Pssst - hey, kid, want some GNU?
This is a great bug report! I am not a kernel expert by any means, even though I read up on it a bit... 10+ years ago. And I was still able to follow along and see what was going on.
It does make me scared about what other dangers lurk, since this was a really bad one and it was so little work to find.
Also of note: so many security issues lately have been found using AI. This report makes me think two things:
1. Expertise is still immensely valuable; the more niche, the more valuable.
2. There are lots of niches still where AI doesn't dominate...
There have been some V4L2 enhancements to support hardware video decoding pending a merge for a long time; they do seem to be in the mainline kernel now. I guess people didn't want to wait that long.
Project Zero has to report bugs to Android through the front door, and deal with Android VRP severity classification? I always assumed they could just walk over to the Android office and advocate for their bugs, face to face.
If they felt it was too painful to do it the "normal" way, then that would probably be the next thing for Project Zero to try to get fixed.
hm. Surprised there aren't idioms like copy_(to|from)_user for these kinds of kernel-to-userspace mappings for custom device nodes, ones that ensure bounds are supplied...
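For what it's worth, mainline does have a helper close to that idiom: vm_iomap_memory() takes the physical base and the true length of a region and performs the offset/size validation before remapping. A minimal sketch of the handler rewritten around it, assuming a hypothetical core->csr_size field for the register block length:
```
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
	struct vpu_core *core = container_of(fp->f_inode->i_cdev,
					     struct vpu_core, cdev);

	vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
	/* CSR mapping, so keep pgprot_device() */
	vm->vm_page_prot = pgprot_device(vm->vm_page_prot);

	/* vm_iomap_memory() rejects any vma whose offset/size would reach
	 * past [core->paddr, core->paddr + core->csr_size), then remaps.
	 * core->csr_size is an assumed field, not in the original driver. */
	return vm_iomap_memory(vm, core->paddr, core->csr_size);
}
```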
Fascinating how GrapheneOS achieves a high security level on the same hardware where Google failed to even randomize Android's kernel location.
Randomizing the kernel location is of marginal utility at best. There are so many info leaks that KASLR ends up being only a small speed bump on the way to exploitation.
Here's a cool project that inventories all your KASLR info leaks: https://github.com/bcoles/kasld
Is Graphene vulnerable to these exploits?
It's easy to be secure if you just remove features. There's obvious tension here.
Could you be any more specific about what features they've removed such that the hardening functions work? Because I think there are none
They're quite open about it. https://grapheneos.org/features#attack-surface-reduction
Google has lost its focus with Pixel phones.
Focus on what, selling ads? What do you mean by a focus they used to have and have lost? I'm not at all negative about the paid offerings they've added over time, from Workspace to YouTube to hardware. I'm still very conflicted about giving Google, of all companies, my custom, but for e.g. phones it's hard to avoid, and second-hand the prices are really quite competitive for a tangible hardware product (not a software subscription that you're stuck on). Shifting focus to making these Pixel devices doesn't seem bad to me, so long as they remain open.
KASLR isn't an effective mitigation against anything, and to me this is part of GrapheneOS's catalog of superficially impressive but meaningless claims.
I've not seen someone refer to a portion of GrapheneOS's mitigations as superficial and meaningless before. What might an OS with significant improvements to usable attack surface reduction and exploit mitigations look like to you? What sort of things (given a team of less than a dozen contending with OS updates, upgrades and device support) would you have liked to see implemented?
And that is against a device whose BSP is actually open source and available for research!
Now imagine the dark horrors hiding in the BSPs of other Android devices... or embedded devices in general.
Frankly, it should be a requirement of Google's certification process that everything regarding drivers gets upstreamed into the Linux kernel. Yes, even if this adds quite a time delay to the usual hardware development process.
I read about the Pixel 9 Dolby decoder bug, and it is based on an integer overflow. It was a mistake to allow the "+" operator to overflow silently, and this should have been fixed in new languages like Rust, but it was not.
In Rust, the decision about whether to pay for overflow checks or just wrap (because all modern hardware will just wrap if you don't check and that's cheaper) is a choice you can make when compiling software. By default you get checks except in release builds, but you can choose checks everywhere, even in release builds, or no checks even in debug.
By definition, in Rust it's incorrect to overflow the non-wrapping integer types, so if you intend, say, wrapping behavior you should use the explicit wrapping operations such as wrapping_add, or the Wrapping<T> types in which the default operators do wrap. But if you turn off checks it's still memory-safe to be wrong: the arithmetic wraps, just as if you'd called the wrapping operations by hand instead of the non-wrapping ones.
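To make that menu concrete, here's a small sketch using the stable standard-library APIs (example values chosen arbitrarily):
```
use std::num::Wrapping;

fn main() {
    let (a, b): (u8, u8) = (250, 10);

    // a + b would panic here in a debug build (or with overflow checks on).
    let w = a.wrapping_add(b);         // 4: wraps, by explicit request
    let c = a.checked_add(b);          // None: overflow reported, not silent
    let s = a.saturating_add(b);       // 255: clamps at u8::MAX
    let (v, o) = a.overflowing_add(b); // (4, true): wrapped value plus flag
    let t = Wrapping(a) + Wrapping(b); // Wrapping(4): `+` wraps on this type

    println!("{w} {c:?} {s} {v} {o} {}", t.0);
}
```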
That Dolby overflow code looks awkward enough that I can't imagine writing it in Rust even if the checking were off, but I wasn't there. However, the reason it's on Project Zero is that it resulted in an out-of-bounds access, and Rust would have prevented that anyway.
> is a choice you can make when compiling software
That is not a solution, because it means the code can behave differently, and expose a vulnerability, if the wrong compilation settings are chosen.
Functions like wrapping_add have such long names that nobody wants to use them, and they make the code ugly. Instead, "+" should be used for addition with exceptions on overflow, and something like "wrap+" or "<+>" or "[+]" for wrapping addition.
That's how people work: they will choose the laziest path (the simplest function name), and this is why you should use "+" for the safer, non-wrapping addition and make the symbol for wrapping addition long and unattractive. Make writing unsafe code harder. This is just basic psychology.
C has the same problem: it has functions that check for overflow, but they too have long, ugly names that discourage their use.
> modern hardware will just wrap if you don't check and that's cheaper
So you suggest that because x86 is a poorly designed architecture, we should adapt programming languages to its poor design? x86 will be gone sooner or later anyway.
Also, there are languages like JS, Python, and Swift which chose the right path; it is only C and Rust developers who seem to be backwards.
__builtin_add_overflow exists, and it's basically free on most CPUs out there.
> __builtin_add_overflow exists, and it's basically free on most CPUs out there.
This is a very C-flavoured "solution". For those who haven't seen it: it takes a pointer (!), computes the addition, writes the result to the pointed-at integer, and returns true if the result didn't fit (i.e. it overflowed), false otherwise.
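For instance, a minimal standalone demo of those semantics (nothing kernel-specific here):
```
#include <stdio.h>
#include <limits.h>

int main(void)
{
    int sum;

    /* Returns true on overflow; the wrapped result is still written to sum. */
    if (__builtin_add_overflow(INT_MAX, 1, &sum))
        printf("overflowed, sum wrapped to %d\n", sum);

    if (!__builtin_add_overflow(40, 2, &sum))
        printf("ok, sum = %d\n", sum);

    return 0;
}
```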
The closest Rust analogy would be T::carrying_add which returns a pair to achieve a similar result.
And yeah, checking is "basically free" until it isn't; that's no different here. If you haven't measured, you don't know, same as in every programming language.
It's never been true that you can't write correct software in C or C++; the problem is that in practice you won't.
I've been using this as a touchstone for whether or not we are actually going to take security seriously for a long time.
We've moved slightly closer to this, but in a world where we're still arguing over memory safety being necessary we've probably still got a ways to go before we notice that addition silently overflowing is a top-10 security issue. It's the silent top-10 security issue, I guess.
Isn't it often combined with poor bounds checks to be exploitable? It's not as if Rust or VM-based languages don't help a lot with this.
It isn't free, because no ISA implements add like that, so there's always performance left on the table if you check every time, and people would probably endlessly moan about how Rust is 20% slower than C on some add-heavy microbenchmark.
That said, you can enable overflow checks in Rust's release mode. It's literally two lines:
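(The two lines, for reference, are Cargo's standard release-profile override; the documented key name is overflow-checks:)
```
[profile.release]
overflow-checks = true
```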
I wonder if it would make sense for ISAs to have trapping versions of add and subtract. RISC-V's justification for not doing that is that it's only a couple more instructions to check afterwards. It would be interesting to see the performance difference of `overflow-checks = true` on high-performance RISC-V chips once they are available.
I think it is 3 extra instructions on RISC-V if you add signed numbers, so 1 addition (the most popular operation) turns into 4 instructions. What are those people thinking? I generally like RISC-V, but this part, in my opinion, is wrong. They should just have added an "overflow enabled" bit to the add instruction.
It does seem like "What if we offer checked integer arithmetic operations?" is a cheaper experiment than CHERI's "What if we mechanically reify extent-based provenance?"
> It isn't free, because no ISA implements add like that
MIPS does (did?). And VAX, IBM/360, ....