Link to the slides (almost missed it when i was reading): https://farzon.org/files/presentations/Thotcon_talk_may_2025...
Which provides way more information than the article
This is decidedly not what I’d expect to be discussed at Thotcon. That said, super interesting!
As an avid pirate, I’ll say these days even the Denuvo games that went years without cracks now have “cracks”, although they rely on hypervisor fixes, disabling secure boot, and giving the hypervisor cracks unfettered access to your system to intercept the Denuvo checks. [0] It’s a dangerous game we’re playing to keep these AAA games’ bottom lines fat.
[0] https://www.thefpsreview.com/2026/04/03/denuvo-has-been-brok...
The main site for getting these hypervisor cracks thoroughly vets them, requiring the devs to publish the source code for it all.
> disabling secure boot
...making it even more clear what "secure" boot actually secures: the control others have over your own computer.
It has its uses. If, for example, a company wants to issue fleet computers to workers or school to students, you want to have secure boot on those devices to prevent tampering. Secure boot makes it so that physical access is not the end all of security.
If you own the computer yourself, you "ought" to be able to turn off these measures in a way that is undetectable. Being unable to do so would be the red line imho - and looking at those hypervisor cracks available, it's not quite being crossed yet. The pessimistic, but realistic, prediction is that various media companies will want and lobby for machines to have unbreakable enclaves that they can "trust" to DRM your machine; it's just boiling the frog right now. Windows 11's new TPM requirement is testament to that.
Switch to Linux asap - that's about the only thing a consumer is capable of doing.
> If, for example, a company wants to issue fleet computers to workers or school to students, you want to have secure boot on those devices to prevent tampering. Secure boot makes it so that physical access is not the end all of security.
Measured boot is actually better for that: you can still boot whatever you want, however you want, but the hashes will differ, which can be used for e.g. remote attestation. Secure boot has to prevent "unauthorized" code (whatever that means for each setup) from ever running; if it does run, game over. That means less freedom and flexibility.
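The difference the comment describes can be sketched as a hash chain, roughly the way a TPM PCR "extend" works. This is a toy illustration (the stage names are made up, and a real TPM extends a bank of PCRs with vendor-defined event digests):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new PCR = SHA256(old PCR || SHA256(measured data)).
    # The PCR can only ever be folded forward, never set directly.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Boot starts from an all-zero PCR; each stage is folded in, in order.
pcr = bytes(32)
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = pcr_extend(pcr, stage)

# Changing any stage (or the order of stages) changes the final value,
# which a remote verifier can detect during attestation - but nothing
# stops the modified machine from booting.
```

The key property is exactly the one claimed above: the machine still boots anything, and only the attested hash reveals what was booted.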
Measured boot isn't any better. Look at Android phones, where it's technically possible to unlock your bootloader, but a ton of apps (e.g., McDonald's and most banking apps) use remote attestation to see whether you did so and will refuse to work if you did.
This is coming. In particular, without a Secure-Boot-enforced allowlist of operating systems, it will be near impossible to verify that an OS connecting to the internet complies with your locality's age verification laws, so it will soon be illegal to run a computer that does not make Secure Boot mandatory and connect it to the network.
If you're starting to think "huh, maybe that's why these age verification laws suddenly became all the rage", you're onto something. Whatever the case, "general purpose computing" is definitely cooked.
The laws in my locality place requirements on the service provider (e.g. the adult website operator), not on random computer owners or manufacturers or software vendors.
Newsom signed a law that places those requirements on every operating system in California, and in practice, organizations tend to comply with California's terrible laws no matter where you are, rather than stopping doing business there or making two variants of their products.
With software it's trivial to have a switch for "California compliant" mode, but in any case, that makes it clear that such criticisms should be directed at California. Other (generally "red") states already had a more reasonable solution: make the sites offering the restricted service liable for their actions just like other businesses.
General purpose computing as it was done in the 1900s is cooked for the average user because there is no market incentive for it to exist. The actual market incentive revolves around apps as they provide user value along with the ability to deploy custom apps.
It is stupid to turn it off. It is incredibly easy to infect your system components without your knowing.
That being said, it does assume a certain trust in firmware vendors / OEMs. If you don't trust those, then don't buy from them.
I think for most people, trusting an OEM and trusting some rando from the internet with a custom hypervisor that requires crippling your system security are totally different things.
You know, they could actually make their hypervisor support secure boot etc., do it properly, and have your system run the cracks without the gaping holes they leave -_-. Lazy.
If you’re downloading torrents and running code with elevated privileges that infects your PC, 99% of people are absolutely hosed at that point anyway. For a home user, I don’t see the real distinction between being owned at an elevated system level and being owned by disabling secure boot.
Pwned at the BIOS level means the pwnage can survive a complete OS reinstall.
As always in security, It Depends™; there are vulnerabilities that only impact systems with secure boot (and result in a situation worse than not having secure boot to begin with).
> there are vulnerabilities that only impact systems with secure boot
Boring claim, obviously true.
> and result in a situation worse than not having secure boot to begin with
A very big claim that requires evidence.
If your system gets locked (e.g. ransomware) and you have secure boot active, then you are out of luck.
See Apple M-series chips: if they get locked, you will never unlock them again.
It would work just as well if the instructions instead told you to enrol your own key and sign the cracks. Those instructions just aren't as popular.
Having an operating system purposefully support installing rootkits is clearly a bad idea. It shouldn't be surprising that you have to turn off security features to install a rootkit.
Anti-cheat drivers are just as much rootkits, and in practice they have vulnerabilities that get far more hosts pwned than cheats do. Let's get Microsoft to stop loading their drivers.
Cheap take
What I've been wondering for a while now: how do the game streaming services run Denuvo titles? Do they get special builds? They won't run on bare-metal hardware but in some kind of VM, right? Wouldn't Denuvo detect that and stop working?
They get their own build. E.g.
* GeForce NOW SDK: https://developer.geforcenow.com/learn/guides/offerings-sdk
* Stadia SDK: developer.stadia.com (offline)
* Xbox Cloud Gaming: https://learn.microsoft.com/en-us/gaming/gdk/docs/features/c...
* ...
Just like every Game Store requires its own build: Steamworks SDK, even GOG: https://docs.gog.com/sdk/
Some games allow browsing files locally for savegames, music library, etc. Imagine if you could do that on the cloud VM.
> * Stadia SDK: developer.stadia.com (offline)
Stadia is completely shut down, and Archive.org has no captures of that subdomain, so any content there is likely lost.
To add to this, almost every time a Denuvo game was “cracked” before the hypervisor methods it was because the dev accidentally published a demo with none of the Denuvo stuff. Happened to Lies of P a couple months after release.
That makes a lot of sense, thanks for clarifying!
> While security researchers love the entropy of randomized function layouts
I don't think any competent security researcher has anything positive to say about "security through obscurity"
At best this is a lawyer's position.
You would think but in my experience, if you ask to just open something up they'll start talking about "defense in depth" and it suddenly matters a lot.
I disagree, obscurity wastes attacker resources and easily fools a lot of simple vulnerability scanners.
Obscurity is totally underrated. Attacker resources are limited.
It’s kind of like having a line of cardboard tanks. It can be helpful in some circumstances, but it can’t always replace actual tanks.
Actually, decoys have been very useful in the Russia–Ukraine war. They are usually decoys of air defense systems or long-range precision fires like HIMARS, and the goal is to waste the opponent's long-range fires, which are limited and/or expensive.
Furthermore, they can also reveal the attacker's position and enable counterfire.
If you have 500 tanks and 500 cardboard tanks, someone with only as many real tanks as you have may not bother attacking. Thus, having the cardboard tanks saved you a battle.
If someone with 1000 tanks attacks, it's a battle you would not have won anyway.
And yet, cardboard tanks have been useful only a handful of times during wartime. Tanks on the other hand have proven their usefulness many times.
thank you, I had this debate at work so many times.
Sure, it's not a security measure as such, but it's still a worthwhile component of the overall defense system.
The problem with this is, you spend a lot of effort for low benefit. You should spend it on actual security instead.
What would be "actual security" in this context?
This isn't about security of the same kind as authentication/encryption etc where security by obscurity is a bad idea. This is an effort where obscurity is almost the only idea there is, and where even a marginal increase in difficulty for tampering/inspecting/exploiting is well worth it.
The one not described as "security through obscurity".
My point is: "security through obscurity is bad" and "security through obscurity isn't real security" are both incorrect.
They apply to different threats and different contexts. When you have code running on the attacker's system, at normal privilege so they can pick it apart, then obscurity is basically all you have. So the only question to answer is: do you want a quick form of security through obscurity, or do you not? If it delivers tangible benefits that outweigh the costs, then why would you not?
What one is aiming for here is just slowing down and annoying an attacker. Because it's the best you can do.
Changing a port and enabling ASLR are not "a lot of effort".
Changing the port is not the kind of security measure that will consume a lot of attacker resources.
Sure, it'll do nothing to stop a determined attacker, but it does wonders to stop the noise from passive scanners.
Are you familiar with the Swiss cheese model of risk management[0]? Obscurity is just another slice of Swiss cheese. It's not your only security measure. You still use all the other measures.
[0] https://en.wikipedia.org/wiki/Swiss_cheese_model
It will conserve a lot of defender resources, it will completely bypass all mass scans, and it will make "determined attackers" much more visible as they will have to find the port first which will show up in logs and potentially land them in a tarpit.
You can think of obscurity as concealment. You can't be attacked if you are not seen, and seeing you costs the attacker far more resources.
Security through obscurity is bad only if the obscurity is the only measure
It's not something to over-index on, because it's not a strong protection measure on its own. It simply raises the overall cost to attack and analyze a system.
Take the PS5 for example. It has execute-only memory. Even if you find a bug, how do you exploit it if you can't read the executable text of your ROP/JOP target?
Security through obscurity is an excellent first-line defense, as long as you have other real defenses at the next layer.
Security through obscurity is like a bike lock. It can be cracked with the right tools and effort, but it massively improves security compared to leaving the bike out unlocked.
It’s not about security, it’s about wasting a cracker’s time.
Some people find cracking them interesting and fun.
Agreed. I’ve done trivial obfuscation for games. In my observation, if you make it trivial to hack your game, huge numbers will trivially hack it. If you make it even slightly non-trivial, the numbers decrease exponentially. The more you waste their time, put up hurdles, the lower the number of successful hackers goes.
The goal is not perfect security in all situations for all products. The goal is to make the effort required for your particular product excessive compared to the payoff.
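As a minimal illustration of the kind of "slightly non-trivial" hurdle described above, here is a toy XOR obfuscation of string constants, the sort of thing that defeats a casual `strings`-over-the-binary pass. The key and the asset name are made up, and a real scheme would be stronger than a single-byte XOR:

```python
KEY = 0x5A  # toy single-byte key; a real build would at least vary this per release

def obfuscate(s: str) -> bytes:
    # XOR each byte so plain strings never appear verbatim in the shipped binary.
    return bytes(b ^ KEY for b in s.encode())

def deobfuscate(blob: bytes) -> str:
    # XOR is its own inverse, so the game decodes strings at runtime.
    return bytes(b ^ KEY for b in blob).decode()

secret = obfuscate("unreleased_boss_phase_2")  # hypothetical datamining target
assert b"unreleased" not in secret             # no longer findable by a naive search
assert deobfuscate(secret) == "unreleased_boss_phase_2"
```

Trivial to break for anyone who disassembles the decoder, but per the observation above, even this small hurdle sharply cuts the number of people who bother.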
ASLR (for example) is a pretty standard technique; I thought all commercial OSes enabled it by default. What's the purpose of picking at this portion?
I’ve noticed that LLMs can effortlessly read minified JS. How do they do with obfuscated binary code? I wonder if the days of obfuscation are numbered now that the tedious job of de-obfuscation can be automated.
Between this and rootkits masquerading as anticheat, video games are starting to look indistinguishable from malware
there is an immense difference between obfuscating the binary you ship for your game and requiring rootkit-level anti-cheat systems to play your game.
it is wild to imply they are remotely the same in their effect on the user. one is literal malware, and the other shares 0 of the capabilities or effects of malware.
I'm a bit perplexed by the choice of Nintendo Switch as the example hardware. I was under the impression that the Switch was locked down and you can't run offset-based cheat software like Cheat Engine on it.
Echoing the other comments here - why? What is the threat model here and how does this protect you from it?
the threat is people who cheat in games. obfuscation slows them down, but incurs a performance cost. this work is focused on reducing the performance cost.
- from the slides
Exactly. That and in game currencies. You like competing in games, or for game-bucks? Well you need some level of obfuscation and hardening to make that viable.
From my understanding the goal is to prevent pirates and hackers from modifying the game's binary.
I have no idea why anyone would want to do that on the Nintendo Switch, though: Switch 1 doesn't have any headroom, and Switch 2's OS security hasn't been defeated yet.
It also frustrates datamining of secret client-side game mechanics, story spoilers, and unreleased content (good branch management is not a priority for some devs). Yeah, this wouldn't stand up to the best of the best, but not all game communities have a George Hotz, so it suffices for most cases.
The amount of work that goes into moats, for stuff that nobody will care about in 6 months, is kind of insane. I understand it for security reasons, but in video games? Just more bloat for nothing
> Just more bloat for nothing
playing an online game, especially if it is competitive, alongside a bunch of cheaters is not fun.
reducing the number of cheaters is not "nothing"
Security through obscurity is not a good strategy
people love repeating this little line without a single thought of their own.
security through obscurity is an effective defensive layer with a relatively low implementation effort. it raises the minimum effort required for bypass.
the quote you have parroted is only applicable when obscurity is the only defense layer. when obscurity is used in addition to other defensive layers, it is a great first line of defense.
oh fascinating. i just finished reverse engineering Aegis and now working on their newest Eidolon. pretty cool technology.
why bother?
I guess it’s mainly to sell the technology and the illusion that comes with that.
So, money, for supposed control. Which is not true of course
And this is insight from the "other" side :) https://www.unknowncheats.me/forum/overwatch/639855-overwatc...
What is the fps hit?
The reduction of Frames Per Second.
Yes, I think they're asking how big it is
Oh, of course… thanks for clarifying.