https://web.archive.org/web/20260402155236/https://www.redha...
Archive URL to original paper
>The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense (e.g. helping Ukraine fight off the Russian invasion).
Carving out the particular military engagements your company deems less than justified sounds nice but isn't workable in practice. You have to swallow the whole pill if you want to sell to the DoD.
Better to have smart bombs than dumb ones. Or rather, better to have 1 smart bomb than 1000 dumb ones spread across an entire city in order to pick off the particular building, vehicle, or person you want.
Especially AI-hallucination bombs that hit a park named "Police Park" because the model thinks it's killing policemen[1], or a children's school with "Shahed" in the name[2] because it thinks it has something to do with drones.
[1] https://x.com/MarioNawfal/status/2029575052535173364
[2] https://www.aljazeera.com/news/2026/3/6/elementary-school-in...
There's also a chasm of (non-)accountability.
You or your subordinates target an elementary school: that's a war crime.
Your "battlefield AI" targets an elementary school: software bug, it happens, can't be helped.
The software is never accountable, so the human running it is always accountable.
That is how it should be, not how it is.
This isn't even that new. Part of the motivation for building autonomous nuclear response programs during the cold war was specifically to remove accountability, and guilt, from human operators. But AI does bring it to a new level.
Your links talk about the places that were bombed, but I don't see anything apart from conjecture that this was the product of AI targeting.
Also, this vastly underestimates organizations that were able to locate most of Iran's leadership in their hiding places throughout the war, yet whose Farsi is suddenly so bad they need a Twitter account to tell them this is a park.
It's a popular conspiracy theory, without evidence and without any perspective on the information intelligence actually had. Using civilians as shields is well documented for the Iranian military and the groups it sponsors. For example, hospitals [1].
Shitty, but possibly a valid military target.
[1] https://www.gatestoneinstitute.org/8666/yemen-human-shields
This has nothing to do with AI, the school got hit because it was directly next door to a military base.
Channeling my inner Socrates:
You want consensus from non-experts for a plan to use 20 smart bombs.
Your opponent wants consensus for a plan to live-stream a demo of 1 smart bomb, and then use 19 dumb ones.
Your team has more expertise.
Your opponent's plan saves enough money to buy a better PR team than yours, and is still more cost effective than your plan.
Who wins?
That “smart” vs “dumb” distinction doesn’t apply here, though. What is being discussed has nothing to do with the ability to physically land a bomb in a precise location; that problem already seems to be solved reasonably well. “Smart” in this case means using ML/LLMs to select a target.
You can rationalize anything by only considering the upside relative to alternatives' downsides.
You might be right, but that's terrible.
Smart bombs are no good if they are directed by a dumb targeting system, dumb alcoholic accelerationist religious fanatic Secretary of War, or dumb narcissistic genocidal pedophile Presidents.
There is one more layer - America voted for this.
In fact, it didn’t. Trump continued to make “no new wars” a plank of his platform.
Some of his base will follow wherever he goes, but he would not have been elected without those who supported him on the basis of this (broken) promise.
> With that in mind, it seems Red Hat, owned by IBM, is desperately trying to scrub a certain white paper from the internet. Titled “Compress the kill cycle with Red Hat Device Edge”, the 2024 white paper details how Red Hat’s products and technologies can make it easier and faster to, well, kill people.
It appears IBM learned no lessons after WWII: https://en.wikipedia.org/wiki/IBM_and_the_Holocaust
That book will need a sequel soon.
IBM suffered no consequences for any of that so there were no lessons to learn. IBM dominated the computer industry from the 1960s-1980s ("Nobody ever got fired for buying IBM") and was a more brutal monopolist than any of the FANGAM corporations.
Ah, now I see where they got the name: https://en.wikipedia.org/wiki/Redcap
who let the Streisand effect out of its cage!?
"I give permission to IBM, its customers, partners, and minions, to use JSLint for evil."
I chuckled. This is, in fact, an actual quote; see [1] for the explanation.
[1] https://gist.github.com/kemitchell/fdc179d60dc88f0c9b76e5d38...
In evil mode it indents by mixing tabs and spaces.
> With things like the genocide in Gaza ...
Population: ~2,050,000. Density: ~15,456/sq mi.
Words have meaning, and their emotional force derives from that meaning. The knowing misuse of a term like “genocide” for its emotional force is manipulative sophistry.
Besides external PR, does anyone know how this affects internal morale?
Some of the earlier Red Hat people I knew would not be OK with working on weapons systems even under the most legitimate circumstances. They'd be much more opposed to collaborating with fascist regimes, and, I think, horrified by the idea of shoveling AI slop and grifter hype into life-and-death decisions.
Of course, the makeup of the tech industry has changed (the overall culture transitioning from hacker idealists to finance bros), and some IBM-ification of Red Hat has also happened. But I'd like to think Red Hat still attracts a more principled pool of talent than FAANG.
I dunno that 'removes from their website' is sufficient for 'trying to erase from the Internet'.
Can we rename this "Red Hat removes paper from website on using their software to 'shrink the kill-chain'"?
They still might pull an Anthropic move and send a C&D or DMCA to archive.org.
So the hat is red because of all that blood?
Was this written by an Iranian propaganda machine?
How could it be? The US has won the war against them many times over, to the point that they no longer exist.
> I don’t think there’s something inherently wrong with working together with your nation’s military or defense companies, but that all hinges on what, exactly, said military is doing and how those defense companies’ products are being used. The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense
The core purpose of a military is to destroy things and kill people, and the world is controlled by the people who can do that better than others. You can put all the "defense" and "disaster aid" lipstick on that you like but that doesn't change what they train for and what their real purpose is.
> and the world is controlled by the people who can do that better than others
Yes, welcome to Earth.
There's absolutely no morality in deciding to be weaker than you have to be. If you are eaten by a predator when you had the option not to be eaten, you're not some high-minded righteous peace-lover, you're simply dead.