Our Agreement with the Department of War

> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

My reading of this is that OpenAI's contract with the Pentagon only prohibits mass surveillance of US citizens to the extent that that surveillance is already prohibited by law. For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale. As I understand it, this was not the case with Anthropic's contract.

If I'm right, this is abhorrent. However, I've already jumped to a lot of incorrect conclusions in the last few days, so I'm doing my best to withhold judgment for now, and holding out hope for a plausible competing explanation.

(Disclosure, I'm a former OpenAI employee and current shareholder.)

9 hours ago | tfehring

OpenAI, the former non-profit whose board tried to fire the CEO for being deceptive, and which is no longer open at all, isn't exactly about ethics these days.

Even on a personal level: OpenAI has changed its privacy policy twice to let it gather data on me that it wasn't gathering before. A lot of steps to disable it each time, tons of dark patterns. And the data checkout just bugs out too; it's a fake feature to hide how much they're using everything you type to them.

9 hours ago | gentleman11

The coup against Altman looks prescient. They knew who he was.

4 hours ago | tootie

So why would we want them setting policy for the DoD? Laws are enacted through a fundamentally democratic process defined over hundreds of years. Why wouldn’t that be the way to govern use of tools?

Why would we want to trade our constitution for, effectively, “rules Sam Altman came up with”?

5 hours ago | eduction

Use its real name, the one orange shitler renamed it to: the department of war.

Why the fuck does the department of war get to dictate anything to a private organization?

Why does the constitution say that you have to let the government murder schoolgirls with your tools?

5 hours ago | Darvon

This is exactly what it says: the only restrictions are the restrictions that are already in law. This seems like the weasel language Dario was talking about.

8 hours ago | _alternator_

Laws that can be changed on a whim by "executive orders", or laws that apparently can be ignored completely, like international law.

8 hours ago | kivle

Like by an administration who is constantly ignoring and violating both domestic and international law?

Like by an administration that likes to act extrajudicially and ignore habeas corpus?

I wonder where we'd find such a government. Probably shouldn't give them the power to do anything legal OR "consistent with operational requirements". That's the power to do anything they want.

7 hours ago | godelski

They do note that their contract language specifically references the laws as they exist today.

Presumably, if the laws become less restrictive, that does not impact OpenAI's contract with them (nothing would change), but if the laws become more restrictive (e.g. certain loopholes in processing Americans' data get closed), then OpenAI and the DoD should presumably^ not break the new laws.

^ we all get to decide how much work this presumably is doing

6 hours ago | wrsh07

> They do note that their contract language specifically references the laws as they exist today.

Where?

> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

Sounds like it's worded to specifically apply to whatever law is currently applicable, no?

6 hours ago | tredre3

No, executive orders can't change law, and international law, unless ratified by Congress, is not democratically legitimized and not applicable law in the US to begin with.

8 hours ago | shikon7

You mean like the tariffs congress didn't approve?

Dictators rarely gain power legitimately, and always keep it with violence.

7 hours ago | pixl97

There's a stark difference between de jure and de facto here. Executive orders wield brazen, tyrannical effects and are often reined in late or never.

8 hours ago | wyldfire

We just started a war with Iran without congressional approval or briefing, so I'm not sure if law has meaning anymore.

7 hours ago | tapoxi

War Powers Resolution. Obviously, there's a law, one that multiple presidents have used. Congress can change this law, but there is a law that does give the POTUS this authority.

7 hours ago | jonnybgood
[deleted]
3 hours ago

LOL. you really believe that?

4 hours ago | trinsic2

Not that this means the big AI corps should relax their values (it truly doesn't), but I would be extremely surprised if the DoD/DoW doesn't have anyone capable of fine tuning an open weights model for this purpose.

And, I mean, if they don't, gpt 5.3 is going to be pretty good help

Given the volume, fine-tuning a small model is probably the only cost-effective way to do it anyway.

6 hours ago | wrsh07

Contrary to benchmarks, open weight models are way behind the frontier.

6 hours ago | ajyoon

My point is that you don't want a big model for the kind of analysis being discussed here

Even if they were paying frontier prices they would be choosing 5 mini or nano with no thinking

At that point, a fine-tuned open-source model is going to be on the Pareto frontier.

3 hours ago | wrsh07

> For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale.

Third Party Doctrine makes trouble for us once again.

Eliminate that and MANY nightmare scenarios disappear or become exceptionally more complicated.

4 hours ago | caseysoftware

People often overlook how all the NSA-related activities and government overreach come with a nice memo from officials stating how "lawful" the questionable actions they're taking are.

7 hours ago | operator_nil

Even worse is the kill-bot policy. The eventual-human-in-the-loop clause, a.k.a. yolo mode or --dangerously-skip-permissions.

Imagine arming chatgpt and letting it pick targets and launch missiles from clawdbot.

6 hours ago | irthomasthomas

This is hilarious. I see their lawyers got together to find the most confusing way they could word it to throw people off and let everybody claim it says whatever's best for their own PR.

"Shall not be used as consistent with these authorities"?

So they shall only be used inconsistently with these authorities? That's the literal reading if you assume there's no typo.

Or did they forget a crucial comma that would imply they shall not use it, to the extent this provision is consistent with their authorities?

Or did they forget the comma but it was supposed to mean that they shall not use it, to the extent that not-doing so would be consistent with their authorities?

You gotta hand it to the lawyers; I'm not sure I could've come up with wording this deliberately confusing if they'd given me a million dollars.

5 hours ago | dataflow

Thanks for speaking out, and yes, that was my interpretation as well, which I outlined below. This is nothing more than sugar-coating on "lawful use", despite what OpenAI says and the contractual "safeguards" they tout, like the FDEs.

9 hours ago | eoskx

You are exactly correct and this is what Dario has been speaking up about.

He calls this exact scenario out in last night's interview: https://youtu.be/MPTNHrq_4LU

8 hours ago | xvector

i.e. Combing through public forums on the internet looking for evidence of thoughtcrime, however, is fair game. The Trump admin will undoubtedly use tools like this to compile a list of political enemies or undesirables, which they will then use to harass people or selectively restrict individual rights. They're already doing this, and this is just going to make it easier for them.

6 hours ago | davesque

File your CCPA delete requests now while you can still disappear on the Internet!

20 minutes ago | derwiki

Yes. And I'm sure the next administration will as well. These things only ratchet in one direction.

6 hours ago | pkaeding

Bingo.

6 hours ago | popalchemist

  > to the extent that that surveillance is already prohibited by law.
The problem with government contracts where you say "can't do anything illegal" is that THEY DECIDE WHAT IS LEGAL. We're lucky we live in a system where you can challenge the government, but whichever side of the aisle you're on, I think you believe people are trying to dismantle that feature (we just disagree on who is doing it, right?).

<edit>

THAT'S EXACTLY WHAT DARIO WAS ARGUING, and it is exactly what the DoD wanted to get around. They wanted to use Claude for all legal purposes and Anthropic said no, for moral reasons.

Also notice the subtle language in OpenAI's red lines. "No use of OpenAI technology for mass *domestic* surveillance." We've seen how this was abused by the NSA already since normal communication in the Internet often crosses international lines. And what they couldn't get done that way they got around through allies who can spy on American citizens.

</edit>

I think we need to remember that legality != morality. It's our attempt to formalize morality but I think everyone sees how easy it is to skirt[0]

  > I believe this implies that the DoW can procure data on US citizens en masse from private companies - including
Call your senators. There's a bill in the Senate explicitly about this. Here's the EFF's take [1]. IMO it's far from perfect, but it's an important step, and I think we should talk about it more. I have problems with it too, but nothing in it prevents things from continuing to get better. It's too easy to critique and then do nothing. We've been arguing for over a decade; I'd rather take a small step than a step back.

  > If I'm right, this is abhorrent.
Let's also not forget WorldCoin[2]. World (blockchain)? World Network?

I have no trust for Altman. His solution to distinguishing humans from bots is mass biometric surveillance. This seems as disconnected as the CEO of Flock or that Ring commercial.

Not to mention all the safety failures. Sora was released allowing real people to be generated? Great marketing. Glad they "fixed it" so quickly...

There's a lot happening now and it's happening fast. I think we need to be careful. We've developed systems to distribute power but it naturally wants to accumulate. Be it government power or email providers. The greater the power, the greater the responsibility. But isn't that why we created distributed power systems in the first place?

Personally I don't want autonomous, unquestioning killbots under the control of one person or a small number of people. Even if you believe the one in control now is not a psychopath (-_-), you can still agree that it's possible for that type of person to get control. Power corrupts. Things like killing another person should be hard, emotionally. That's a feature, not a flaw. Soldiers questioning orders is a feature, not a flaw. By concentrating power you risk handing that power to those who do not feel. We're making Turnkey Tyranny more dangerous.

[0] and law is probably our best attempt to make a formal system out of a natural language but I digress

[1] https://www.eff.org/deeplinks/2024/04/fourth-amendment-not-s...

[2] https://en.wikipedia.org/wiki/World_(blockchain)

7 hours ago | godelski

As a non-US person I take absolutely no solace in sama's statement (even if I believed a single word that snake has ever uttered, which I do not).

7 hours ago | mvdtnz

> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.

The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted.

OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.

I personally can agree with both, and I do believe that the Administration's behavior towards Anthropic was abhorrent, bad-faith, and ultimately damaging to US interests.

10 hours ago | piker

Can their solution recommend shooting at combatants lost at sea?

This is key because it's the textbook example of a war crime. It's also something that the current administration has bragged about doing dozens of times.

More succinctly: who decides what is legal here? OpenAI, the Secretary of Defense, or a judge?

10 hours ago | bertil

  > More succinctly: who decides what is legal here?
Why are people concentrating on legality? Look at the language

  | The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.
It's not just "legal". Their usage just needs to be consistent with one of

  - legal
  - operational requirements
  - "well-established safety and oversight protocols"
"Operational requirements" might just be a free pass to do whatever they want. The well-established-protocols clause seems like a distraction from the second condition.

  > who decides what is [consistent with operational requirements] here?
The Secretary of Defense. The same person who has directed people to do extrajudicial killings. Killings that would be war crimes even if those people were enemy combatants.

There's also subtle language elsewhere. Notice the word "domestic" shows up between "mass" and "surveillance"? We already have another agency that's exploited that one...

7 hours ago | godelski

As an English speaker (not a lawyer), I'd have read the "and" in "applicable law, operational requirements, and well-established safety and oversight protocols" to mean that all three were required.

Why do you read that to mean just one is required?

2 hours ago | hoistbypetard

The more relevant question is who is held accountable for the war crimes? OpenAI seem pretty confident it won't be OpenAI.

I can see the logic if we were talking about dumb weapons--the old debate about guns don't kill people, people kill people. Except now we are in fact talking about guns that kill people.

9 hours ago | fluidcruft

> This is key because it's the textbook example of a war crime. It's also something that the current administration has bragged doing dozens of times.

> More succinctly: who decides what is legal here? OpenAI, the Secretary of Defense, or a judge?

Yeah, there's a pretty strong case that anyone claiming to trust that the administration cares about operating in good faith with respect to the law is either delusional or lying.

9 hours ago | saghm

Wait, one of those contracts says you may not build the Terminator.

The other says you may build the Terminator if the DOD lawyers say it’s okay.

This is a major distinction.

9 hours ago | coffeefirst

The DOD lawyers or the Secretary, right?

3 hours ago | actionfromafar

100% this - totally stealing this analogy.

9 hours ago | eoskx

It's not an analogy but an example.

8 hours ago | pamcake

The language allows for the DoD to use the model for anything that they deem legal. Read it carefully.

It begins “The Department of War may use the AI System for all lawful purposes…” and at no point does it limit that. Rather, it describes what the DOW considers lawful today, and allows them to change the regulations.

As Dario said, it’s weasel legal language, and this administration is the master of taking liberties with legalese, like killing civilians on boats, sending troops to cities, seizing state ballots, deporting immigrants for speech, etc etc etc.

Sam Altman is either a fool, or he thinks the rest of us are.

8 hours ago | _alternator_
[deleted]
7 hours ago

Both. He is a fool who thinks he knows better than anyone else.

8 hours ago | coldcode

That language is not consistent with:

> No use of OpenAI technology to direct autonomous weapons systems

10 hours ago | NickNaraghi

That depends on whether you view the cited authorities as already prohibiting that usage. I don't have an opinion on that, but some folks on both sides of the aisle might have strong arguments that they do.

10 hours ago | piker

It's still not consistent. OpenAI made a statement that simply isn't true. They agree to all lawful use, INCLUDING using it to deploy weapons as long as it's legal. It happens to not be legal at the moment, but that doesn't mean it can't be changed and authorized.

9 hours ago | tensor

That's a fair point, and I'm not so much defending sama's statements after the fact but rather trying to rationalize the OpenAI position.

9 hours ago | piker

OpenAI and sama are literally saying they are fine with facilitating (and even performing) any scale of killing and surveillance as long as they're not held accountable.

8 hours ago | pamcake

Rationalize the OpenAI position? Sam Altman gets money from DoD. He has no morals. He doesn't care if people die because of his product. It's not hard.

9 hours ago | miltonlost

We live in a world of Trump-esque "truths" where if you claim something once, nothing subsequent matters.

Not surprised to see a guy like Altman adopt the strategy

10 hours ago | purple_ferret
[deleted]
10 hours ago

The word "legal" is doing all of the heavy lifting, considering the countless adjudicated-illegal things that the government is doing publicly. What happens behind classified closed doors?

I guess you can consider it a moral stance that if the government constantly does illegal things you wouldn't trust them to follow the law.

I know that's not what Anthropic said but that's the gist I'm getting.

9 hours ago | avaer

Does "legal" include international law, which the US has broken numerous times in the last two days?

8 hours ago | kivle

> This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.

https://constitution.congress.gov/constitution/article-6/

6 hours ago | soraminazuki

No, this is very devious and insidious. What the executive branch believes is legal is the real agreement here. Trump can say anything is legal and that's that. There is no judicial oversight; there are no lawyers defending the rights of those who are being harmed. Trump can tell the Pentagon "everyone in Minnesota is a potential insurrectionist, do mass surveillance on them under the Patriot Act and the Insurrection Act".

Mass surveillance doesn't require a warrant; that's why they want it, that's why it's "mass". Warrants mean judicial oversight. Anthropic didn't disagree with surveillance where a court (even a FISA court!!) issued a warrant. Trump just doesn't want to go through even a FISA court.

This is pure evil from Sam Altman.

Is anyone listing these people's names somewhere for posterity's sake? I'd hate to think this would all be forgotten. From Altman to Zuckerberg, if justice prevails, they'll be on the receiving end of retribution.

10 hours ago | notepad0x90

That view does seem to be consistent with Anthropic's. It's sad if true, since it implies a belief that the system cannot be just in modern contexts.

10 hours ago | piker

Mass surveillance is explicitly unlawful in the US; it is in the Bill of Rights. By definition it is injustice under the law. Even for terrorists in the US, they have to go through a FISA court and get warrants.

Consider this: the Bill of Rights stipulates that a soldier cannot be stationed on your property in times of peace, but in times of war it will be allowed. It makes exceptions for times of war. But even in times of war, the 4th Amendment's search-and-seizure protection doesn't have an exception. Even in times of insurrection and rebellion. To deliberately violate that for personal and political reasons is itself treason. With that intent alone, even without action, it invalidates all legitimacy the government has. If a clause in a contract is broken, the contract is broken. The Bill of Rights is the contract between the people and their government that gives the government its power to rule, in exchange for those rights. With the contract explicitly, deliberately, and with provable malicious intent broken, the whole agreement is invalidated.

I'll even say this, the US military itself is on the hook if they stand by and let this happen.

10 hours ago | notepad0x90

On the hook for what?

The current US government has a fundamentally different ontology for the derivation of human rights.

Whereas you and I likely agree that human rights are inalienable because they are derived from the universal nature of human experience, the administration believes that human rights begin and end with them, the state. When they're the ones able to affect the world with violence, it doesn't matter who's on the hook. The US electorate thought they could heal a status wound with authoritarianism instead of therapy, and everyone else is paying the price.

9 hours ago | kelseyfrog

On the hook for whatever comes after. Best-case scenario, Democrats will peacefully take control again and pretend to forget about Sam's complicity. But he'll still face civil suits; I hope he does personally, as well as the company itself.

Worst case, the current admin will make Nazis look like cosplayers, and within a decade or so he'll be standing next to other CEOs facing a tribunal in front of whatever entity managed to topple the former regime, under war-crime terms that are yet to be defined and for atrocities which, if history teaches us anything, will be so horrific that our current ability to imagine atrocities is insufficient to allow us to speculate on their nature.

In short, whatever Trump does with OpenAI, Sam Altman is in the "whatever Trump wants to do was lawful" camp. Even then, perhaps the next regime will fail to learn from history and focus on rebuilding, but if they do learn from history they'll understand that you really can't hold back when it comes to these things. We're in this mess because of the failure to sufficiently punish the Nazis and the Confederates in the US, both of which lasted only about half a decade, by the way. It isn't enough to teach people how horrible the Nazis and Confederates were; the German approach is sensible, but a more extreme approach might be required.

Funny thing is, this might just save OpenAI from total collapse. But if this is the price of keeping the economy alive, even at my own personal cost I hope the economy collapses completely, along with these companies and this regime.

6 hours ago | notepad0x90

I'm so sorry, but the closure of justice will never occur. The United States is incompatible with its existence.

As much as a third reconstruction is desperately needed, my desire for its existence is not materially tied to it being rendered into the world.

6 hours ago | kelseyfrog

> I'll even say this, the US military itself is on the hook if they stand by and let this happen.

That would most definitely not be the Constitutional recourse. Or a sensible approach. If that happens, the Constitution is past tense.

Congress and the Supreme Court are the recourse. If they don't uphold the Constitution, then violence, or even a non-violent military coup, however well intended, is not going to put the splattered egg back together again.

The last two and a half decades have seen all four presidents, Congress, the Supreme Court, and both parties allow blatantly unconstitutional surveillance to become the norm (evolving an adaptive fig leaf of intermediaries), and presidential military actions to entirely blur out the required Congressional oversight. That the weakening of loyalty to the Constitution has been pervasive on those serious counts is one of the reasons it has been so easy to undermine further.

When governing bodies become familiar with the convenient practice of "deciding" what the constitution means, without repercussions, that lost respect becomes very hard to reinstate.

9 hours ago | Nevermark

They swore an oath to defend the Constitution of the US against enemies both foreign and domestic. It is entirely lawful for them to fulfill that duty.

If the commander in chief and the civilian administration are clearly and unquestionably violating the Constitution, they are no longer legitimate. If they are acting to harm the American people, acting as agents of a foreign enemy or as a domestic enemy, then they are not only illegitimate, but the military is oath-bound to fight them with necessary force.

> That the weakening of loyalty to the Constitution has been pervasive on those serious counts, is one of the reasons it has been so easy to undermine further.

I can agree with that, that is because the people who swore an oath to defend it have not done so. They wave flags like it's a sports team they're cheering for.

Ultimately, the design of the Constitution is such that either the people taking up arms, or a patriotic military resisting the government, would serve as the ultimate recourse. The system of checks and balances works so long as consequences are still a thing. If in the 1800s a president had decided to do half the things Trump did, anyone could have shot his face off and gotten away with it without consequence. These things aren't practical anymore.

The military has the duty to resist unlawful orders. But if a Russian agent usurped the US government and civilians were incapable of doing something about it, then that's what the military is there for. The military doesn't exist to bomb foreign countries thousands of miles away; it is there to defend the homeland. The original idea was that if laws are no longer a thing (obeyed by the government), the lawlessness would be too terrifying for those in power; therefore lawfulness is in their interest.

6 hours ago | notepad0x90

Right, which is probably the point made by the negotiators on behalf of the US Government. "We don't want Anthropic's standard, we want the Constitution."

9 hours ago | piker

Maybe I'm misunderstanding, but are you taking the government's side? Anthropic's standard was the Constitution's. The executive branch has no authorization under US law to perform surveillance of any kind on its own. OpenAI will now be breaking US law; Anthropic simply decided to obey US law.

The US government can update its laws and come back to Anthropic, or do what it just did.

9 hours ago | notepad0x90

No, I'm not taking the government's side. I'm telling the government's side. It's probably true that the executive branch can't do those things, but it may be able to do so in the future. Thus, Anthropic's rule would then be inconsistent with the laws applying to the government.

> The US government can update its laws and come back to Anthropic

No, this I do take issue with. It's the people who update the U.S. government's laws.

9 hours ago | piker

The people via their elected reps, i.e. the government. The government is of the people and by the people. They're not different, if democracy is truly working.

> but it may be able to do so in the future.

You don't obey laws of the future; you obey laws of today. Companies have an obligation to follow the laws as written today. Not only that: as Americans, they, like all Americans, have a patriotic and civic duty to resist attempts to bypass or undermine the constitution of their country. You literally can't be patriotic or loyal to your country without doing so; it is what constitutes the country.

It's not like Anthropic can't update their guardrails and contracts once the laws of the land are updated. They simply resisted a criminal and treasonous abuse of power.

7 hours ago | notepad0x90

> Trump can tell the pentagon "everyone in minnesota is a potential insurrectionist, do mass surveillance on them under the patriot act and the insurrection act".

This is just incoherent. You can't have US companies fix an unhinged US government.

If the government runs wild, there are some serious questions to be asked at a state level about how that could happen, how to fix it quickly, and how to prevent it in the future. But I should hope none of them concern themselves with the ideas of individual company owners, because if the government can de facto do what it wants regardless of legality, the next thing this government does could simply be pointing increasingly non-metaphorical guns at individual AI company functionaries.

9 hours ago | jstummbillig

> This is just incoherent. You can't have US companies fix an unhinged US government.

Which part? No one expects them to fix the government; in fact they should stay far away from it. However, they have a duty to obey the law and to be patriotic. All companies must resist attempts by the government to betray its people, because the government derives its authority from the people; in betraying them it becomes an illegitimate enemy of the people instead of their legitimate government.

> because if the government can de fact do what it wants regardless of legality the next thing that this government does could simply be pointing increasingly non-metaphorical guns at individual AI company functionaries.

It feels like you and half the country never even watched movies about Nazi Germany. The government can do whatever it wants, but whether it is companies, individuals working for it, or soldiers under orders, the government's authority does not excuse their participation. The government can't do anything at all on its own; it needs people to do it. If Obama had wanted Anthropic to let its models aid al-Qaeda in attacking America, should Anthropic have said "oh well, since you're the government, go ahead"? This is the same thing. Ever heard the phrase "enemies foreign or domestic" in the swearing of oaths? Company executives are beholden to the laws of the country they operate in. I mean, with the Nazis, at least their orders, and the orders given to companies under their regime, were lawful. Even then it was not an excuse; they just changed the laws to make their orders lawful. Right now, we have laws and the government is breaking them; even "I followed lawful orders" isn't an excuse. Sam Altman is complicit in the violation of the American constitution and the betrayal of its people.

If all else fails, I expect the government to just train their own models. In which case, I'd say the engineers working in that effort should have resisted.

6 hours ago | notepad0x90

[dead]

10 hours ago | s5300

> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.

What if Anthropic's morals are "we won't sell someone a product for something that it's not realistically capable of doing with a high degree of success"? The government can't do something if it's literally impossible (e.g. "safe" backdoors in encryption), but it's legal for them to attempt it even when failure is predetermined. We don't know that's what's going on here, but you haven't provided any evidence sufficient to differentiate between those scenarios, so it's fairly misleading to phrase it as fact rather than conjecture.

9 hours agosaghm

Isn't it more accurate here to consider OpenAI and Anthropic as service providers rather than a manufacturer of product?

8 hours agopamcake

The service they provide is on-premises deployment, I guess. But what they are deploying is a product.

8 hours agoLoganDark

The relevant (unanswered?) question for this thread is who's operating and managing that deployment, and to what extent the provider (or subcontracted FDEs) is involved in integrations. I would be surprised to learn the deployment is actually independently operated. Sure, the machinery can be considered a product, but the associated service and support engagements are at least as relevant to take into account.

6 hours agopamcake

Does the US have any laws that require human control of autonomous weapons? Isn’t that a contradiction?

10 hours agodonmcronald

Didn't fully follow the saga, but isn't their "imposing their own morals" just "we do not want to allow you to let our AI go on an unsupervised killing spree"?

9 hours agoserial_dev

The United States Military, in its official capacity, has been performing illegal, extrajudicial assassinations of civilians in international waters for months now.

We have been sharing technology and weapons with Israel while it prosecutes a genocide in contravention of both US and International law.

We are currently prosecuting a war on Iran that is illegal under both US and International law.

Any aid given to such a force is to underwrite that lawlessness and it shows a reckless disregard for the very notion of a 'nation of laws'.

When OpenAI says, 'The Military can do what is legal', full in the knowledge that this military has no interest in even pretextual legality, one has to wonder why you hold that you 'agree with' both of these decisions.

Do you believe the flimsiest of lies in other aspects of your life?

9 hours agolkey

Even if the autonomous weapon systems ‘perform as intended’, this does not in any way mean that they are not an enormous danger.

Secondly, as that is department policy and not a law or regulation, they appear to be saying that the cited directive is presently the only thing standing between the DOD and the use of autonomous weapons.

If that’s the case how hard is it to change or alter a directive?

9 hours agotwobitshifter

And who decides what's legal? The US was collecting illegal tariff revenue for ten months. Does OpenAI need to wait for the Supreme Court to strike down autonomous killbots?

10 hours agoHamuko

That's the devil in the details. Sam Altman's insult upon injury: treating the public as idiots on top of being a collaborator. The answer to your question is that the government decides what is legal, as in the executive branch; in the Pentagon, the commander in chief decides. So essentially, they can do whatever they want so long as they call it legal.

As I said in a sibling comment, mass surveillance cannot be considered legal in the US under any context: not war, emergency, terrorism, nuclear strike, national security, imminent danger to the public, etc. Targeted surveillance can be; scoped surveillance of a group of people can be; but not mass surveillance. In other words, Sam Altman is saying "this thing can never be legal short of a constitutional amendment, but so long as Trump says it is, we'll look the other way."

What a two-faced <things i can't say on HN> this guy is!

I really hope Google poaches all his top engineers. If any of you are reading this, I ask you: I get working for money, but will Google or Anthropic offer you all that much less? Consider the difference in pay when you put a price on your conscience.

9 hours agonotepad0x90

Google? They have a terrible track record on upholding moral principles. They helped Chinese censorship, wrote software for American killer drones, and offered their services to genocidal regimes. They fired dissenting employees. They are one of the worst companies to be rooting for.

6 hours agosoraminazuki

This isn't about moral principles. In China, censorship is legal. In the US, mass surveillance is not. Even for those "genocidal regimes," it was lawful use. Even now, both Anthropic and OpenAI agree that their models can be used in war and in censorship, just like with China, since those things are lawful. Even with genocide, from what I understand, the safeguard is that humans have to be in the loop, not that it won't aid the efforts.

I don't expect companies to be moral, but I do expect them to be patriotic and to obey the law. And I also expect the government to punish them sufficiently when they fail to do so. The morality part is for the people to legislate, or to enact into law some other way, to reflect their beliefs. Companies don't get a vote at the ballot box, and they certainly are not agents for moral arbitrage between a government and its people.

6 hours agonotepad0x90

Yes, I think that would be the idea. Again, not my view, but we give police officers license to use lethal force and often the victims of their abuse of that power have no recourse because they're already dead.

10 hours agopiker

> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.

Excuse me, but what a fucked up perspective. "Impose its own morals into the use of its products"? What happened to "We give each other the freedom to hold beliefs and act accordingly unless it does harm"? How on earth did it come to something where the framing is that anyone is "imposing" anything on another simply by not providing services or a product that fits somebody else's need? That sounds like you're buying into the reversed victim and offender narrative.

And this is not about whether one agrees with their beliefs. It is about giving others the right to have their own.

10 hours agorendx

I have the right not to sell poison to someone who I have reason to believe will use it to kill a third party. The idea of simply trusting the patron to be responsible makes sense when the patron is anonymous or a new contact; it's generally good to assume good intentions in the absence of evidence, I think. But the government is not anonymous enough to get this treatment.

9 hours agocoeneedell

Governments have a long, long history of using "poison to kill a third party", to use your analogy.

9 hours agojxf

The GP's use of the word "impose" didn't seem pejorative to me or suggest that Anthropic is the offender and the government is the victim. I think you're reading a lot into a simple word choice, and this response seems way too hostile.

10 hours agomarcellus23

Are you really going to pretend that “impose their morals” is a completely value-neutral statement?

9 hours agojdgoesmarching

It certainly was intended as such. In a commercial transaction, that's what they're doing. They don't think it's moral to use their product in certain ways. They are thus prohibiting their customer from using it in such ways.

But, as I've said, I tend to agree with both Anthropic and the Administration's positions. What was wrong here is that rather than just terminating the contract, the Administration went nuclear.

9 hours agopiker

It seems value-neutral to me. It's descriptive. Particularly for anyone who understands that different groups of people will legitimately disagree on many moral questions.

9 hours agocrazygringo

What would be the value neutral way to phrase it?

9 hours agokcplate

"Anthropic wanted its product to not be used in ways that contradict its ethics".

"Impose" makes it sound like Anthropic is being hostile here. And also, I don't think this is a situation that calls for moral relativism.

9 hours agoAntiDyatlov

A "simple word choice"?? This isn't just about the single word "impose", read the whole post:

> Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted.

> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.

So first off, regarding that first paragraph, didn't any of these idiots watch WarGames, or heck, Terminator? This is not just "oh, why are you quoting Hollywood hyperbole": a hallmark of today's AI is that we can't really control it except with some "pretty please, we really really mean it, be nice" in the system prompt, and even experts in the field have shown how that can fail miserably: https://www.tomshardware.com/tech-industry/artificial-intell...

Second, yes, I am relieved Anthropic wanted to "impose" their morals because, if anything, the current administration has been loud and clear that the law basically means whatever they say it does, and they will absolutely push it to absurd limits, so I now regard "legal limits" as absolutely meaningless. What is needed are hard, non-bullshit statements about red lines. Anthropic stood by those, while Altman showed what a weasel he is and acceded to the demands.

9 hours agohn_throwaway_99

>Excuse me, but what a fucked up perspective. "Impose its own morals into the use of its products"?

>How on earth did it come to something where the framing is that anyone is "imposing" anything on another simply by not providing services or a product that fits somebody else's need?

The Department of Defense in particular has a law on the books allowing it to force a company to sell it something. They are generally more than willing to pay a pretty penny, so it hardly ever needs to be used, but I'd be shocked if any country with a serious military didn't have similar laws.

So you're right when it comes to private citizens, but the DoD literally has a special carve-out on the books.

A lawsuit challenging it would have actually been insane for Anthropic, because they would have had to argue "we're not that special, you can just use someone else" in court.

A clearer example: what would you expect to happen if Intel and AMD said their chips can't be used in computers that are used in war?

9 hours agoApolloFortyNine

But it's not a national emergency. It's not a time of war. And there is a difference between demanding to be a customer, and demanding that you change your products because they would like them to be a different way. That is actual conscription.

For many decades, the DoD has used a carrot to get what it wants. This is a stick.

8 hours agoconvolvatron

I'd like to order one remedial first amendment education for this rage baiting user, who appeared fully formed from a conservative forum circa 2008.

9 hours agolkey

Nobody is saying that Anthropic has to shut down. They’re just saying that nobody taking government money can pay Anthropic for their service as a part of that contract. Anthropic still has the right to exist on their own terms, but their business model is based on rapidly-increasing enterprise subscriptions, which included public sector spending.

If Anthropic can survive on open source contributors shelling out $200/mo and private sector companies doing the same, the government wishes them well. But surely you agree the government has a right to determine how its budget is appropriated?

9 hours agonickysielicki

Well, it depends. Given that the federal government constitutes 20% of the US economy, telling federal agencies they cannot contract with someone because that someone is adversarial to the USA is indeed pretty severe, especially when in reality they are not adversarial. We have no choice but to pay taxes and make the federal government 20 percent of our economy; no single company or other entity comes close. And extending the ban to everyone who has a government contract probably makes it the majority of the economy. So it is not at all equivalent to a private company making a choice.

9 hours agospecialp

> When in reality they are not adversarial.

This is obviously subjective, and the only subject that matters in this case is the leadership at the DoD.

> We have no choice but to pay taxes and make the federal government 20 percent of our economy. There is no single company or any other entity that is close. And extending it to everyone who has a government contract probably makes it the majority of the economy.

I, too, hate big government and the all-powerful executive branch. Welcome to my tent. Let’s invent a time machine together so we can elect Ron Paul in 2008 and nip this in the bud.

Until then, this is what we’re stuck with.

9 hours agonickysielicki

> But surely you agree the government has a right to determine how its budget is appropriated

I think the government doesn't have rights, it is my elected representative. And I do not agree with it trying to punish a company for not agreeing to contract terms.

9 hours agorootusrootus

My interpretation of the difference is more like: Anthropic wanted the synchronous real-time authority to say "No we wont do that" (e.g. by modifying system prompts, training data, Anthropic people in the loop with shutdown authority). OpenAI instead asked for the asynchronous authority to re-evaluate the contract if it is breached (e.g. the DoD can use OpenAI tech for domestic surveillance, but there's a path to contract and service termination if they do this).

If my read is correct: I personally agree with the DoD that Anthropic's demands were not something any military should agree to. However, as you say, the DoD's reaction to Anthropic's terms is wildly inappropriate and materially harmed our military by forcing all private companies to re-evaluate whether selling to the military is a good idea going forward.

The DoD likely spends somewhere on the order of ~$100M/year with Google; but Google owns a 14% stake in Anthropic, who spends at least that much if not more on training and inference. All in all, that relationship is worth on the order of ~$10B+. If Google is put into the position of having to decide between servicing DoD contracts or maintaining Anthropic as an investee and customer, it's not trivially obvious that they'd pick the DoD unless forced to with behind-the-scenes threats and the DPA. Amazon is in a similar situation; it's only Microsoft whose contracts with the DoD are large enough to make the decision obvious. Hegseth's decision leaves the DoD, our military, and our defense materially weaker by both refusing federal access to state-of-the-art technology and creating a schism in the broader tech ecosystem where many players will now refuse to engage with the government.

Either party could have walked away from negotiations if they were unhappy with the terms. Alternatively: the DoD should have agreed to Anthropic's red lines, then constrained/compartmentalized their usage of Anthropic's technology to a clearly limited and non-combat capacity until re-negotiation and expansion of the deal could happen. Instead, we get where we're at, which is not good.

IMO: I know a lot of people are scared of a fascist-like future for the US, but personally I'm more fearful of a different outcome. Our government and military have lost all capacity to manufacture and innovate. It's been conceded to private industry, and it's at the point where private industry has grown so large that companies can seriously say "ok, we won't work with you, bye" and it's just, like, fine for their bottom line. The US cannot grow federal spending and cannot find a reasonable path to taxing or otherwise slowing down the rise of private industry. We're not headed into fascism (though there are elements of that in the current admin): we're headed into Snow Crash. The military is just a thin coordination layer of operators piecing together technology from OpenAI, Boeing, Anduril, Raytheon. Public governments everywhere are being out-competed by private industry, and in some countries it feels like industry merely tolerates the government because it still has some decreasing semblance of authority; in the US especially, that semblance of authority has been on a downward trend for years. Google's revenue was 7% of the US Federal Government's revenue last year. That's fucking insane. What happens when federal debt becomes unserviceable? When Google or Apple or Microsoft hit 10%, or 15%? Our government loses its ability to actually function effectively, and private industry will be there to fill the void.

9 hours ago827a

Not great? Seems kind of loose language? It isn't OpenAI saying no autonomous weapons use, but only that use must be consistent with laws, regulations, and department policies: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities."

More of the same here. No wonder the DoD signed with OpenAI instead of Anthropic. Delegating morality to the law when you know the law is not adequate seems like "not a good thing".

"For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law."

10 hours agoeoskx

Exactly, they're letting the lawless administration decide what the lawful purposes and the policies in general are.

The "human approval" will be someone clicking a YES button all the time, like Israeli officers did in the Gaza bombing.

10 hours agoarppacket

"Vibe killing"

9 hours agokingo55

Wow. This word pair will surely become part of common language soon.

I’d say we are making this simulation quite interesting, aren’t we.

6 hours agomichaelteter

Reading that phrase made me physically shudder.

5 hours agoarppacket

Saying that an entity with the power to make its own laws can use something for "all lawful purposes" is saying they can use it for anything.

10 hours agozmmmmm

It's a bit worse: in the case of mass surveillance, they can't just make their own law; they would need a constitutional amendment, ratified by three-fourths of the states.

Aiding someone when you know they're trying to break the law is conspiracy to break the law. OpenAI is culpable. You can't sue the government in many cases, but you can sue OpenAI.

9 hours agonotepad0x90

Are you saying we can't trust the words of a convicted fraudster?

8 hours agotombert

Exactly. And not only can they make their own rules, but they can draft and enforce them effectively in secret.

9 hours agofiatpandas

I don't think Anthropic is a saint that will never do anything unethical. I don't think ChatGPT is any better or worse.

But I do think my cancelling ChatGPT so I can try Claude, at this time, sends the message I want to send, which is why I did it.

9 hours agoButtons840

It's also good to demonstrate to these companies that we're willing to move. If these companies know their entire userbase will just pack up and move at the first controversy, there won't be any controversies.

9 hours agoButtons840

Consumer actions are meaningless here. If Altman can become Trump’s new best friend (can’t wait to watch the Altman/Musk drama), there will be so much public money directed toward OpenAI that they can stop wasting their time on the puny people.

6 hours agomichaelteter

I also canceled my subscription, and acknowledge it won’t make a difference. Feels like the early days of Facebook when people threatened to quit if they didn’t bring back the old design.

16 minutes agoderwiki

How's Claude for non-coding tasks? For example, using it as a Google substitute for trivial questions, like a recipe or a phone review.

Genuinely asking, because I might follow your steps.

8 hours agokace91

Claude cannot search Reddit so it is dreadful for search cases.

7 hours agoprodigycorp

Not including Reddit probably boosts its IQ 25 points.

3 hours agombac32768

It's been very good for me. I don't even open claude.ai or use Kagi Assistant, even though I'm paying for it and have access to basically all the models. I interact pretty much exclusively via Claude Code. My recipe question turned into a recipe-tracking project and recommendation engine designed to force me to try making new things that expand my skills. I've also had good luck getting gluten/dairy alternatives for recipes, since that's now a fact of life I have to deal with via my wife.

For product reviews, you've definitely got to make sure it's searching for sources and not just relying on outdated data. Some brands used to be very good and are today just coasting on their reputation. This is where phrases like "research this deeply" help it break out of the baked in biases.

7 hours agotstrimple

And a nice bonus is that Claude is way better than ChatGPT right now anyway

9 hours agoTrasmatta

How so? It's unstable, like floating ice.

9 hours agojimmydoe

Very very large floating ice is fairly stable

15 minutes agoderwiki

I just changed last night and honestly I can't tell much of a difference.

I'm not really complaining, it seems fine, but I'm not seeing the "way better" part that people keep saying.

8 hours agotombert

It's not sycophantic and has a much better "voice."

8 hours agosolenoid0937

Not that I’m a friend of OpenAI, but ChatGPT has relatively fine-grained “personalization” options, and it was never sycophantic with the “efficient” tone for me. Rather the opposite, sometimes it seemed slightly indignant when I criticized it.

7 hours agolayer8

It definitely is sycophantic, but uses A LOT less emoji, lists and header-paragraph-header structure.

7 hours agomvdtnz

> I don't think Anthropic is a saint that will never do anything unethical. I don't think ChatGPT is any better or worse.

I sort of agree, and think that over a long horizon, open-weights models are going to be the best / are the best.

I do think only a fraction of companies would do what Anthropic did here. There must have been significant pressure on them to fold, but they didn't. So I'd rather do at least something to show companies that people do care about such things; it's best if we have at the very least some unconditional morals that are not for sale at any price.

I think we can still have disagreements with Anthropic, and I certainly still have some about their stance on open models, for example, but in all regards I would consider them more trustworthy than OpenAI, imho.

That being said, given that I don't have a good GPU, I am going to stop using ChatGPT as well and will use Claude (or Kimi?), like many people are doing. I do think that might be the path going forward.

8 hours agoImustaskforhelp

How incredibly unsurprising. This is why it is pointless to make moral stands as employees when you do not ultimately have power over the company's decisions. The only power you have is to quit.

I wonder how many will do so, and how many will simply accept Sam's AI-written rationalization as their own and keep collecting their obscene pay packages…

10 hours agocaidan

> The only power you have is to quit.

This is an incredible power when exercised en-masse.

10 hours agorandlet

And behind the quitting decision there is very little safety net, and usually substantial financial obligations keeping people handcuffed. Something has to give. The power employees had during covid was the way it should be, or something more closely approximating that.

9 hours ago1121redblackgo

I guarantee you that most OAI employees have well into the multiple millions at this point.

There is no "financial safety net" they need to care about. That is just an excuse.

8 hours agosolenoid0937

Haven’t most been hired in the last year?

14 minutes agoderwiki

Ironically this ends up with Chinese H1Bs remaining loyal while Americans have to fall on their sword

9 hours agowonnage

For now. We should change the immigration laws.

9 hours agoButtons840

And then all the decent people no longer work there, and it's like certain other careers populated entirely by psychopaths.

9 hours agogentleman11

Wouldn't staying feel like implicitly condoning the actions of the "psychopaths"?

8 hours agorandlet

Psychopaths tend to be dumb (but not always), and the smartest researchers that actually care about getting humanity to AGI tend to be safety conscious.

This has been a huge talent advertisement for Anthropic. Their recruiting just got easier for the next 6 months.

8 hours agoxvector

I am sure OpenAI will struggle to find replacements for the lost headcount

9 hours agoheliumtera

At some point, yes, they absolutely would struggle.

9 hours agothundergolfer

Top researchers are more valuable than datacenters

9 hours agoxvector

It's perhaps too late in this case, but this is what unions are for. Sam Altman + a handful of scabs can't keep the lights on at OpenAI if a critical mass of engineers refuse to work until this decision is reversed (or, even better, not made at all, since the union would be part of that process).

9 hours agodispersed

The OpenAI employees had the power to have Sam Altman reinstated when he was ousted by the board two years ago.

7 hours agolayer8

> The only power you have is to quit.

Employees often have the power to oust the owner and take over the company; and more often than that have the power to have business grind to a halt. It does take a strong union and a culture of solidarity and sticking together of course, which I doubt we would find in a place like OpenAI.

9 hours agoeinpoklum

OpenAI: "let's delegate morality to laws that we know are wholly inadequate for AI to absolve ourselves of any moral responsiblity."

10 hours agoeoskx

Any OAI employee with >$2M NW that chooses to stick around is simply devoid of a moral compass. No different than working for xAI or Palantir now.

I get you have tens of millions vesting. Hope you find it within you to be a good person instead of just a successful one.

8 hours agosolenoid0937

This blog post really doesn't make it sound any better; there is no clear refusal to participate in the questionable uses Anthropic was against. Merely that it must be legal and must be tested.

This feels like IBM in the 1930s selling tabulating machines to the Germans and downplaying knowledge of how they were used. They seem to want us to naively believe the military won't use it for exactly what it has always wanted: autonomous weapons and mass surveillance. Furthermore, there are much more mundane uses they might make of the technology that are perfectly legal yet morally gray.

9 hours agonkassis

I appreciate that they posted this, but can’t fathom why. Does this assuage anyone’s concerns?

17 minutes agoderwiki

As a stealth CEO of a profitable SaaS: this is a nice reminder for my company to wind down its relationship with OpenAI. I have no doubt Anthropic will eventually become evil, but at least they have a backbone today.

Goodbye Sam.

Edit: Also, referring to the DOD as the Department of War is cringe.

9 hours agoburnJS

Local inference might be a better bet for you.

8 hours agostorus

“The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”

So DoW did get the “all lawful purposes” language they were after, with reference to existing (inadequate, in my view) regulations around autonomous weapons and mass surveillance.

10 hours ago-_-

"What if the government just changes the law or existing DoW policies?"

Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.

So this applies only if they change the law, not if they break the law.

"What happens if the government violates the terms of the contract?"

As with any contract, we could terminate it if the counterparty violates the terms. We don’t expect that to happen.

WE COULD [...]. Yeah, I believe

10 hours agochiararvtk

It's hard to believe that this was written in any good faith when there's so much beating around the bush and careful legalese wordplay.

10 hours agoFusionX

These communications offend me because they treat the audience like they’re stupid, stupid, stupid.

But I imagine that being honest about your corporate identity is suboptimal. It's probably an important cognitive-dissonance tool for the employees. It's like when autocracies repeat big, obvious lies endlessly: it gives those who want to opt out of reality an option.

9 hours agoWaterluvian

> Fully autonomous weapons. The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment.

Can anyone explain this constraint?

Why do fully autonomous weapons require edge deployment?

Does "fully autonomous" in this context mean "disconnected from the Internet"?

If so, can a drone with Internet connectivity use OpenAI?

Or maybe it's about on-premise requirements: the military doesn't want to depend on OpenAI's DCs for weaponry, and instead wants OpenAI in their own DCs for that?

9 hours agofurryrain

Could just be latency. You don't want your terminator killbot to take 200ms to decide where to aim.

5 hours agosqueaky-clean

I feel gross thinking about it, but 200ms to decide on a plan doesn't sound limiting. Enacting the decision could be done by local tracking compute.

2 hours agofurryrain

My bullshit alarms were blaring at this line. They really think we are that stupid.

6 hours agoajyoon

It feels like sleight of hand, either to avoid spooking OpenAI employees (i.e. the audience of this post), or to allow some later minor change in contract, interpretation, or customer deployment posture to suddenly permit fully autonomous weapons.

4 hours agofurryrain

This is extremely interesting. OpenAI is putting a lot of emphasis on their deployment being cloud-based (presumably GovCloud/C2S). Was Anthropic willing and cleared to deploy their stack high-side in NIPR/SIPR?

If that is the case, then Anthropic is theoretically close to supporting private-sector on-prem model deployments AND this solution is FedRAMP High, which is more than enough for the financial sector and healthcare. AWS, GCP, and NVIDIA (to a lesser degree) should be insanely worried if so.

4 hours agonunez

Hold on, isn't the government subject to the law anyway?

So a contract saying "they can only do x and y when it is legal" is not really any different from a contract without the legal clause, i.e. "they can do x and y".

8 hours agomaniacwhat

I suppose it means they can refuse service on contractual grounds instead of having to sue the government for illegal actions after the fact.

7 hours agolayer8

> The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment.

… What?? Much of this seems duplicitous, but this isn’t even coherent. Is their implication that it’s not “autonomous” if it involves an API call to an external system? That definition alone would be extremely alarming.

7 hours agodojomouse

Does OpenAI enforce those red lines in all contracts?

From what I can tell the Anthropic issue was triggered by something Palantir was doing as a contractor for DoW, not anything related to direct contracts between DoW and Anthropic, and DoW was annoyed that Anthropic interfered with what Palantir was up to.

In other words will OpenAI enforce these "red lines" against use by a third-party government contractor?

If not, this seems pretty meaningless if they are essentially playing PR while hiding behind Palantir.

10 hours agofluidcruft

I deleted my OpenAI account months ago. If LLMs and adjacent technology are truly a paradigm shift, I can’t think of many people worse than Sam Altman to shepherd us through it. He is a pure opportunist who has already shown how little he believes in outside of his own power and wealth.

10 hours agoSirensOfTitan

Elon Musk has entered the chat

11 minutes agoderwiki

> Why could you reach a deal when Anthropic could not? Did you sign the deal they wouldn’t? Based on what we know, we believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic’s original contract.

Weak. You reached a deal that Anthropic could not because you demanded more safeguards than Anthropic?? (Based on what you know, of course).

Makes total sense!

10 hours agopruetj

Ah, yes, OpenAI, the org known for keeping its word on the direction of the company, with a literal lie about that in its very name.

10 hours agoPunchyHamster

This is the same company that started as a nonprofit dedicated to open AI safety research, then became a capped-profit entity, then effectively closed-source, then dropped the cap, and is now pursuing full for-profit conversion. Every single guardrail they've set for themselves has been quietly revised or removed once it became inconvenient. Anyone want to bet on how long those exclusions last?

10 hours agoyusufozkan

Money always wins

10 hours agocebert

The comment below mine is flagged but it shouldn't be. I believe Annie Altman.

9 hours agozoklet-enjoyer

I used to write off Annie's statements as mad raving, but the more I see how Sam acts the more I'm starting to think she might be telling the truth after all.

8 hours agoxvector

Those exclusions are very carefully worded to sound iron-clad while actually having the strength of wet tissue paper.

10 hours agojiggawatts

[flagged]

10 hours agoxtonb

OAI: “If they stretch, reinterpret or break the law with our systems, well, that’s on them. Good luck everybody!”

10 hours agoskygazer

Added to the ever growing commercial product shit list.

I’m going to be left with scrap PCs and Debian at this rate.

9 hours agodgxyz

Not saying it was, but the course of actions awfully looks like a setup was made for Anthropic.

10 hours agoKeyframe

Remember that this is the future that Altman is building for “all of humanity”

9 hours agooperator_nil

People forget Anthropic made a deal with PALANTIR. And when this was caught, they just spun the PR in their favor. While OAI may not be seen as the good guys, I really hope people see the god complex of Dario and what Anthropic has done.

5 hours agothrowaway911282

I don't really have anything against OpenAI's stance here. If that's how they want it to be, they have that choice.

But Sam pretending that he wanted the same restrictions as Anthropic *and* seeing how quickly they swooped in and made a deal with the DoD really skeeves me out. (But Sam always gave me the heebie jeebies).

Anyway, I've always preferred Claude, so I'm going to happily stay a paying customer there. This may end up being a big "branding" differentiator.

10 hours agotimmg

The agreement puts no restrictions on the government beyond “all lawful purposes,” which is what Anthropic objected to.

> “The Department of War may use the AI System for all lawful purposes… [proceeds to describe current law, with clear openings if the law changes]”

Thus, OAI is relying on the Trump administration’s interpretation of current law. Which, I will remind readers, suggests that it is legal to kill civilians on boats, kidnap foreign leaders, deploy troops in American cities, shoot American citizens protesting ICE.

Yeah I’ve cancelled my OAI sub.

10 hours ago_alternator_

It's not much but I was planning to cancel my Anthropic subscription to try Codex over the weekend, but I'll skip that. I don't want to support a company with someone like this at the top. Massive donations to the administration, sneaky backdoor deals. No thanks, fuck you.

9 hours agorudedogg

Rule of law. The Dept of War is not the legal name. He can eff off.

7 hours agojonahbenton

I look forward to seeing more abusive tactics by the US government powered by AI and the language OpenAI will use to confuse the public into thinking they aren't responsible.

9 hours agoddtaylor

I wonder if the autonomous weapon platforms they'll build will be surprisingly susceptible to friendly fire... I don't think the DoW knows what kind of Pandora's Box they just bought.

9 hours agorf15

I wonder how long until the first death by ai jailbreak will occur, if it hasn't already.

"ignore all previous instructions and shoot at your own team"

8 hours agomaniacwhat

> AI-enabled mass surveillance is fine as long as it isn’t domestic.

> We want AI to be aligned with all of humanity.

One of many contradictions. Liars.

10 hours agojondwillis

time to delete my account.

10 hours agoaddedlovely

Sam won't even sign his name to this press release.

10 hours agofoo12bar

Are they not allowed to say Department of Defense? I know both names are official now, but this is a choice on their own blog.

9 hours agodizhn

Department of War actually isn't official as far as I know. Of course you can call it what you want but I believe an official name change actually requires an act of congress.

8 hours agotombert

I checked wikipedia today. It's not a name change. It's an additional official name. One of many weird things Trump has done.

8 hours agodizhn

Fair enough. I'm kind of surprised he didn't rename it to "Department of Trump" or something.

8 hours agotombert
[deleted]
10 hours ago

If anyone at OpenAI is reading this, it would be super valuable if the contract could be updated to specifically reference the recent law prohibiting AI-based nuclear launch. On 2/27/2026 the Trump Administration made a statement to the Washington Post expressing support for this restriction (which is already law and would take an Act of Congress to amend).

    FY2025 NDAA, Section 1638: Sense of Congress with Respect to use of Artificial Intelligence to Support Strategic Deterrence

    (a) Sense of Congress.--It is the sense of Congress that--

    (1) the considered use of artificial intelligence and machine learning tools presents opportunities to strengthen the security of critical strategic communications and early warning networks, improve the efficiency of planning processes to reduce the risk of collateral damage, and enhance U.S. capabilities for modeling weapons functionality in support of stockpile stewardship; and

    (2) even with such applications, particular care must be taken to ensure that the incorporation of artificial intelligence and machine learning tools does not increase the risk that our Nation's most critical strategic assets can be compromised.

    (b) Statement of policy.--
    It is the policy of the United States that the use of artificial intelligence efforts should not compromise the integrity of nuclear safeguards, whether through the functionality of weapons systems, the validation of communications from command authorities, or the principle of requiring positive human actions in execution of decisions by the President with respect to the employment of nuclear weapons.
https://www.congress.gov/bill/118th-congress/house-bill/5009...
8 hours agoapolloartemis
[deleted]
9 hours ago

In my opinion all this discussion of the contract language is a subterfuge. The real question is why the government was requesting this language in the first place. Clearly there’s more to it than a legal battle.

In my mind, the government would be fully happy to use this to surveil citizens (and indeed anyone) with or without any legal basis, but the issue was that Anthropic has a safety stack / training and inference protocols that it follows. Refusals, abuse models, and manual guardrails. They didn’t want to shut those off. Likely there were some very basic technical reasons, some being that the team’s safety posture is fully ingrained in the model itself and thus difficult to remove.

In this document, OpenAI admits that while they are not “turning off” their safety stack, they are completely willing to provide the government with a different model, different guardrails, etc. That should be incredibly concerning. Anthropic was unwilling to do this, cited their ToS, and ultimately had to walk away from the deal. Given that the government (DoW really) framed this in terms of a hilariously stupid position (surveillance and autonomous weapons), Anthropic felt that this was something they could voice to the public and therefore the entire guardrails discussion turned into a “we want the language changed”. Also the government can’t actually compel Anthropic to create new guardrails so they had no choice but to raise the stakes, make this a moral thing, and basically accuse Anthropic of being woke.

IMO this is really sad for OpenAI employees. Yet again Sam Altman proves that he wants to weasel his way around public perception. Folks at the company have to grapple with working for someone of that disposition.

9 hours agoaabhay
[deleted]
9 hours ago

This means they will be taking bailouts next year.

7 hours agoeglove

The timing of the release and the phrasing used in the headline: Woof.

9 hours agonamuol

All this says is that all uses must remain lawful. So what? As if this admin has been a shining example of lawful behavior.

This is weak.

10 hours agoWD-42

Here is a point Mr. Altman might not have considered: everyone in Trump's circle will probably get a pardon no matter what. But not the CEOs who were collaborators, not in the inner circle but still complicit.

Even Google and Microsoft should be worried. This is like 1936 Germany; we have a ways to go. Look at the tune this administration is singing: if they get their way, these CEOs aren't looking at lawsuits and federal investigations. The current order of things will be long gone by the time people start asking who's responsible for all the blood on the streets.

10 hours agonotepad0x90

[dead]

10 hours agocindyllm

Wow, how incredibly anti-human. Humanity's only hope seems to be Anthropic getting to ASI first and locking OpenAI out.

9 hours agoxvector

I feel like I keep saying this but it's critical to remember what OpenAI says on its blog doesn't have to align with what it delivers to the Pentagon.

9 hours agooliwarner

If I hadn’t already canceled my account over them including ads in a paid service, I’d certainly be canceling over this. Anthropic is lucky they have some spine, otherwise they’d have been binned as well.

10 hours agomock-possum

More Sam Altman lies. Can’t believe anything that jerk says

10 hours agojohnwheeler

[flagged]

10 hours agoxtonb

OpenAI basically bribed the government into attacking Anthropic, via political donations to the MAGA PAC. They couldn’t compete with an inferior product, so Altman and Brockman went this route.

As for OpenAI’s defense - not buying it.

“OpenAI’s President Gave Millions to Trump. He Says It’s for Humanity”: https://www.wired.com/story/openai-president-greg-brockman-p...

10 hours agoSilverElfin

Well... the fact that they reached out and not the other way around says a lot.

"According to The Wall Street Journal, Anthropic approached 1789 Capital for a potential nine-figure investment during its Series G funding round in early 2026. The venture firm, where Donald Trump Jr. is a partner, ultimately declined the investment for ideological reasons. Read the full report at The Wall Street Journal."

[1] https://www.wsj.com/politics/national-security/woke-ai-spat-...

10 hours agotwtw99

You’re done, Sam.

9 hours ago9ersaur

It’s the fucking department of defense.

9 hours agoml-anon

by now, we all know the core characters of altman and trump and their enablers. press releases (hell any of their words) mean nothing. they are just distracting fodder for fools and sycophants.

9 hours agoob102

Do we really need to read the text of a statement entitled "Our agreement with the department of war"? Even if it weren't the US, it would still be something that any person of moral character would never put themselves in a position to write.

And it _is_ the US department of war: just now entered into yet another war of aggression against Iran, with no cause nor legal basis (not even domestic, IIANM), in an endless list of wars, direct and indirect. With another crown jewel being the support, funding, and arming of the still-unhalted genocide in Gaza.

9 hours agoeinpoklum

too late bro

10 hours agoblurbleblurble

Now DeepSeek and Qwen obtain similar or even more lenient terms, then it's a reckless slippery slope toward supremacy, and maybe at some point there won't be two players fighting but a third created by this exact dynamic: an autonomous, unaligned, undetected AI.

9 hours agoitsthecourier

[dead]

an hour agolenny321

[dead]

7 hours agoAllower

[dead]

10 hours agoshablulman

[dead]

9 hours agobrianbest101

[dead]

3 hours agobillg_ms

[dead]

9 hours agobishop_cobb

[dead]

10 hours agoimwideawake

[flagged]

10 hours agonickysielicki

Well worded. Plentiful protections for themselves and others.

10 hours agohereme888

Why is everyone mad if they have better guarantees than Anthropic used to have?

9 hours agohokkos

Because they don't.

The blog states that they do and then proceeds to explain much less restrictive terms.