An incredible article on so many levels.
1. That they have built such a system based on Llama
2. That OpenAI has a “principal investigator” (with a hilarious picture)
3. That OpenAI can monitor what type of software people are building when they use Copilot, ChatGPT, etc.
Where did they say this? No linked sources. (Edit: just posted on their site: "Disrupting malicious uses of AI", https://openai.com/global-affairs/disrupting-malicious-uses-...)
Also, is this some kind of passive 'blame Meta', 'vilify open source' campaign by OpenAI?
Don't know what the NYT post says, but the OpenAI report is about users of THEIR models.
Guess after DeepSeek used OpenAI's APIs to distill OpenAI's models when training R1, they started looking more thoroughly into what their users are doing?
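For anyone wondering what "distilling via the API" actually means: roughly, you query the teacher model at scale and use its answers as supervised fine-tuning data for your own smaller student model. A minimal sketch, assuming the `openai` Python client and an API key in the environment; the model name and prompts here are illustrative placeholders, not anything DeepSeek is known to have used:

    # Harvest a teacher model's completions via its API and save them
    # as chat-format fine-tuning data for a student model.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical prompt set; real distillation uses a huge prompt corpus.
    prompts = [
        "Explain gradient descent in two sentences.",
        "Summarize the CAP theorem.",
    ]

    with open("distill_data.jsonl", "w") as f:
        for prompt in prompts:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # stand-in teacher model (assumption)
                messages=[{"role": "user", "content": prompt}],
            )
            answer = resp.choices[0].message.content
            # One training example per line, in the common chat JSONL
            # format used for supervised fine-tuning of a student.
            f.write(json.dumps({
                "messages": [
                    {"role": "user", "content": prompt},
                    {"role": "assistant", "content": answer},
                ]
            }) + "\n")

This kind of bulk prompt/response harvesting is exactly the usage pattern a provider can see on their side, which is presumably how it got noticed.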