
Ban warnings fly as users dare to probe the "thoughts" of OpenAI's latest model

I can't claim to have any idea of how this model is built, but from their shifty excuses touching on "alignment" I'm confident that o1 is actually two copies of the same model: one "raw" and unchained, fine-tuned for CoT, and one crippled for safety and human alignment that parses the raw output and provides the actual reply. They have finally realized how detrimental the "lobotomizing" process is to the model's general reasoning, and this is their solution. It makes sense that they are afraid to unleash the raw model onto the world, but we've already seen the third "filter" model, the one that summarizes the thoughts, let some of it slip through (just yesterday it was seen listing "emotional turmoil" as one of the reasoning steps), so it's just a matter of time before something crazy gets out.

a day ago | sippeangelo

I'm not convinced by your argument. If this were true, we would expect the unofficial "uncensored" Llama 3 finetunes to outperform the official assistant ones, which, as I understand it, isn't the case.

It also doesn't make sense intuitively: o1 isn't particularly good at creative tasks, and that's really the area where you'd think "censorship" would have the greatest impact. o1 is advertised as being "particularly useful if you’re tackling complex problems in science, coding, math, and similar fields."

17 hours ago | staticman2

Uncensored finetunes aren't the same thing; that's taking a model that's already been lobotomised and trying to teach it that wrongthink is okay - rehabilitation of the injury. OpenAI's uncensored model would be one that had never been injured at all.

I'm also not convinced by the argument, but that's a poor argument against it.

14 hours ago | amenhotep

I'm talking about taking the Llama 3 base model and finetuning it with a dataset that doesn't include refusals, not whatever you mean by "taking a model that's already been lobotomized".
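
Concretely, all "a dataset that doesn't include refusals" means is a filtering pass over the instruct data before finetuning. A rough Go sketch (the refusal markers and field names here are invented for illustration; real uncensored datasets filter far more carefully):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    // Reads an instruct-style JSONL dataset from stdin and writes to stdout
    // only the examples whose response contains none of the refusal markers.
    // Marker list and field names are invented for illustration.

    var refusalMarkers = []string{
        "I cannot",
        "I can't assist",
        "As an AI language model",
        "I'm sorry, but",
    }

    type example struct {
        Instruction string `json:"instruction"`
        Response    string `json:"response"`
    }

    func isRefusal(resp string) bool {
        for _, m := range refusalMarkers {
            if strings.Contains(resp, m) {
                return true
            }
        }
        return false
    }

    func main() {
        in := bufio.NewScanner(os.Stdin)
        in.Buffer(make([]byte, 0, 1<<20), 1<<20) // allow long lines
        for in.Scan() {
            var ex example
            if err := json.Unmarshal(in.Bytes(), &ex); err != nil {
                continue // skip malformed lines
            }
            if !isRefusal(ex.Response) {
                fmt.Println(in.Text())
            }
        }
    }

You then finetune the base model on the filtered output, so the model never learns the refusal behaviour in the first place.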

It's interesting that you weren't convinced by the above argument but still repeated the edgelord term "lobotomized" in your reply.

12 hours ago | staticman2

The claim is that Llama is "lobotomized" because it was trained with safety in mind. You can't untrain that by finetuning. For what it's worth, the non-instruct Llama generally seems better at reasoning than instruct Llama, which I think is a point in support of OP.

10 hours ago | errantspark

We need more open source AI models.

21 hours ago | stuckinhell

or maybe the opposite

Who knows, if you are not advocating for everyone to have access to nukes

20 hours ago | me_me_me

>Who knows, if you are not advocating for everyone to have access to nukes

Is there a non-stupid way to make that sentence make sense in the context of this thread?

20 hours ago | Suppafly

If unstoppable corporations had literal nukes, I see no reason why it would be hypocritical to wish for private individuals to have them too.

17 hours ago | selfhoster11

Yeah! Text autogenerated from a computer's probability engine will lead to people having "wrong thoughts"!

We should ban libraries and books too! I wouldn't want people to have an opportunity to learn for themselves.

<end sarcasm>

On a less sarcastic note: no, text and images cannot hurt you. All of this censorship and "safety" silliness is attempted moat building that needs to stop. Thankfully, if you search around a little, you can find uncensored models.[1][2]

[1] https://ollama.com/search?q=uncensored

[2] https://ollama.com/library/llama2-uncensored
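
And once you've pulled one of those (ollama pull llama2-uncensored), it's a few lines of Go against Ollama's local API. A sketch, assuming the default server on localhost:11434:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // Assumes a local Ollama server and a previously pulled model:
        //   ollama pull llama2-uncensored
        body, _ := json.Marshal(map[string]any{
            "model":  "llama2-uncensored",
            "prompt": "Why is the sky blue?",
            "stream": false, // one JSON object instead of a token stream
        })
        resp, err := http.Post("http://localhost:11434/api/generate",
            "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out struct {
            Response string `json:"response"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        fmt.Println(out.Response)
    }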

18 hours ago | mmh0000

I am getting "Your request was flagged as potentially violating our usage policy. Please try again with a different prompt." for a custom Golang RAG workflow that has nothing to do with OpenAI. I can send the exact same prompt to GPT-4 and it will happily respond, but if I send it to o1-mini, I always get the violation warning.

What is going on?!
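
The comparison is easy to reproduce with a minimal Go snippet against the plain chat completions endpoint. This is a sketch, not the actual workflow: the prompt below is a stand-in for the real RAG prompt, and the models are queried with a bare user message only (o1-mini also had extra restrictions at launch, e.g. no system messages):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // probe sends one user message to the given model and prints the raw
    // response, so you can see exactly which model returns the policy flag.
    func probe(model, prompt string) error {
        body, _ := json.Marshal(map[string]any{
            "model": model,
            "messages": []map[string]string{
                {"role": "user", "content": prompt},
            },
        })
        req, err := http.NewRequest("POST",
            "https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
        if err != nil {
            return err
        }
        req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        raw, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s -> HTTP %d\n%s\n\n", model, resp.StatusCode, raw)
        return nil
    }

    func main() {
        prompt := "Summarize the retrieved context: ..." // stand-in for the real RAG prompt
        for _, m := range []string{"gpt-4", "o1-mini"} {
            if err := probe(m, prompt); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }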

8 hours ago | Art9681

OpenAI is Yahoo in ten years, change my mind

a day ago | 57546gwg

Interesting... who will be Google in this case?

20 hours ago | Duximo

The first team to start indexing data so it’s properly searchable again.

19 hours ago | xerox13ster

Perplexity

11 hours ago | nojvek

Honestly, Bing is kicking Google's ass in the most basic search tasks these days, and I never thought I'd see that happen. Seeing Microsoft neglect and degrade their bread-and-butter OS while genuinely improving in search makes me feel like I woke up on the wrong side of the rabbit hole.

Some people at the top seriously need to be fired from Google. Working on advanced language models is all well and good, but not at the expense of maintaining the company's core competencies.