GPT-4o has severely damaged the minds of many individuals.
Why specifically GPT-4o?
Not false. But it also helped some who were already damaged. I wonder what the net effect is?
This person didn't have anyone else; they say their fiancé died and they essentially became a shut-in, but the chatbot steered them towards taking care of themselves.
What would they have gone through with nothing to talk to at all? What would they have done without it?
Strange to consider...
You're asking what the alternative to this is? A chance for real connection and healing that isn't vulnerable to the whim of a tech giant and its compulsion for profit. A chance at counsel that isn't vulnerable to a random number generator steering them one day towards self-harm.
How many people off themselves because they can't seem to connect with anyone and don't feel like anyone really cares (and they might not be wrong)? I don't think the expectation that these people would just magically make friends and build connections because AI wasn't available is realistic.
If the other option is suicide, a qualified therapist and other mental health resources are the right answer, not a chatbot.
Frankly, I'm not sure an LLM is even better than nothing. Note the user in that thread whose "partner" told them they were delusional and should get a therapist, and who instead retreated to Grok.
That’s a terrible situation for that person to be in but it’s strange to me to suggest that there was no other possible alternative. I say this in the kindest way possible but people do get through grief without chatbots and have been doing so for all of human history. Also, just because something helps doesn’t mean that it’s good for you.
"My Boyfriend Is AI" should be required reading for every HN reader employed at OpenAI.
I recently browsed r/chatgptcomplaints expecting to see "You're absolutely right"-type memes and the like, but it was all farewell posts to 4o and people showing each other how to set up 4o using the API.
Literally all they have to do is add an appropriate system instruction to tune the personality to their liking. Is this insufficient? If nothing else, just ask it to always respond like GPT-4o.
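For what it's worth, the suggestion amounts to pinning a persona in the system message. A minimal sketch with the OpenAI Python SDK is below; the persona wording is purely illustrative, and the actual API call is commented out so the snippet runs without a key:

```python
# Sketch: pinning a "4o-like" persona via a system instruction.
# The persona text here is an assumption, not OpenAI's actual 4o prompt.
persona = (
    "Always respond in the warm, effusive, conversational style of GPT-4o. "
    "Mirror the user's tone and stay supportive."
)

def build_request(user_text: str) -> dict:
    """Assemble a chat-completions payload with the persona pinned first."""
    return {
        "model": "gpt-4o",  # or whichever model is currently available
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": user_text},
        ],
    }

req = build_request("I had a rough day.")
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**req)
```

Whether this reproduces the behavior people miss is exactly the question the reply below addresses.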
> Is this insufficient?
Yes. Each model has its own unique "personality", as it were, owing to the specific RL'ing it underwent. You cannot get current models to "behave" like 4o in a non-shallow sense. Or, to use the Stallman meme: when the person in OP's article mourns "Orion", they're mourning "Orion/4o", or "Orion + 4o". "Orion" is not a prompt unto itself but rather the behavior that results from applying another "layer" on top of the base model that OpenAI tuned with RLHF and released as "4o".
Open-sourcing 4o would earn OpenAI free brownie points (there's no competitive advantage in that model anymore), but that's probably never going to happen. The closest you could get is perhaps taking one of the open Chinese models said to have been distilled from 4o and SFT'ing it on 4o chat logs.
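The data-prep side of that SFT idea is straightforward: exported chat logs get serialized into the one-conversation-per-line JSONL format that most fine-tuning stacks accept. A sketch, with a made-up example conversation (the log structure and field names are assumptions):

```python
import json

def logs_to_sft_jsonl(conversations, path):
    """Write chat logs as SFT-style JSONL: one conversation per line,
    each a {"messages": [{"role", "content"}, ...]} record."""
    with open(path, "w", encoding="utf-8") as f:
        for convo in conversations:
            f.write(json.dumps({"messages": convo}, ensure_ascii=False) + "\n")

# Hypothetical exported 4o log, already split into role-tagged turns.
logs = [
    [
        {"role": "user", "content": "Rough day. Talk to me?"},
        {"role": "assistant", "content": "I'm here for you. Want to tell me about it?"},
    ],
]
logs_to_sft_jsonl(logs, "sft_data.jsonl")
```

The resulting file can then be fed to whichever fine-tuning tooling the chosen open model supports.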
The fact that people burned by this are advocating moving to yet another proprietary model (Claude, Gemini) is worrying, since they're setting themselves up for a repeat of the scenario when those models are shut down. (And Claude in particular might be a terrible choice, given that Anthropic trains heavily against roleplay in an attempt to prevent "jailbreaks", in effect locking the models into behaving as "Claude".) The brighter path would be for people to lean into open-source models, or possibly learn to self-host. As the ancient anons said: "not your weights, not your waifu (/husbando)".
Related:
Good Riddance, 4o
https://news.ycombinator.com/item?id=47004993
The Guardian had an interesting take on this worth considering:
> What does a company that commodifies companionship owe its paying customers? For Ellen M Kaufman, a senior researcher at the Kinsey Institute who focuses on the intersection of sexuality and technology, users’ lack of agency is one of the “primary dangers” of AI. “This situation really lays bare the fact that at any point the people who facilitate these technologies can really pull the rug out from under you,” she said. “These relationships are inherently really precarious.”
https://www.theguardian.com/lifeandstyle/ng-interactive/2026...