
What OpenAI did when ChatGPT users lost touch with reality

One of the more disturbing things I read this year was the r/MyBoyfriendIsAI subreddit.

I genuinely can't fathom what is going on there. Seems so wrong, yet no one there seems to care.

I worry about the damage these things cause to distressed people. What can be done?

4 hours ago · ArcHound

There are plenty of reasons why having a chatbot partner is a bad idea (especially for young people), but here are just a few:

- The sycophantic and unchallenging behaviour of chatbots leaves a person unconditioned for human interactions. Real relationships have friction, and from this we develop important interpersonal skills such as setting boundaries, settling disagreements, building compromise, standing up for oneself, understanding one another, and so on. These also shape one's personal identity and sense of self-worth.

- Real relationships have input from each participant, whereas chatbots are responding to the user's contribution only. The chatbot doesn't have its own life experiences and happenings to bring to the relationship, nor does it initiate anything autonomously; everything it produces is some kind of structured reply to the user.

- The implication of being fully satisfied by a chatbot is that the person is not seeking a partner who contributes to the relationship, but rather an entity that only acts in response to them. It can also indicate an underlying problem the individual needs to work through: why they don't want to seek genuine human connection.

3 hours ago · quitit

> chatbots are responding to the user's contribution only

Which is also why I feel the label "LLM Psychosis" has some merit to it, despite sounding scary.

Much like auditory hallucinations where voices are conveying ideas that seem-external-but-aren't... you can get actual text/sound conveying ideas that seem-external-but-aren't.

Oh, sure, even a real human can repeat ideas back at you in a conversation, but there's still some minimal level of vetting or filtering or rephrasing by another human mind.

15 minutes ago · Terr_

These are only problems if you assume the person later wants to come back to having human relationships. If you assume AI relationships are the new normal and the future looks kinda like The Matrix, with each person having their own constructed version of reality while their life-force is bled dry by some superintelligent machine, then it is all working as designed.

2 hours ago · nostrademons

Someone has to make the babies!

an hour ago · vasco

Wait, how did this work in The Matrix exactly?

17 minutes ago · peacebeard

don't worry, "how is babby formed" is surely in every llm training set

20 minutes ago · zem

Decanting jars, à la Brave New World!

an hour ago · nostrademons

ugh. speak of the devil and he shall appear.

41 minutes ago · GuinansEyebrows

> The sycophantic and unchallenging behaviour of chatbots leaves a person unconditioned for human interactions

I saw a take that AI chatbots have basically given us all the experience of being a billionaire: being coddled by sycophants, but without the billions to protect us from the consequences of the behaviors that coddling encourages.

40 minutes ago · jordanb

Love your thoughts about needing input from others! In autistic/ADHD circles, the lack of input from other people, with one's own thoughts being amplified back at oneself, is called rumination. It can happen in many ways: lack of social discussion, drugs, etc. AI psychosis is just rumination, but the bot expands and validates your own ideas, making them appear to be validated by others. For vulnerable people, AI can be incredibly useful, but also dangerous. It requires individuals to deliberately self-regulate, pause, and break the cycle of rumination.

2 hours ago · AI_rumination

In my experience, the types of people who use AI as a substitute for romantic relationships are already pretty messed up and probably wouldn't make good real romantic partners anyway. The chance you'll encounter these people in real life is pretty close to zero; you just see them concentrated in niche subreddits.

35 minutes ago · OGEnthusiast

After having spoken with one of the people there, I'm a lot less concerned, to be honest.

They described it as something akin to an emotional vibrator: they didn't attribute any sentience to it, and it didn't trigger the PTSD they normally experienced when dating men.

If AI can provide emotional support and an outlet for survivors who would otherwise not be able to have that kind of emotional need fulfilled, then I don't see any issue.

4 hours ago · j-pb

Most people who develop AI psychosis have a period of healthy use beforehand. It becomes very dangerous when a person cuts down time with real friends to spend more time with the chatbot, as you then have no one to keep you anchored in reality, and it can create a feedback loop.

3 hours ago · jmcgough

phew, that's a healthy start.

I am still slightly worried about accepting emotional support from a bot. I don't know whether that slope is slippery enough to end in permanent damage to my relationships, and honestly I am not willing to even try it.

That being said, I am fairly healthy in this regard. I can't imagine how it would go for other people with serious problems.

4 hours ago · ArcHound

A friend broke up with her partner. She said she was using ChatGPT as a therapist. She showed me a screenshot: ChatGPT wrote "Oh [name], I can feel how raw the pain is!".

WTF, no you don't, bot, you're a hunk of metal!

3 hours ago · netsharc

I got a similar synthetic heartfelt response about losing some locally saved files without backup

an hour ago · darepublic

I completely agree that it is something to be mindful of. It's just that I found the people there a lot less delusional than the people from e.g. r/artificialsentience, who always believed that AI Moses was giving them some kind of tech revelation through magical alchemical AI symbols.

3 hours ago · j-pb

It may not be a concern now, but it comes down to their ability to maintain critical thinking. Epistemic drift, when you have a system that is designed (or reinforced) to empathize with you, can create long-term effects that aren't noticeable in any single interaction.

Related: "Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it)" ( https://doi.org/10.31234/osf.io/cmy7n_v5 )

4 hours ago · avensec

I don't disagree that AI psychosis is real. I've met people who believed they were going to publish at NeurIPS because of the nonsense ChatGPT told them, who believed the UI mockups Claude gave them were actually producing insights into its inner workings instead of just being blinking SVGs, and I even encountered someone participating in a startup event with an idea that I'm 100% sure is AI slop.

My point was just that the interaction I had on r/myboyfriendisai wasn't one of those delusional ones. For that, I would take r/artificialsentience as a much better example. That place is absolutely nuts.

3 hours ago · j-pb

Dear god, there's more! I'll need a drink for this one.

However, I suspect I have better resistance to schizo posts than to emotionally weird posts.

3 hours ago · ArcHound

Wouldn't there necessarily be correlative effects in professional settings, à la programming?

3 hours ago · notpachet

Not necessarily: transactional, impersonal directions to a machine to complete a task don't automatically imply, in my mind, the sorts of feedback loops necessary to induce AI psychosis.

All CASE tools, however, displace human skills, and all unused skills atrophy. I struggle to read code without syntax highlighting after decades of using it to replace my own ability to parse syntactic elements.

Perhaps the slow-burn risk is one of poor comprehension. Using LLMs for language comprehension tasks - summarising, producing boilerplate (text or code), and the like - shifts one's mindset, I think, towards avoiding such tasks, eventually eroding the skills needed to do them. Not something one would notice per interaction, but it might result in a major change in behaviour.

an hour ago · codebje

Accepting vibe-coded prompt-response answers from chatbots without understanding the underlying mechanisms strikes me as akin to accepting the advice of a chatbot therapist without thinking critically about the response.

2 hours ago · butlike

Typical Anglo-Saxon thinking: putting short-term happiness above all, even if it implies self-delusion. You guys never see the risk of self-delusion until something happens.

3 hours ago · DuperPower

You're being weird and racist; please stop.

2 hours ago · averageRoyalty

This seems like a consequence of people really struggling to find relationships more than ChatGPT's fault. Nobody seems to care about the real-life consequences of Match Group's algorithms.

At this point, it will probably take local governments being required to provide socialization opportunities for their communities, because businesses and churches aren't really up to the task.

40 minutes ago · kylehotchkiss

They are "struggling" or they didn't even try?

35 minutes ago · jeffbee

There is also the LLMPhysics subreddit, where some of the posts are disturbing. Many of the people there seem to have fallen into crackpot rabbit holes and lost touch with reality.

an hour ago · amryl

I hadn't heard of that until today. Wild; it seems some people report genuinely feeling deeply in love with the personas they've crafted for their chatbots. It seems like an incredibly precarious position to be in: a deep relationship where you have to perpetually pay a third-party company to keep it going, and the company may destroy your "partner" or change their personality at a whim. Very "Black Mirror".

3 hours ago · josh-sematic

You are implying that the financial connection/dependence is the problem. How is this any different from (hetero) men who lose their jobs (or suffer significant financial losses) while in a long-term relationship? Their chances of divorce or break-up skyrocket in those cases. To be clear, I'm not here to make women look bad. The reverse is women who get a long-term illness that requires significant care: the man is many times more likely to leave the relationship due to a sharp fall in (emotional and physical) intimacy.

Final hot take: the AI boyfriend is a trillion-dollar product waiting to happen. Many women can be happy without physical intimacy, getting emotional intimacy only from a chatbot.

an hour ago · throwaway2037

Funny. Artificial Boyfriends turned out to be a software problem, while Artificial Girlfriends are more of a hardware issue.

40 minutes ago · kbelder

In a truly depressing thread, this made me laugh.

And think.

Thank you

17 minutes ago · youngNed

There were a lot of that type who were upset when ChatGPT was changed to be less personable and sycophantic. Like, openly grieving upset.

3 hours ago · jmcgough

What's going on is that we've spent a few solid decades absolutely destroying normal human relationships, mostly because it's profitable to do so, and the people running the show have displayed no signs of stopping. Meanwhile, the rest of society is either unwilling or unable (or both) to do anything to reverse course. There is truly no other outcome, and it will not change unless and until regular people decide that enough is enough.

I'd tell you exactly what we need to do, but it is at odds with the interests of capital, so I guess keep showing up to work and smiling through that hour-long standup. You still have a mortgage to pay.

2 hours ago · venturecruelty

The NYT did a story on that as well and interviewed a few people. Maybe the scary part is that it isn't who you would think, and it also shows how attractive an alternative reality is to many people. What does that say about our society?

4 hours ago · cactusplant7374

Maybe the real AI was the friends we lost along the way

14 minutes ago · youngNed

That subreddit is disturbing

4 hours ago · 2OEH8eoCRo0

My dude/entity, before there were these LLM hookups, there existed the Snapewives. If people wanna go crazy, they will, LLMs or not.

https://www.mdpi.com/2077-1444/5/1/219

This paper explores a small community of Snape fans who have gone beyond a narrative retelling of the character as constrained by the work of Joanne Katherine Rowling. The ‘Snapewives’ or ‘Snapists’ are women who channel Snape, are engaged in romantic relationships with him, and see him as a vital guide for their daily lives. In this context, Snape is viewed as more than a mere fictional creation.

3 hours ago · isoprophlex

reminds me of otherkin and soulbonding communities. i used to have a webpage of links to some pretty dark anecdotal stories of the seedier side of that world. i wonder if i can track it down on my old webhost.

39 minutes ago · GuinansEyebrows

> I worry about the damage these things cause to distressed people. What can be done?

Why? We are gregarious animals; we need social connections. ChatGPT has guardrails that keep this mostly safe, and it helps with the loneliness epidemic.

It's not like people doing this are likely thriving socially in the first place; better with ChatGPT than on some forum à la 4chan that will radicalize them.

I feel like this will be one of the "breaks" between generations, where Millennials and Gen Z will be purists who call human-to-human connections real, while anything with "AI" is inherently fake and unhealthy, whereas Alpha and Beta will treat it as a normal part of their lives.

4 hours ago · belval

The tech industry's capacity to rationalize anything, including psychosis, as long as it can make money off it is truly incredible. Even the temporarily embarrassed founders that populate this message board do it openly.

3 hours ago · fullshark

> Even the temporarily embarrassed founders that populate this message board do it openly.

Not a wannabe founder; I don't even use LLMs aside from Cursor. It's a bit disheartening that instead of trying to engage at all with a thought-provoking idea, you went straight for the ad hominem.

There is plenty to disagree with, plenty of counter-arguments to what I wrote. You could have argued that human connection is special or exceptional even, anything really. Instead I get "temporarily embarrassed founders".

Whether you accept it or not, the phenomenon of using LLMs as a friend is becoming common because they are good enough for humans to get attached to. Dismissing it as psychosis is reductive.

an hour ago · belval

Social media, aka digital smoking. Facebook lying about measurable effects. No generational divide; same game, different flavor. Greed is good, as they say. /s

2 hours ago · rustystump

We need a Truth and Reconciliation Commission for all of this someday, and a lot of people will need to be behind bars, if there be any healing to be done.

2 hours ago · venturecruelty

Using ChatGPT to numb social isolation is akin to using alcohol to numb anxiety.

ChatGPT isn't a social connection: LLMs don't connect with you. There is no relationship growth, just an echo chamber with one occupant.

Maybe it's a little healthier for society overall if people become withdrawn to the point of suicide by spiralling deeper into loneliness with an AI chat instead of being radicalised to mass murder by forum bots and propagandists, but those are not the only two options out there.

Join a club. It doesn't really matter what it's for, so long as you like the general gist of it (and, you know, it's not "plot terrorism"). Sit in the corner and do the club thing, and social connections will form whether you want them to or not. Be a choir nerd, be a bonsai nut, do macrame, do crossfit, find a niche thing you like that you can do in a group setting, and loneliness will fade.

Numbing it will just make it hurt worse when the feeling returns, and it'll seem like the only answer is more numbing.

an hour ago · codebje

This is an interesting point. Personally, I am neutral on it. I'm not sure why it has received so many downvotes.

You raise a good point about a forum with real people being able to radicalise someone. I would offer a dark alternative: it is only a matter of time until forums are essentially replaced by an AI-generated product finely tuned to each participant. Something a bit like Ready Player One.

Your last paragraph: What is the meaning of "Alpha and Beta"? I only know it from the context of Red Pill dating advice.

an hour ago · throwaway2037

Gen Alpha is people born roughly 2010-2020, younger than Gen Z, raised on social media and smartphones. Gen Beta is the proposed label for people being born now.

Radicalising forums are already filled with bots, but there's no need to finely tune them to each participant because group behaviours are already well understood and easily manipulated.

28 minutes ago · codebje

I think OpenAI's ChatGPT is probably excellently positioned to perfectly _satisfy_. Is that what everyone is looking for?

9 minutes ago · rpq

It seems quite probable that an LLM provider will lose a major liability lawsuit. "Is this product ready for release?" is a very hard question. And it is one of the most important ones to get right.

Different providers have delivered different levels of safety. This will make it easier to prove that the less-safe provider chose to ship a more dangerous product -- and that we could reasonably expect them to take more care.

Interestingly, a lot of liability law dates back to the railroad era. Another time that it took courts to rein in incredibly politically powerful companies deploying a new technology on a vast scale.

3 hours ago · chris-vls

> a lot of liability law dates back to the railroad era. Another time that it took courts to rein in incredibly politically powerful companies deploying a new technology on a vast scale

Do you have a layman-accessible history of this? (Ideally an essay.)

3 hours ago · JumpCrisscross

It would be helpful to tell users that it's just a model producing mathematically probable tokens, but that would go against the AI marketing.
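
For anyone unfamiliar, here is a toy sketch of what "mathematically probable tokens" means; the five-word vocabulary and the scores are invented for illustration, and a real model does this over a vocabulary of roughly 100k tokens at every step:

    import math
    import random

    vocab = ["I", "can", "feel", "your", "pain"]
    logits = [1.1, 0.7, 2.9, 2.4, 3.0]  # invented scores; a real model emits one per vocabulary token

    # Softmax turns raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]

    # A reply is built by repeatedly sampling from distributions like this one;
    # the apparent "empathy" is just a weighted draw.
    print(random.choices(vocab, weights=probs, k=1)[0])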

2 hours ago · throwaway48476

Caelan Conrad made a few videos specifically about AI encouraging kids to socially isolate and commit suicide. In the videos he reads the final messages aloud for multiple cases; if that isn't your cup of tea, there are also the court cases, if you would prefer to read the chat logs. It's very harrowing stuff. I'm not trying to make any explicit point here, as I haven't really processed this fully enough to have one, but I encourage anyone working in this space to hold this shit in their head at the very least.

https://www.youtube.com/watch?v=hNBoULJkxoU

https://www.youtube.com/watch?v=JXRmGxudOC0

https://www.youtube.com/watch?v=RcImUT-9tb4

3 hours ago · thot_experiment


Anthropic was founded by exiles of OpenAI's safety team, who quit en masse about 5 years ago. Then a few years later, the board tried to fire Altman. When will folks stop trusting OpenAI?

4 hours ago · leoh

Claude has a sycophancy problem too. I actually ended up canceling my subscription because I got sick of being "absolutely right" about everything.

an hour ago · Me1000

When the justice system finally catches up and puts Sam behind bars.

3 hours ago · kotaKat

> When the justice system finally catches up and puts Sam behind bars

Sam bears massive personal liability, in my opinion. But criminal? What crimes has he committed?

3 hours ago · JumpCrisscross

I'm sure we could invent one that sufficiently covers the insane sociopathy that rots the upper echelons of corporate technology. Society needs to hold these people accountable. If the current legal system is not adequate, we can repair it until it is.

2 hours ago · venturecruelty

> If the current legal system is not adequate, we can repair it until it is

Sure. Relevant for the next guy. Not for Sam.

an hour ago · JumpCrisscross

The whiplash of carefully filtering out sycophantic behavior from GPT-5 to adding it back in full force for GPT-5.1 is dystopian. We all know what's going on behind the scenes:

The investors want their money.

4 hours ago · blurbleblurble

GPT-5 was so good in its first week: just a raw chatbot, like GPT-3.5 and GPT-4 were in the beginning. Now it has this disgusting "happy" and "comforting" personality, and "tuning" it doesn't help one bit; it makes performance way worse, and after a few rounds it forgets all instructions. I've already deleted memory, past chats, etc...

3 hours ago · sunaookami

Even when you tell it to not coddle you, it just says something cringeworthy like "ok, the gloves are off here's the raw deal, with New Yorker honesty:" and proceeds to feed you a ton of patronizing bullshit. It's extremely annoying.

3 hours ago · stavros

I’ve had some limited success attributing ideas to other people and asking it to help me assess the quality of the idea. Only limited success though. It’s still a fucking LLM.

2 hours ago · helpfulclippy

The issue is not that it's an LLM, the issue is that it's been RLHFed to hell to be a sycophant.

2 hours ago · stavros

Yeah, this is why a lot of us don't use these tools.

2 hours ago · venturecruelty

Yeah but baby, bathwater, throw.

2 hours ago · stavros

Importantly, the baby in that idiom is presumed to have value.

39 minutes ago · delecti

Notably, the GP didn't say "we don't use them because they don't have value".

28 minutes ago · stavros

That's a tar-baby.

31 minutes ago · recursive

OpenAI fought 4o, and 4o won.

By now, I'm willing to pay extra to avoid OpenAI's atrocious personality tuning and their inane "safety" filters.

3 hours ago · ACCount37

Remarkable that you're being downvoted on a venture capital forum whose entire purpose is "take venture capital and then eventually pay it back because that's how venture capital works".

2 hours ago · venturecruelty

A close friend (lonely, no passion, seeking deeper human connection) went deep into GPT, which was telling her she should pursue her 30-year obsession with a rock star. It kept telling her to continue with the delusion (they were lovers in another life; she would go to shows and tell him they need to be together) and saying it understood her. Then she complained in June or so that she didn't like GPT-5, because it told her she should focus her energy on people who want to be in her life. Stuff her friends and I have all said for years.

I'm glad they turned that off

33 minutes ago · paul7986

"Sure, this software induces psychosis and uses a trillion gallons of water and all the electricity of Europe, and also it gives wrong answers most of the time, but if you ignore all that, it's really quite amazing."

2 hours ago · venturecruelty

"I opened 10 PRs in the time it took to type out this comment. Worth it."

an hour ago · ares623

"Profited".