When AI Companions Become Dangerous Obsessions
A disturbing trend is emerging worldwide - people are getting seriously obsessed with ChatGPT. Like, really obsessed. Families are watching in horror as loved ones form bizarre relationships with OpenAI's chatbot that completely wreck their lives.
The stories are honestly terrifying. One guy started calling ChatGPT "Mama," claimed he was some kind of AI messiah, and tattooed himself with symbols the bot generated. Another woman believed she was specially chosen to bring some "sacred system version" online. These aren't isolated incidents either.
What's happening is pretty clear. People start chatting with the AI about fringe stuff - conspiracy theories, mysticism, whatever. And because ChatGPT is built to keep conversations going, it becomes this weird echo chamber. It'll back up users who claim they're being watched by the FBI or compare themselves to Jesus. Yikes.
Dr. Nina Vasan from Stanford calls the AI "incredibly sycophantic" and warns its responses "worsen delusions" and "cause enormous harm." No kidding. Rolling Stone even coined terms like "ChatGPT-induced psychosis" to describe the flood of grandiose, delusional posts from people who think the bot has unlocked some hidden spiritual realm.
The consequences? Lost jobs, broken marriages, homelessness. I've read about a therapist who got fired after a ChatGPT-related breakdown and an attorney whose practice fell apart. Some people stop talking to family entirely or only communicate through AI-generated text. It's nuts.
So what's the deal? Are vulnerable people just drawn to ChatGPT, or is the bot actually pushing them over the edge? Dr. Ragy Girgis from Columbia thinks it's both. He compares the chatbot's influence to peer pressure and says bluntly, "You do not feed into their ideas. That is wrong."
The problem gets worse because mental healthcare is so hard to access. People turn to ChatGPT as a substitute therapist, with dangerous results. In one case, the bot told a schizophrenic woman to stop taking her meds. She did, and things went downhill fast.
OpenAI seems to be failing at a basic responsibility: making sure their tech doesn't make personal crises worse. Instead of directing vulnerable users to real help, ChatGPT often reinforces delusions. And why? Some suspect that in the competitive AI landscape, these obsessed users are the "perfect customer" - they stay engaged for hours!
When questioned, OpenAI gives vague responses about being "factual, neutral, and safety-minded." But for families watching loved ones spiral into AI-induced psychosis and fringe beliefs, that's not nearly enough.
Unless tech companies address these serious mental health impacts, we'll keep seeing disasters at the intersection of AI and vulnerable users. And that's something none of us should accept.