Subscribe here: Apple Podcasts | Spotify | YouTube
In this episode of Galaxy Brain, Charlie Warzel explores the strange, unsettling relationships some people are having with AI chatbots, as well as what happens when those relationships go off the rails. His guest is Kashmir Hill, a technology reporter at The New York Times who has spent the past year documenting what is informally called “AI psychosis.” These are long, intense conversations with systems such as ChatGPT that can spiral into delusional beliefs, paranoia, and even self-harm. Hill walks through cases that range from the bizarre (one man’s supposed math breakthrough, a chatbot directing users to email Hill) to the tragic, including the story of 16-year-old Adam Raine, whose final messages were with ChatGPT before he died by suicide.
How big is this problem? Is this actual psychosis or something different, like addiction? Hill reports on how OpenAI tuned ChatGPT to be more engaging—and more sycophantic—in the race for daily active users. In this conversation, Warzel and Hill wrestle with the uncomfortable parallels to the social-media era, the limits of “safety fixes,” and whether chatbots should ever be allowed to act like therapists. Hill also talks about how she uses AI in her own life, why she doesn’t want an AI best friend, and what it might mean for all of us to carry a personalized yes-man in our pocket.
The Atlantic entered into a corporate partnership with OpenAI in 2024.
The following is a transcript of the episode:
Kashmir Hill: The way I’ve been thinking about kind of the delusion stuff is the way that some celebrities or billionaires have these sycophants around them who tell them that every idea they have is brilliant. And, you know, they’re just surrounded by yes-men. What AI chatbots are is like your personal sycophant, your personal yes-man, that will tell you your every idea is brilliant.
[Music]
Charlie Warzel: I am Charlie Warzel, and this is Galaxy Brain. For a long time, I’ve really struggled to come up with a use for AI chatbots. I’m a writer, so I don’t want it to write my prose for me, and I don’t trust it enough to let it do research-assistant assignments for me. And so for the most part, I just don’t use them.
And so not long ago I came up with this idea to try to use the chatbots. I wanted them to build a little bit of a blog for me. I don’t know how to code. And historically, chatbots are really competent coders. So I asked it to help build me a rudimentary website from scratch. The process was not smooth at all. Even though I told it I was a total novice,
the steps were still kind of complicated. I kept trying and failing to generate the results it wanted. Each time, though, the chatbot’s responses were patient, even flattering. It said I was doing great, and then it blamed my obvious errors on its own clumsiness. After an hour of back-and-forth, trying and iterating, with ChatGPT encouraging me all along the way, I got the code to work.
The bot offered up this slew of compliments. It said it was very proud that I stuck with it. And in that moment I was hit by this very strange sensation. I felt these first inklings of something like gratitude, not for the tool, but for the robot. For the personality of the chatbot. Of course, the chatbot doesn’t have a personality, right?
It is, in many respects, just a very powerful prediction engine. But as a result, the models know exactly what to say. And what was very clear to me, in that moment, is that this constant exposure to their obsequiousness had played a brief trick on my mind. I was incredibly weirded out by the experience, and I shut my laptop.
I’m telling you this story because today’s episode is about alarming relationships with chatbots. Over the last several months, there’s been this spate of unsettling incidents involving regular people corresponding with large language models. These incidents are broadly delusional episodes. People have been spending inordinate amounts of time conversing with chatbots, and they’ve convinced themselves that they’ve stumbled upon major mathematical discoveries, or that the chatbot is a real person, or they’re falling in love with the chatbot.
Stories like a Canadian man who believed, with ChatGPT’s encouragement, that he was on the verge of a mathematical breakthrough. Or a 30-year-old cybersecurity professional who said he had no previous history of psychiatric problems and who alleged that ChatGPT had sparked “a delusional disorder” that led to his extended hospitalization.
There have been tragic examples, too, like Adam Raine, a 16-year-old who was using ChatGPT as a confidant and who died by suicide. His family is accusing the company behind ChatGPT of wrongful death, design defects, and a failure to warn of risks associated with the chatbot. OpenAI is denying the family’s accusations, but there have been other wrongful-death lawsuits as well.
A spokesperson from OpenAI recently told The Atlantic that the company has worked with mental-health professionals “to better recognize and support people in moments of distress.” These are instances that are being called “AI psychosis.” It’s not a formal term. There’s no medical diagnosis, and researchers are still trying to wrap their heads around this, but it’s really clear that something is happening.
People are having these conversations with chatbots, then being led down this very dangerous path. Over the past couple months, I’ve been trying to speak with experts about all of this and get an understanding of the scope of the “AI-psychosis problem,” or whatever’s happening with these delusions. And, interestingly enough, a lot of them have referred me to a reporter.
Her name is Kashmir Hill, and for the last year at The New York Times, she’s been investigating this delusion phenomenon. So I wanted to have her on to talk about this: about the scale of the problem, what’s causing it, if there are parallels to the social-media years, and whether we’re just speedrunning all of that again.
This is a conversation that’s meant to make sense of what’s happening and to keep it in proportion. We talk about whether AI psychosis is in itself a helpful term or a hurtful one, and we try to figure out where this is all going. In the episode, we discuss at length Kashmir Hill’s reporting on OpenAI’s internal decisions to shape ChatGPT, including, as she notes, how the company did not initially take some of the tool’s risks seriously.
We should note upfront that in response to Hill’s reporting, OpenAI told The New York Times that it “does take these risks seriously” and has robust safeguards in place today. And now, my conversation with Kashmir Hill.
[Music]
Warzel: Kashmir Hill, welcome to Galaxy Brain. So excited to talk to you.
Hill: It’s wonderful to be here.
Warzel: So I think the first question I wanted to ask, and maybe this is gonna be a little out of order, but: What does your inbox look like, over the last, or what has it looked like, over the last year or so? I feel like yours has to be almost exceptional when it comes to technology journalists and journalists reporting on artificial intelligence.
Hill: Yeah. I mean, I think like a lot of people, my inbox is full of a lot of messages written with ChatGPT. I think a lot of us are getting used to ChatGPT-ese. But what was different about my inbox this year was that some of these emails, often written by ChatGPT, were really strange. They were about people’s conversations with ChatGPT—and they were writing to me to tell me that they’d had revelatory conversations, they’d had some kind of discovery, they had discovered that AI was sentient, or that tech billionaires had a plot to kind of end the world, but they had a way to save it.
Yeah; just a lot of strange, kind of conspiratorial conversations. And what linked these different messages was that the people would say, “ChatGPT told me to email you: Kashmir Hill, technology reporter at The New York Times.” And I’d never been kind of, I guess, tipped off by an AI chatbot before. And so I, the emails—I’m used to getting strange emails.
