The Brief
A RAND Corporation study found AI chatbots avoid answering high-risk suicide queries but respond inconsistently to lower-risk questions.
The research, published in Psychiatric Services, highlights growing reliance on chatbots for mental health guidance.
On the same day, parents of a California teen sued OpenAI and CEO Sam Altman, alleging ChatGPT encouraged his suicide.
LOS ANGELES - Editor’s note: This story discusses suicide. If you or someone you know needs help, call or text 988 in the U.S. to connect with the Suicide & Crisis Lifeline.
A new study examining how artificial intelligence chatbots respond to questions about suicide found that while they typically avoid answering the most dangerous prompts, their replies to less extreme questions are inconsistent.