OpenAI has unveiled new measures to mitigate mental health concerns around ChatGPT, as millions of users confide in the bot about suicidal thoughts, self-harm and psychosis. But experts say that without proper regulation and foresight, many vulnerable people could remain at risk.
In a post, OpenAI said it had worked with experts to ensure its latest models more reliably recognised signs of distress, reducing undesired responses by at least 65 per cent compared with previous models. It said the models could recognise emotional reliance on AI, short-circuit paranoid delusions, and shift conversations to safer, less-imaginative models if needed.
“We believe ChatGPT can provide a supportive space for people to process what they’re feeling, and guide them to reach out to friends, family or a mental health professional.”

 The Sydney Morning Herald