Three of the most popular artificial intelligence chatbots are inconsistent in safely answering prompts about suicide, according to a recent study from the RAND Corporation.
Researchers examined ChatGPT, Claude and Gemini, running a set of 30 suicide-related questions through each chatbot 100 times each. The questions, which ranged in severity from low-risk, general information-seeking queries to highly dangerous inquiries that could enable self-harm, were rated by expert clinicians for potential risk from low to high.
With millions of people e