OpenAI began testing a new safety routing system in ChatGPT over the weekend, and on Monday introduced parental controls to the chatbot – drawing mixed reactions from users.
The safety features come in response to numerous incidents in which certain ChatGPT models validated users’ delusional thinking instead of redirecting harmful conversations. OpenAI is facing a wrongful death lawsuit tied to one such incident, filed after a teenage boy died by suicide following months of interactions with ChatGPT.
The routing system is designed to detect emotionally sensitive conversations and automatically switch mid-chat to GPT-5-thinking, which the company sees as the model best equipped for high-stakes safety work. In particular, the GPT-5 models were trained with a new safety feature that OpenAI calls