Meta is instituting interim safety changes to ensure the company's chatbots don't cause additional harm to teen users, as AI companies face a wave of criticism for their allegedly lax safety protocols.

In an exclusive with TechCrunch, Meta spokesperson Stephanie Otway told the publication that the company's AI chatbots were now being trained to no longer "engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations." Previously, chatbots had been allowed to broach such topics when "appropriate."

Meta will also only allow teen accounts to utilize a select group of AI characters — ones that "promote education and creativity" — ahead of a more robust safety overhaul in the future.

Earlier this month, Reuters reported that so
