Australia has asked four artificial intelligence (AI) chatbot companies to explain how they keep children safe from sexual or self-harm content, as the country’s internet regulator strengthens its online safety rules.
The realistic conversational abilities of such services have taken the world by storm, but have also fuelled concerns that a lack of guardrails exposes vulnerable users to dangerous content.
In a statement, the eSafety Commissioner said it sought details of safeguards against child sexual exploitation, pornography and material promoting suicide or eating disorders.
It sent notices to Character Technologies, owner of the celebrity-simulation chatbot tool character.ai, and rivals Glimpse.AI, Chai Research and Chub AI.
‘Darker Side’
“There can be a darker side to some of