California Gov. Gavin Newsom (D) signed a bill Monday placing new guardrails on how artificial intelligence (AI) chatbots interact with children and handle issues of suicide and self-harm.
S.B. 243, which cleared the state legislature in mid-September, requires developers of “companion chatbots” to create protocols that prevent their models from producing content about suicidal ideation, suicide or self-harm, and that direct users to crisis services if needed.
It also requires chatbots to issue “clear and conspicuous” notifications that they are artificially generated if someone could reasonably be misled to believe they were interacting with another human.
When interacting with children, chatbots must issue reminders every three hours that they are not human. Developers are also required to