Amid the controversy over AI-generated “actress” Tilly Norwood, the State of California yesterday adopted one of the first laws to regulate the development, oversight and safety of AI.

The language in SB53 repeatedly targets potential “catastrophic risks” posed by AI models that may “materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident.”

It includes the Transparency in Frontier Artificial Intelligence Act, which sets forth mechanisms to ensure “greater transparency, whistleblower protections and incident reporting systems” among AI developers.

Specifically, SB53 requires that an AI developer:

1.) Incorporate “national standards, in