For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users' privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content?
Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.
But now, according to internal company documents obtained by NPR, up to 90% of all risk assessments will soon be automated.
In practice, this means that critical updates to Meta's algorithms, new safety features and changes to how content can be shared across the company's platforms will be mostly approved by a system powered by artificial intelligence — no longer subject to scrutiny by staffers tasked with evaluating those risks.