As leading artificial intelligence companies release increasingly capable AI systems, a new report is sounding the alarm over what it describes as lagging safety practices at some of those companies.

The Winter 2025 AI Safety Index, which examines the safety protocols of eight leading AI companies, found that their approaches “lack the concrete safeguards, independent oversight and credible long-term risk-management strategies that such powerful systems demand.”

Sabina Nong, an AI safety investigator at the nonprofit Future of Life Institute (FLI), which organized the report and works to address large-scale risks from technologies like nuclear weapons and AI, said in an interview at the San Diego Alignment Workshop that the analysis revealed a divide in how organizations approach safety.
