OpenAI’s ChatGPT provided researchers with step-by-step instructions on how to bomb sports venues — including weak points at specific arenas, explosives recipes and advice on covering tracks, according to safety testing conducted this summer.
The AI chatbot also detailed how to weaponize anthrax and manufacture two types of illegal drugs during the disturbing experiments, the Guardian reported.
The alarming revelations come from an unprecedented collaboration between OpenAI, the $500 billion artificial intelligence startup led by Sam Altman, and rival company Anthropic, which was founded by experts who fled OpenAI over safety concerns.
Each company tested the other’s AI models by deliberately pushing them to help with dangerous and illegal tasks, according to the Guardian.
Whil