Bad actors are increasingly weaving artificial intelligence into their existing operations, according to OpenAI’s latest threat report released Tuesday. While they aren’t inventing new tricks, the company’s investigators found, they are becoming more efficient, blending multiple AI tools to plan their schemes.
Since it began publicly reporting on misuse in early 2024, OpenAI says it has disrupted more than 40 malicious networks that violated its usage policies. Tuesday’s report shows many of those networks used ChatGPT alongside other AI models, such as Anthropic’s Claude or DeepSeek, to streamline cyberattacks, phishing campaigns, and scam messaging, offering a unique window into evolving tactics.