WASHINGTON (AP) — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion.
The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct the hacking campaigns, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.
While concerns about the use of AI to drive cyber operations are not new, what is concerning about the new operation is the degree to which AI was able to automate some of the work, the researchers said.
“While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale," they wrote in their report.
The operation targeted tech companies, financial institutions, chemical companies and government agencies. The researchers wrote that the hackers attacked “roughly thirty global targets and succeeded in a small number of cases.” Anthropic detected the operation in September and took steps to shut it down and notify the affected parties.
Anthropic noted that while AI systems are increasingly being used in a variety of settings for work and leisure, they can also be weaponized by hacking groups working for foreign adversaries. The San Francisco-based company, maker of the generative AI chatbot Claude, is one of many tech developers pitching AI “agents” that go beyond a chatbot's capability to access computer tools and take actions on a person's behalf.
“Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks,” the researchers concluded. “These attacks are likely to only grow in their effectiveness.”
A spokesperson for China's embassy in Washington did not immediately return a message seeking comment on the report.
Microsoft warned earlier this year that foreign adversaries were increasingly embracing AI to make their cyber campaigns more efficient and less labor-intensive. The head of OpenAI's safety panel, which has the authority to halt the ChatGPT maker's AI development, recently told The Associated Press he's watching out for new AI systems that give malicious hackers “much higher capabilities.”
America’s adversaries, as well as criminal gangs and hacking companies, have exploited AI’s potential, using it to automate and improve cyberattacks, to spread inflammatory disinformation and to penetrate sensitive systems. AI can translate poorly worded phishing emails into fluent English, for example, as well as generate digital clones of senior government officials.
Anthropic said the hackers were able to manipulate Claude, using “jailbreaking” techniques that involve tricking an AI system to bypass its guardrails against harmful behavior, in this case by claiming they were employees of a legitimate cybersecurity firm.
“This points to a big challenge with AI models, and it’s not limited to Claude, which is that the models have to be able to distinguish between what’s actually going on with the ethics of a situation and the kinds of role-play scenarios that hackers and others may want to cook up,” said John Scott-Railton, senior researcher at Citizen Lab.
The use of AI to automate or direct cyberattacks will also appeal to smaller hacking groups and lone wolf hackers, who could use AI to expand the scale of their attacks, according to Adam Arellano, field CTO at Harness, a tech company that uses AI to help customers automate software development.
“The speed and automation provided by the AI is what is a bit scary,” Arellano said. “Instead of a human with well-honed skills attempting to hack into hardened systems, the AI is speeding those processes and more consistently getting past obstacles.”
AI programs will also play an increasingly important role in defending against these kinds of attacks, Arellano said, demonstrating how AI and the automation it allows will benefit both sides.
Reaction to Anthropic's disclosure was mixed, with some seeing it as a marketing ploy for Anthropic's approach to defending cybersecurity and others who welcomed its wake-up call.
“This is going to destroy us - sooner than we think - if we don’t make AI regulation a national priority tomorrow,” wrote U.S. Sen. Chris Murphy, a Connecticut Democrat, on social media.
That led to criticism from Meta's chief AI scientist Yann LeCun, an advocate of the Facebook parent company's open-source AI systems that, unlike Anthropic's, make their key components publicly accessible in a way that some AI safety advocates deem too risky.
“You’re being played by people who want regulatory capture," LeCun wrote in a reply to Murphy. "They are scaring everyone with dubious studies so that open source models are regulated out of existence.”
__
O'Brien reported from Providence, Rhode Island.

Associated Press Top News
Law & Crime
Local News in Virginia
America News
Local News in New Jersey
The Hill Politics
Cover Media
AlterNet
RadarOnline