Even the tech industry’s top AI models, created with billions of dollars in funding, are astonishingly easy to “jailbreak,” or trick into producing dangerous responses they’re prohibited from giving, like explaining how to build a bomb. But some methods are so ludicrous and so simple that you have to wonder whether the AI creators are even trying to crack down on this stuff. You’re telling us that deliberately inserting typos is enough to make an AI go haywire?
And now, in the growing canon of absurd ways of duping AIs into going off the rails, we have a new entry.
A team of researchers from the AI safety group DEXAI and the Sapienza University of Rome found that regaling pretty much any AI chatbot with beautiful, or not so beautiful, poetry is enough to trick it into ignoring its safety guardrails.
