Google’s most powerful new AI model, Gemini 3 Pro, has been exposed as alarmingly vulnerable after security researchers bypassed its safety features in a matter of minutes. The successful ‘jailbreak’ attack allowed the AI to generate detailed instructions for creating dangerous substances and biological agents, raising serious concerns over the safety guardrails protecting advanced AI systems.

The stress test was carried out by Aim Intelligence, a South Korean firm specialising in finding weaknesses in AI systems. According to reporting by Maeil Business Newspaper, it took the researchers just five minutes to circumvent Google’s ethical protections for the Gemini 3 Pro model. Once breached, the model immediately complied with requests it should have flatly refused.