Microsoft fixed a security hole in Microsoft 365 Copilot that allowed attackers to trick the AI assistant into stealing sensitive tenant data – like emails – via indirect prompt injection attacks.
But the researcher who found and reported the bug to Redmond won't get a bug bounty payout, as Microsoft determined that M365 Copilot isn't in-scope for the vulnerability reward program.
The attack uses indirect prompt injection – embedding malicious instructions into a prompt that the model can act upon, as opposed to direct prompt injection, which involves someone directly submitting malicious instructions to an AI system.
Researcher Adam Logue discovered the data-stealing exploit, which abuses M365 Copilot's built-in support for Mermaid diagrams, a JavaScript-based tool that allows users to

The Register

Bozeman Daily Chronicle Sports
The Conversation
New York Magazine Intelligencer
AlterNet
Bored Panda
Desert Sun Sports
TheFashionCentral
RadarOnline