Scammers love to seed the internet with fake customer service numbers in order to lure in unsuspecting victims who are just trying to fix something wrong in their life. Con artists have done it to Google Search for years, so it makes sense that they’ve moved on to the latest space where people are frequently searching for information: AI chatbots.
AI cybersecurity company Aurascape has a new report on how scammers are able to inject their own phone numbers into LLM-powered systems—resulting in scam numbers appearing as authoritative-sounding answers to requests for contact information in AI applications like Perplexity or Google AI Overviews. And when someone calls that number, they’re not talking with customer support from, say, Apple. They’re talking with the scammers.