(StatePoint) You answer the phone and hear a familiar voice, but are you sure you know who is on the other end of the line? The correct answer should be "no."

The rapid advancement of artificial intelligence (AI) has armed bad actors with sophisticated tools for impersonation fraud using deepfakes. A deepfake is audio, video or imagery that has been created or altered using AI. The danger is that with just a short sample of audio or video, or even a few images, a criminal can create a deepfake that is almost impossible to detect.

A 2023 National Institutes of Health study found that even when individuals were warned that one of the five videos they were about to review was a deepfake, only 21.6% correctly identified the fraudulent one.
