A survey of cybersecurity bosses found that 62 percent reported AI-powered attacks against their staff over the past year, either through prompt injection or through phony AI-generated audio or video used to fool their systems.
The most common attack vector is deepfake audio calls targeting staff: 44 percent of businesses reported at least one such incident, and six percent of those attacks resulted in business interruption, financial loss, or intellectual property loss. That loss rate drops to two percent when an audio screening service is used.
For video deepfakes the figure was slightly lower, at 36 percent, but five percent of those attacks still caused serious harm.
The problem is that deepfake audio is getting too convincing and cheap, Chester Wisniewski, global fi