For years, security teams have focused on protecting systems, identities, and networks. Those priorities still matter. But a newer risk is becoming harder to ignore: attackers are increasingly targeting the decisions people make, not just the infrastructure they use.
AI-generated voices, manipulated video calls, altered screenshots, reused images, and authentic media taken out of context can all be used to create trust where it does not belong. The result is not only deception. It is operational risk. A fraudulent payment may be approved. A false instruction may be followed. A reputational crisis may be amplified before anyone verifies what they are seeing or hearing.
That is why more organizations are beginning to treat media itself as part of the attack surface.
Why “Deepfakes” Is Too Narrow a Label
One of the biggest mistakes organizations make is treating this issue as only a “deepfake problem.” In practice, many real-world incidents involve more than one type of deception.
Sometimes the attack uses cloned audio. Sometimes it uses a live or recorded video impersonation. In other cases, it involves authentic media that has been edited, stripped of context, or combined with emails, messages, and documents to make the fraud appear legitimate. The threat is broader than synthetic media alone.
A more useful framing is media-driven fraud: the use of audio, video, images, screenshots, or supporting media artifacts to influence trust, accelerate action, and reduce skepticism at the exact moment a decision is being made.
What Recent Incidents Are Showing
Recent public warnings and reported incidents point to a clear pattern.
In May 2025, the FBI's Internet Crime Complaint Center (IC3) warned that malicious actors were impersonating senior U.S. officials using text messages and AI-generated voice messages as part of an ongoing malicious messaging campaign. The warning matters because it confirms that AI-assisted impersonation is not hypothetical or isolated; it is already being used in targeted social engineering operations.
The risk is not limited to government contexts. In 2024, the engineering firm Arup confirmed that an employee in Hong Kong was deceived into transferring roughly US$25 million after participating in a video call in which fraudsters used deepfake representations of senior leaders. That case is especially important because it shows how media can be used to reinforce trust inside an otherwise familiar business process.
