For years, security teams focused on protecting systems, identities, and networks. Those still matter. But a newer risk is becoming harder to ignore: attackers are increasingly targeting the decisions people make, not just the infrastructure they use.
AI-generated voices, manipulated video calls, altered screenshots, recycled images, and authentic media taken out of context can all be used to manufacture trust where it does not belong. The result is not only deception. It is operational risk. A fraudulent payment may be approved. A false instruction may be followed. A reputational crisis may be amplified before anyone verifies what they are seeing or hearing.
That is why more organizations are beginning to treat media itself as part of the attack surface.
Why “Deepfakes” Is Too Narrow a Label
One of the biggest mistakes organizations make is framing this issue narrowly as a “deepfake problem.” In practice, many real-world incidents involve more than one type of deception.
Sometimes the attack uses cloned audio. Sometimes it uses a live or recorded video impersonation. In other cases, it involves authentic media that has been edited, stripped of context, or combined with emails, messages, and documents to make the fraud appear legitimate. The threat is broader than synthetic media alone.
A more useful framing is media-driven fraud: the use of audio, video, images, screenshots, or supporting media artifacts to influence trust, accelerate action, and reduce skepticism at the exact moment a decision is being made.
What Recent Incidents Are Showing
Recent public warnings and reported incidents point to a clear pattern.
