Note (for transparency): I’m not a journalist. I’m a researcher and AI/IT risk practitioner whose PhD work focused on journalists’ experiences with deepfakes and verification. This post translates that research into a practical, adaptable workflow. It supports decision-making. It does not replace editorial judgment, legal advice, or platform enforcement.

A Risk-Scaled Verification Workflow for High-Stakes Media (RSV Model)
Why verification must be risk-scaled (not “one checklist for everything”)
Most verification failures don’t happen because teams “don’t care.” They happen because teams are forced to verify under:
- Time pressure (breaking news windows are short)
- Fragile signals (authenticity metadata/provenance is often missing or not surfaced)
- Variable stakes (a low-impact repost isn’t the same as content that could cause public harm, legal exposure, or significant reputational damage)
A major lesson from recent real-world testing: even when AI-generated content carries authenticity markers such as Content Credentials, platforms may not preserve or clearly display those signals to end users. The result is that "trust signals" remain inconsistent across the distribution chain.
This creates two common operational failure modes:
- Over-verification: treating every item as a crisis → slow, expensive, and often too late.
- Under-verification: moving fast on high-stakes media → credibility, legal, and safety risk.
A risk-scaled workflow solves this by matching verification rigor to impact, likelihood of manipulation, and time pressure.
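To make "matching rigor to risk" concrete, here is a minimal sketch of how such triage could be scored. The tier names, scoring weights, and thresholds below are illustrative assumptions, not part of any published RSV specification; a newsroom would calibrate them against its own editorial standards.

```python
def verification_tier(impact: int, likelihood: int) -> str:
    """Map impact (1-5) and manipulation likelihood (1-5) to a verification tier.

    Hypothetical scoring: higher combined risk -> more rigorous (and slower)
    verification. Thresholds here are illustrative, not prescriptive.
    """
    score = impact * likelihood  # ranges 1..25
    if score >= 15:
        return "full"      # e.g. provenance checks, source contact, expert review
    if score >= 6:
        return "standard"  # e.g. reverse search, metadata, cross-source checks
    return "light"         # e.g. quick plausibility and source checks


def publish_decision(tier: str, minutes_available: int) -> str:
    """Factor in time pressure: it narrows options but never downgrades rigor.

    Assumed minimum verification times per tier (illustrative only).
    """
    minimum_minutes = {"light": 5, "standard": 30, "full": 120}
    if minutes_available >= minimum_minutes[tier]:
        return "verify-then-publish"
    # Not enough time for the required tier: hold or publish with caveats,
    # rather than silently dropping to a lighter tier.
    return "hold-or-caveat"


# Example: high-impact item with strong signs of manipulation, 20 min to deadline.
tier = verification_tier(impact=5, likelihood=4)
print(tier, publish_decision(tier, minutes_available=20))
```

The key design choice, mirroring the text above, is that time pressure changes the *publication decision* (hold, or publish with explicit caveats) rather than the *verification rigor*: this is what prevents the under-verification failure mode.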