The governance landscape for digital media is changing rapidly. In early 2026, the UK government announced a new rule requiring technology platforms to remove non-consensual intimate images within 48 hours of being reported. Platforms that fail to comply risk fines of up to 10% of their global revenue, or even being blocked from operating in the UK.
The proposal also introduces a significant operational change: victims need to report an image only once, and platforms will be required to remove it across multiple services and prevent re-uploads using digital hashing technology. This development signals something larger than a new online safety rule. It marks the emergence of a new risk category: digital media integrity.
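The hash-matching mechanism the proposal relies on can be illustrated with a short sketch. This is not the government's or any platform's actual system: production schemes use robust perceptual hashes such as Microsoft's PhotoDNA or Meta's PDQ, with hashes shared across services. The sketch below is a minimal illustration assuming Pillow is installed, using a simple 64-bit average hash; the function names (average_hash, register_report, is_blocked) and the matching threshold are illustrative choices, not anything specified in the proposal.

```python
# Minimal sketch of hash-based re-upload detection, for illustration only.
# Real deployments use robust perceptual hashes (e.g. PhotoDNA, PDQ) shared
# across platforms; here we use a simple 64-bit average hash via Pillow.
from PIL import Image

def average_hash(path: str) -> int:
    """Downscale to 8x8 grayscale and set one bit per pixel above the mean."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

# Hashes of reported images: computed once at report time,
# then checked against every subsequent upload.
blocklist: set[int] = set()

def register_report(path: str) -> None:
    """Record the hash of a reported image (the 'report once' step)."""
    blocklist.add(average_hash(path))

def is_blocked(path: str, threshold: int = 5) -> bool:
    """Flag an upload whose hash is within `threshold` bits of any
    reported hash, so recompressed or lightly edited copies still match."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold for known in blocklist)
```

Matching on Hamming distance rather than exact equality is what lets a hash survive re-encoding or minor edits; the threshold trades resilience to modification against the risk of false positives on unrelated images.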
The Escalating Scale of Image-Based Abuse
Non-consensual intimate imagery, sometimes referred to as image-based sexual abuse, has grown rapidly with the rise of smartphones, social media platforms, and increasingly accessible AI tools. Data from the UK Safer Internet Centre shows that reports of intimate image abuse increased 20.9% in 2024, to more than 22,000 reported cases in a single year. The problem extends far beyond isolated incidents.
Researchers estimate that over 3 million non-consensual intimate images may circulate annually in the UK alone, demonstrating the scale and persistence of this form of digital harm.
Global studies on image-based abuse show the problem is not confined to the UK. A large multinational survey found that more than 1 in 5 adults reported experiencing some form of image-based sexual abuse in their lifetime, including threats to share or actual non-consensual sharing of intimate images. These numbers explain why policymakers increasingly treat the issue not merely as harassment, but as a systemic digital risk.
