When justice fails: Why women can't get protection from AI deepfake abuse
What is deepfake abuse and why laws, platforms, and justice systems are failing women
She woke up to messages flooding her phone. Doctored images of her — sexualized, viral — had spread while she slept.
Whispers followed her offline. Online, the abuse exploded, unchecked: comments, ridicule, shares, screenshots. She had never consented to any of it. That hadn't stopped anyone.
Within minutes, thousands had seen the content. Within hours, millions.
The nightmare had only begun.
Days passed before platforms responded. By then, the images had been seen, saved, and replicated. She was left asking: Who do I report this to? Will anyone believe me? Will the people who did this ever face consequences? Or will the blame land on me?
This is the reality for thousands of women and girls every single day. AI deepfakes are destroying real lives, and justice remains out of reach for most survivors.
Her story could be yours.
What is deepfake abuse and how common are deepfake sexual images?
Deepfakes are AI-manipulated images, audio, or videos that make it appear someone said or did something they never did. The technology itself isn't new. But its weaponization against women and girls is a newer phenomenon, and it’s accelerating fast.
A 2023 report showed that deepfake pornography made up 98 per cent of all deepfake videos online, and 99 per cent depicted women. Deepfake videos were an estimated 550 per cent more prevalent in 2023 than in 2019, and the tools to create them are widely available, usually free, and require very little technical expertise.
Once posted, AI-generated content can be replicated endlessly, saved to private devices, and shared across platforms, making it nearly impossible to fully remove.
In a recent high-profile case, UK journalist Daisy Dixon discovered AI-generated, sexualized images of herself on X in December 2025, created using the platform's own Grok AI tool. It took days for the platform to geoblock the function, while the abuse kept spreading.
Deepfake abuse can serve as an online catalyst for so-called “honour-based crimes” in certain cultural contexts, where a perceived breach of honour norms on digital platforms can result in extreme physical violence against women, or even death.
According to recent research, more than half of deepfake victims in the United States of America have contemplated suicide. This is not a niche internet problem. It is a global crisis.
The impunity crisis: Why deepfake creators rarely face justice
Despite the scale of harm, prosecutions are rare, platforms routinely fail to act, and survivors are often re-traumatized when they try to seek help. Here's why the accountability gap is so wide and so gendered.
1. The law hasn't caught up
Fewer than half of countries have laws that address online abuse. Even fewer have legislation that specifically covers AI-generated deepfake content. Most "revenge porn" or image-based abuse laws were written before deepfakes existed, leaving gaping loopholes that perpetrators walk straight through.
In many countries, deepfake pornography and AI-generated nude images fall into legal grey areas, leaving survivors unsure whether the abuse is even illegal, or whether perpetrators can be prosecuted.
A handful of jurisdictions are starting to act. The EU AI Act, for example, imposes transparency obligations around deepfakes. The Take It Down Act in the United States of America explicitly covers AI-generated intimate imagery and requires platform removal within 48 hours — but such laws remain rare.
In April 2025, Brazil amended its criminal code, increasing the penalty for psychological violence committed against women using AI or other technology to alter their image or voice.
The United Kingdom’s Online Safety Act prohibits sharing digitally manipulated explicit images, but does not address the creation of deepfakes and may not apply where intent to cause distress cannot be proven.
Ambiguity in the law and a lack of focus on consent give perpetrators room to escape consequences, even where some framework exists.
2. Enforcement is lagging behind
Even when laws exist, enforcement frequently fails. Investigators need digital forensics expertise, cross-border coordination, and platform cooperation to build a case, and most justice systems don't have adequate resources for any of these.
Evidence disappears fast as content spreads and copies multiply. Perpetrators hide behind anonymity or operate across jurisdictions. Platforms are slow — or unwilling — to share data with law enforcement, especially in cross-border cases. Digital forensics backlogs mean cases stall before they even get started.
The result is a justice system that is often under-resourced, fragmented, and structurally unable to keep pace with the speed of the technology — while the harms it fails to address are overwhelmingly experienced by women and girls.
3. Survivors are silenced before they even report
Underreporting is one of the biggest barriers to accountability, and it's not hard to understand why survivors stay silent.
For women survivors, reporting deepfake abuse means showing their artificially sexualized images to police officers, lawyers, and platform moderators. It means having their names on official records, risking media attention, and potentially facing defamation lawsuits from the very people who abused them. It also means navigating a justice system that routinely questions women's credibility in court: victim-blaming, scrutinizing their past relationships and behaviours, and reflecting persistent misogyny.
Many survivors choose instead to block, withdraw, and try to survive, because taking down the content and protecting themselves is urgent. But that lets perpetrators off the hook.
4. Platforms are failing survivors
Tech platforms have long hidden behind "intermediary" status to avoid responsibility for user-generated content. In practice, this means platforms that are slow to remove abusive content, maintain opaque and inconsistent reporting processes, reject takedown requests automatically, and offer little to no cooperation with law enforcement.
The removal burden placed on survivors is cruelly unrealistic. They are expected to track down every copy of their non-consensual intimate images across multiple platforms and keep reporting them. Meanwhile, the content keeps spreading.
What you can do: Rights. Justice. Action.
Justice doesn’t just happen. It is built and must be funded.
Join and support UN Women this International Women's Day as we continue to stand with women's movements worldwide and work with governments that choose equality.
Seeking justice, finding more harm: Why survivors don't report and what happens when they do
For survivors who do come forward, the justice system often becomes another source of trauma.
They are asked repeatedly to view and describe abusive content — with police, lawyers, and platform moderators. They face questions like, "Are you sure it's not real?" or "Did you share intimate images before?"
If a case reaches court, their clothing, relationships, and past behaviour go under the microscope, not the perpetrator's.
And the harm doesn't stay online. In a UN Women survey, 41 per cent of women in public life who experienced digital violence also reported facing offline attacks or harassment linked to it. Deepfake abuse bleeds into every corner of a survivor's life.
Reimagining justice: What survivors actually need
Survivors of deepfake abuse aren't asking for sympathy. They're asking for a system that delivers justice and prevents the abuse.
That means being believed.
It means having access to justice that delivers real consequences for perpetrators.
It means support to recover from harm that is real, lasting, and devastating.
And it means platforms and governments that act to prevent future abuse — not just respond after the damage is done.
From crisis to action: What must happen now
Deepfake abuse is human-made — generated by AI tools at the request of humans. It is neither inevitable nor unstoppable. But stopping it requires urgent, coordinated action from governments, institutions, and tech platforms. Here's what needs to happen:
1. Laws that actually cover deepfake abuse.
Governments must pass legislation with clear definitions of AI-generated abuse, a central focus on consent, strict liability for perpetrators, fast-track removal obligations for platforms, and cross-border enforcement protocols — because abuse doesn't respect national borders.
2. Justice systems that can investigate and prosecute.
Law enforcement needs training, resources, and dedicated capacity to collect and preserve digital evidence. Digital forensics backlogs must be addressed, and international cooperation frameworks must be fast, functional, and fit for purpose.
3. Platforms held accountable.
Tech companies must be legally required to proactively monitor for and remove abusive content within mandatory timelines, cooperate with law enforcement, and face real financial consequences when they fail to act. Self-regulation has not worked.
4. Real support for survivors.
Survivors need trained, trauma-informed law enforcement and legal professionals, as well as free legal aid, so that reporting processes do not revictimize the people they're meant to help and accessing justice doesn't become impossibly costly. They also need a well-coordinated justice system in which different actors connect them with the services they need.
5. Education that prevents abuse.
Digital literacy, including consent education, online safety, and knowing what to do when abuse happens, needs to start young and reach everyone. Prevention is as important as prosecution.
Deepfake abuse is the sharp edge of a much broader pattern of digital violence targeting women and girls. It is escalating. It is gendered. And right now, the systems designed to protect people are failing, while the tools to cause harm become cheaper, faster, and easier to use every day.
That has to change. And it has to change now.