Legal Advice Centre

Deepfakes and Consent: The Law Finally Catches Up

Someone wakes up to find deepfake nude photos circulating online. Photos that they never took, never posed for, never consented to. There was no legal protection for them … until now.


What are deepfakes?

You may have heard the term “deepfake” in the news recently, but what does it really mean? Deepfakes are videos, pictures or audio clips made with artificial intelligence (AI) to look real. In some cases, deepfaked content can take on the exact likeness of a real person – someone you may know, or even you.

In practical terms, AI tools can stitch together realistic fake images or videos using photos taken from social media or other online sources.

What are deepfakes used for?

While there are benign uses for deepfakes, such as in film or historical reconstruction, the misuse of deepfakes has exploded: fake intimate images, fake endorsements, scams, and identity deception.

Many deepfakes are pornographic. A staggering 96% of the 15,000 deepfake videos found online in September 2019 by the AI firm Deeptrace were pornographic, and 99% of those mapped the faces of female celebrities on to porn performers. Scarlett Johansson, Taylor Swift and Kamala Harris have all been victims of pornographic deepfakes, and women are the subjects of 99% of sexual deepfakes.

It’s not just celebrities being targeted. The BBC documentary “Deepfake Porn: Could You Be Next?” explored the reality of deepfakes being made of private individuals, with images pulled from social media or video calls and sent to deepfake creators and websites.

Grok, the AI chatbot on Elon Musk’s X (formerly Twitter), has been in the news for creating AI deepfakes of women through the “@grok put her in a bikini” trend. The trend began quietly at the end of last year, before exploding at the start of 2026.

According to the Guardian, by 8th January 2026, as many as 6,000 bikini demands were being made to Grok every hour. Whilst for some the trend was humorous, many were open about their desire for instant explicit and extreme content.

Some example requests – many made by men – that Grok completed include:

  • adding blood and bruising to the bodies of women,
  • stripping images of teenage girls and children down to revealing swimwear,
  • giving women disabilities, and
  • making Zendaya and US politician Alexandria Ocasio-Cortez appear as white women.

How it impacts victims:

When you hear about deepfakes, you might think “well it’s just online, it doesn’t affect the real world” or “people will figure out it’s fake”. But the reality is that the impact on victims (whether celebrities or private individuals) can be deep, traumatic and everlasting.

Emotional and psychological harm

Imagine waking up to find a video or image of you doing something sexual that you never did, but it’s been made to look real. The shock and violation are immediate. The harm isn’t just the image itself – it’s the fear that people will believe it, the worry it will keep resurfacing, and the loss of control over how you’re seen online. 

Ashley St Clair, the mother of one of Elon Musk’s children and a victim of the Grok trend, told the Guardian she felt “horrified and violated”, and felt she was being punished for speaking up against Musk, from whom she is estranged, describing the images as revenge porn.

Reputation, career and personal life

Deepfakes don’t discriminate: whether you are a celebrity or not, the stakes are high. A deepfake can damage your reputation, career and personal relationships. For private individuals, especially young people, it can mean bullying, social ostracisation and declining mental health. A 12-year-old girl from West Yorkshire was left “traumatised” when school bullies posted a deepfake pornographic image of her on Snapchat. The lingering sense of being “seen” is deeply traumatic.

Systemic and gendered harms

It’s not just one person’s horror story: deepfakes are frequently part of a broader pattern of gendered abuse. Danielle Citron, a professor of law at Boston University, said: “Deepfake technology is being weaponised against women”. The harm is both personal and societal – it’s part of wider gendered abuse that uses sexualised imagery as a tool of control.

Jessaline Caine, a survivor of child sexual abuse and another victim of the Grok trend, stated that the trend is “a humiliating new way of men silencing women. Instead of telling you to shut up, they ask Grok to undress you to end the argument. It’s a vile tool.”

What does the law say in the UK?

For Adults:

Following amendments made by the Online Safety Act 2023 to the Sexual Offences Act 2003, it is already a criminal offence to share, or threaten to share, intimate images without consent. This includes images that ‘appear to show’ a person – meaning it covers the sharing (but not the creation) of deepfakes of real individuals.

The Online Safety Act 2023 itself makes tech companies legally responsible for user safety, especially for children, by requiring them to prevent illegal and harmful content like CSAM (child sexual abuse material). It puts new duties on social media platforms, search engines and other services to proactively assess risks, remove illegal content, and enforce their terms, with Ofcom (the UK’s communications regulator) enforcing it through potential fines of up to £18 million or 10% of the company’s “qualifying worldwide revenue”, whichever is greater.

Section 138 of the Data (Use and Access) Act 2025 (which came into force on 6th February 2026) inserts further new offences into the Sexual Offences Act 2003 and specifically relates to deepfakes. It creates new offences, such as creating a ‘purported intimate image’ of an adult without consent.

The government has also introduced the Crime and Policing Bill 2025 (its 3rd Reading took place on 25th March 2026, meaning the bill now passes to the Commons for consideration of Lords amendments). If the bill receives Royal Assent and becomes an Act, it will create numerous specific criminal offences relating to the creation and dissemination of deepfake content.

For Children:

It is illegal to create indecent images of children (a person under 18). The Coroners and Justice Act 2009, Protection of Children Act 1978, and Criminal Justice Act 1988 together make it illegal to create, possess or share any computer-generated indecent image of someone under 18 – including deepfakes.

Furthermore, the Online Safety Act 2023 introduced duties for platforms to remove illegal content, including child sexual abuse material (CSAM). These duties include conducting risk assessments to identify potential dangers to children on their platforms, with a focus on preventing child exploitation and abuse. It also places a duty on platforms to take active measures to remove illegal content, including CSAM.

Finally, if the Crime and Policing Bill 2025 passes and receives Royal Assent, it will introduce new criminal offences to possess, create or distribute AI models that have been designed to generate CSAM. It will also criminalise “paedophile manuals” (guides and instructions on how to use or abuse AI tools to create child sexual abuse imagery).

Legal Ramifications of Grok’s trend

The recent Grok trend and the legal protections in the UK show that something doesn’t add up. Whilst most social media companies have been complying with UK law, there is a problem with X. Musk, “unhappy about over-censuring”, ordered staff to loosen the guardrails on Grok last year. Musk himself initially made light of the “put her in a bikini” trend, and later defended the chatbot, posting that critics “just wanted to suppress free speech” along with two AI-generated images of UK Prime Minister Sir Keir Starmer in a bikini.

After immense backlash, X announced Grok will no longer be able to edit photos of real people to show them in revealing clothes, in jurisdictions where it is illegal. This policy extends to paid users of X. However, the Guardian found that it was still possible to create and post videos of real women being stripped down to bikinis. It also found that the standalone version of Grok, known as Grok Imagine - which is easily accessible through a web browser - was still responding to prompts to digitally remove clothes from images of women. In fact, the platform responded by going further than the prompts, creating short videos of women removing their clothes in a sexually provocative striptease.

People can also use virtual private networks (VPNs) to disguise their location and use the internet as if they were in a different country – one that doesn’t have laws in place to protect people from deepfakes.

Due to X’s non-compliance, Ofcom has launched a formal investigation. If found to have broken the law, Ofcom can potentially issue X with a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater.

If X does not comply, Ofcom can seek a court order to force internet service providers to block access to the site in the UK altogether.

The Response to the changes:  

Campaign groups have welcomed the new laws on deepfake abuse, but many believe the reforms still don’t go far enough. 

The #StopImageBasedAbuse coalition praised the creation of an offence of making sexually explicit deepfakes, but argued the law should be based on consent, not on proving the perpetrator’s intent to cause harm. (This intent requirement does not apply to offences of sharing intimate images.)

Following the 2023 campaign by Georgia Harrison, herself a victim of image-based abuse at the hands of her former partner, the prosecution no longer needs to prove an intention to cause distress, making it easier to charge and convict offenders.

Harrison said: “The reforms to the law that has been passed today are going to go down in history as a turning point for generations to come and will bring peace of mind to so many victims who have reached out to me whilst also giving future victims the justice they deserve.” Those who do share intimate images with the intention to cause distress can face tougher punishment on conviction.

Child protection organisations like the Internet Watch Foundation said that they were pleased to see the government prioritise the issue, but stressed that stronger action is needed as AI tools become more easily accessible.

Overall, campaigners argue that new measures mark real progress, but they are only the start of what’s needed to protect adults and children from this fast-growing form of abuse.

What can Victims do?

So, if you or someone you know finds yourself targeted by a deepfake – or you just want to be prepared – here are some practical steps to help.

1.    Document Everything

Take screenshots, and note URLs, time-stamps and usernames. If it is a deepfake of yourself and you are over 18, save copies.

Having a record helps if you later want to report it or take legal action.

2.    Report the Content

Most social media platforms have an option to report manipulated, non-consensual intimate images, as they violate Community Guidelines.

3.    Report it to the Police

You can report illegal deepfakes online.

Your report will be sent directly to the police control room where it will be received by the same team who answer police calls.

4.    Legal advice

Consult a lawyer who has experience in online image-based abuse, privacy or defamation.

The Queen Mary Legal Advice Centre specialises in image-based sexual abuse. You can contact them for free legal advice.

5.    Support and wellbeing

Being targeted or knowing someone who has been targeted is distressing. Don’t underestimate the emotional impact. Talk to trusted friends/family. Below are support organisations you can contact:

Revenge Porn Helpline

The helpline can support adult victims of intimate image abuse (often known as “revenge porn”), including deepfakes, for confidential support and assistance with reporting content that has been shared online.

Childline

A national charity that offers confidential help and advice for young people under 18 to get images and videos removed from the internet. 

Take It Down

A free service that can help you remove or stop the online sharing of nude, partially nude, or sexually explicit images or videos taken of you if you are/when you were under 18 years old.

Report Harmful Content 

A national reporting centre designed to assist everyone in reporting harmful content online in the following areas: threats, impersonation, bullying & harassment, self-harm or suicide, online abuse, violent content, unwanted sexual advances, pornographic content.

The Cyber Helpline

An organisation providing free, expert advice and help for victims of cybercrime, digital fraud and online harm.

Prevention and Awareness

For public and private individuals alike:

  • be aware of your digital footprint and monitor your name in search results. The more images/videos of you online, the more material there is to build deepfakes.
  • speak up about the effects of deepfakes – raise awareness.

The more people who speak up, the more pressure there is on platforms and regulators to act. Efforts led by campaigners and charities have already helped push for the creation of deepfake offences – and forced X to rein in Grok.

Deepfakes are more than just a tech gimmick – they represent a genuine challenge to personal privacy, reputation and consent in the digital world. Laws are finally catching up, platforms are under pressure, and awareness is increasing. If you’re reading this and thinking “maybe it won’t happen to me” — that’s okay. But being informed and prepared gives you a head start. And if you ever are targeted: you’re not alone, there are steps you can take, and you don’t have to suffer in silence.

By Fatimah Shah, Student Blog Writer at QMLAC and LLB Law Student. 

This blog is for information only and does not constitute legal advice on any matter. While we always aim to ensure that information is correct at the date of posting, the legal position can change, and the blogs will not ordinarily be updated to reflect any subsequent relevant changes. Anyone seeking legal advice on the subject matter should contact a specialist legal representative.

Bibliography

Articles:

https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them

https://opdv.ny.gov/tfgbv-deepfakes-and-image-based-abuse

https://www.bbc.co.uk/iplayer/episode/m001c1mt/deepfake-porn-could-you-be-next

https://www.theguardian.com/news/ng-interactive/2026/jan/11/how-grok-nudification-tool-went-viral-x-elon-musk

https://www.bbc.co.uk/news/articles/ckvgezk74kgo

https://www.theguardian.com/technology/2024/jan/31/inside-the-taylor-swift-deepfake-scandal-its-men-telling-a-powerful-woman-to-get-back-in-her-box

https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo

https://www.theguardian.com/technology/2026/jan/16/x-still-allowing-sexualised-images-grok-ai-nudification

https://www.bbc.co.uk/news/articles/ceqz7pyd303o

https://www.police.uk/pu/contact-us/

https://www.endviolenceagainstwomen.org.uk/campaign-win-law-to-stop-deepfake-abuse/

https://www.glamourmagazine.co.uk/article/data-access-bill-amendment-deepfake-laws

https://www.gov.uk/government/news/home-secretary-joins-forces-with-big-tech-to-fight-ai-child-sex-abuse-images

https://care.org.uk/news/2024/10/a-ban-on-deepfakes-must-be-included-in-the-online-safety-act-charity-says

Government and police articles:

https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer

https://www.gov.uk/government/news/government-crackdown-on-image-based-abuse

https://www.police.uk/advice/advice-and-information/online-safety/online-safety/deepfakes-what-is-a-deepfake/deepfakes-reporting-it-to-us/

Where to report:

https://help.snapchat.com/hc/en-gb/articles/7012399221652-How-do-I-report-abuse-or-illegal-content-on-Snapchat

https://help.x.com/en/safety-and-security/report-abusive-behavior

https://www.facebook.com/help/181495968648557/

https://support.google.com/youtube/answer/2802027?co=GENIE.Platform%3DDesktop&hl=en-GB#zippy=%2Creport-a-video

https://support.tiktok.com/en/safety-hc/report-a-problem/report-a-video
