James Bulger’s Mother Urges AI Law to Stop Sharing Graphic Murder Videos Online

Denise Fergus, the mother of murdered toddler James Bulger, is urging the UK government to introduce new legislation to tackle the spread of AI-generated videos that depict child murder victims.

Fergus said that TikTok failed to respond to requests to remove videos featuring AI-generated versions of her two-year-old son speaking about his abduction and murder.

While the government maintains that such videos are already illegal under the Online Safety Act and should be removed by platforms, Fergus argues that the current law does not go far enough to compel platforms to act or prevent the misuse of AI in this way.

TikTok told the BBC it had removed AI videos flagged by the broadcaster for violating its guidelines. A spokesperson added: “We do not allow harmful AI-generated content on our platform and proactively detect 96% of such content before it is reported.”

Similar content was also discovered by the BBC on YouTube and Instagram. Both platforms stated the videos had been taken down for breaching their content policies.

Speaking to the BBC, Fergus described the AI depictions of her son as “absolutely disgusting” and said those sharing the videos don’t realise the emotional damage they cause. “It stays with you. It plays on your mind,” she said. “When you see that image, you can’t escape it.”

Fergus said she planned to raise the issue during a meeting with Justice Secretary Shabana Mahmood.

James Bulger was abducted in 1993 from a shopping centre in Merseyside by two ten-year-old boys, Jon Venables and Robert Thompson. They led him two and a half miles to a railway track, where they tortured and killed him. The case shocked the nation and the two boys became the youngest convicted murderers in modern British history.

The AI videos on social media often feature animated child avatars narrating James’s murder in the first person. These clips appear to be part of a wider trend in which accounts use AI avatars to dramatise violent crimes for views and monetisation.

A YouTube spokesperson said its policies prohibit content that realistically simulates deceased individuals describing their own deaths. A channel named Hidden Stories was permanently removed for “severe violations” of this rule.

“We go on social media and someone who’s no longer with us is suddenly talking to us. How sick is that?” Fergus said. “It’s corrupt, it’s weird—and it should not be happening.”

A government source noted that individuals posting such content could be prosecuted under the Communications Act, which makes it an offence to send grossly offensive or obscene material over a public communications network.

A government spokesperson added: “Using technology for such disturbing purposes is vile. The Online Safety Act considers this content illegal where an offence is committed and requires platforms to swiftly remove it. But we are prepared to go further to safeguard children online.”

The Online Safety Act, passed in 2023 by the previous Conservative government, holds platforms and search engines responsible for protecting users from illegal or harmful content. It is enforced by Ofcom, which is currently developing guidance for compliance.

However, Ofcom cannot compel companies to remove specific posts. Kym Morris, chair of the James Bulger Memorial Trust, believes the government should amend the Act to explicitly address harmful AI-generated content and pass new laws regulating synthetic media and AI misuse.

“This isn’t about censorship,” Morris said. “It’s about protecting dignity, truth, and the emotional wellbeing of victims’ families.”

Plans to include rules for removing “legal-but-harmful” content in the Online Safety Act were dropped due to censorship concerns, leaving gaps that campaigners say still need to be addressed.

Earlier this year, Technology Secretary Peter Kyle admitted the current legislation was unsatisfactory and signalled openness to future reforms. However, sources told the BBC the government has no immediate plans to introduce new laws focused specifically on AI-generated online content.

A narrow AI bill focusing on the regulation of advanced AI models is expected later this year.

Jemimah Steinfeld, CEO of Index on Censorship, acknowledged that the AI videos of child murder victims likely violate existing laws. She warned, however, that expanding regulation could risk unintended consequences, such as restricting legitimate content.

“If it’s already illegal, we don’t necessarily need new regulation,” she said. Still, Steinfeld expressed deep sympathy for Fergus. “Having to relive that trauma again and again as technology evolves—it’s unimaginable.”

An Ofcom spokesperson stated that platforms must assess whether reported content breaks UK law and take appropriate action. The regulator is currently evaluating how platforms are meeting their new obligations under the Online Safety Act and said those failing to protect UK users can expect enforcement.