
Young teenagers on TikTok can easily access hardcore porn content, a new study has found.

By creating fake accounts for 13-year-olds, researchers at the non-governmental organisation Global Witness found they were quickly offered highly sexualised search terms. Despite setting the app to “restricted mode”, the researchers were able to click through to videos showing everything from women flashing to penetrative sex.

Most of these videos had been designed to evade detection, and TikTok has moved swiftly to take them down.

But the fact that they were easy to find is a fresh blow to the UK Online Safety Act. The act, which came into force over the summer, requires tech companies to prevent children from being able to access pornography and other harmful content.

There have already been reports that Meta’s efforts to make Instagram safer for young people are similarly ineffective.

There are also concerns about a surge in VPN downloads and traffic to pirate sites by users aiming to get around the restrictions in the act. Some are now calling for VPNs to be banned, though that seems unlikely.

It would be easy for parents reading these reports to conclude that there is nothing that can be done to make the internet safe for their children. Many will increasingly be tempted to ban access outright rather than try to navigate the risks in a more measured way.

Certainly, the act was introduced for good reason. According to research published in 2025, one in 12 children had been exposed to online sexual exploitation or abuse. An EU report from 2021 that surveyed more than 6,000 children also found that 45% had seen violent content and 49% had encountered cyberbullying.

Yet there is a danger in downplaying progress. Since the act was introduced, most of the top 100 adult sites have introduced age checks or blocked UK access – and so have sites that allow pornographic content such as X and Reddit.

It wouldn’t be the first time that media coverage has over-focused on the online risks to children. Take Roblox, for instance. Launched in 2006, it’s a “virtual universe” that allows users to create their own content. As of 2025 it has over 85 million daily active users, of which 39% are below the age of 13.

The site has come in for heavy criticism for incentivising harmful content by rewarding creators for attracting high engagement from other users, while lacking adequate content moderation to prevent violations of the rules.

This has exposed children to undesirable things such as Nazi roleplaying games and sexual content. One much-quoted report published in 2024 even declared it an “X-rated paedophile hellscape”.

True enough, children can potentially be exposed to harmful content on the platform – as with any platform built on user-generated content. But a paedophile hellscape? It’s worth reflecting that Roblox is also being studied by researchers for its ability to help young people learn and explore their identities in more wholesome ways (it also introduced extra safeguards for children a few months ago).

To stress, there are online risks parents need to contend with, but the way these risks are reported does not help. TikTok, for instance, has been in trouble over its content before. In 2022, research by the Centre for Countering Digital Hate concluded that children can be exposed to harmful content every 39 seconds – with one newspaper turning this into a headline about TikTok’s “thermonuclear algorithm”.

Given that some parents already lack confidence in managing digital technology, this kind of sensationalist language doesn’t help. A 2024 study points to the “joy, connection and creativity” that children also experience on the platform.

We’ve been here before

In truth, we’ve been hearing about the technological threat to children for a very long time. The 1935 New York study, Radio and the Child, argued that radio presented a new insidious threat as it encroached into the family home and children’s bedrooms: “No locks would keep this intruder out, nor can parents shift their children away from it.”

A 1941 study from San Francisco, Children’s Reactions to Movie Horrors and Radio Crime, called the technology a “habit-forming practice very difficult to overcome, no matter how the aftereffects are dreaded”.

A few years later, television had become the focus of parental fears. According to the 1962 BBC Handbook: “Nobody can afford to ignore the dangers of corruption by television through violence or through triviality, especially the young.”

Soon came the 1964 Television Act, which introduced the 9pm watershed. It prevented broadcasters from showing programmes unsuitable for children before that time, which seems quaint next to today’s concerns.

The clear pattern is that one generation’s moral panic becomes a source of amusement to the next one as they focus on a new threat. Time and again, the coverage is so distorted and inflamed that it makes parents feel more anxious and estranged. This makes managing the actual risks much more difficult.

The negotiated alternative

The reality is that restrictive approaches by parents can be counterproductive, especially as they may encourage children to be evasive. Children’s instincts, according to the research, are to talk about potentially harmful material with their parents, but they’re less likely to do so if parents take a hard line, since it makes them fear they’ll be judged. In other words, a hard line can make children more likely to consume harmful content, not less.

The alternative is for parents to adopt a strategy of negotiated decision-making with their children. Instead of viewing online material as alien or inherently negative, it becomes proactively integrated into family life. One researcher described it as “living out family values through technology”. It becomes about accepting risk with a view to building children’s resilience.


Unfortunately this sits uncomfortably with the current rhetoric around the internet, since it recommends moderated, negotiated exposure to something “thermonuclear”. Compounding this is the abundance of digital pundits offering reactionary and unworkably prescriptive advice. Any parent who deviates from the “recommended” screen time for their children risks being judged for treating technology as a “digital pacifier”.

Just like the watershed before it, the Online Safety Act mitigates risks but won’t remove them altogether. It is ultimately still on parents to decide how to deal with them in accordance with their family values. The more endlessly negative the reporting around this area becomes, the more difficult that is.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Phil Wilkinson, Bournemouth University


Phil Wilkinson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.