Essential UK Rules Target Viral Illegal Content Online

The UK is tightening its grip on online safety, proposing new measures designed to force tech platforms to proactively tackle dangerous material spreading rapidly across their services. These potential rules, put forward by the regulator Ofcom, aim to prevent illegal content from “going viral” and introduce stricter controls, particularly concerning child safety on livestreams. The move signals an escalating global effort by governments to hold major online services accountable for the harms facilitated on their platforms.

Why Stronger Online Safety Measures Are Proposed

On Monday, Ofcom initiated a consultation period seeking public and industry feedback on these proposed online safety enhancements. The regulator emphasizes that while existing UK online safety laws provide a foundation, risks online are “constantly evolving.” Oliver Griffiths, Ofcom’s online safety group director, highlighted the need to build on current rules to keep citizens, especially children, safer in the digital realm. Ofcom is already taking “swift enforcement action” against platforms causing concern, but acknowledges that the dynamic nature of technology and online harms requires continuous adaptation.

The new proposals stem from Ofcom’s assessment that more action is needed in three key areas:
Halting the rapid spread of illegal and harmful content, preventing it from achieving viral status.
Addressing harmful content and activities closer to their origin point.
Implementing additional specific protections for children using online services.

Key Proposals Targeting Platforms

The proposed measures cover a range of issues, from intimate image abuse to the danger posed by witnessing physical harm on livestreams. A central focus is shifting the burden onto platforms to prevent harmful trends before they explode online.

Specific proposals include:
Preventing Virality: Requiring platforms to implement systems that limit or stop illegal content from rapidly spreading and reaching a large audience.
Child Livestreaming Safeguards: Imposing limits on features such as the ability for users to send virtual gifts to children who are livestreaming or to record a child’s livestream without explicit permission.
Proactive Detection: Potentially requiring larger platforms to assess whether they need to deploy technology to proactively detect harmful content, such as terrorist material.

These measures vary in scope depending on the platform’s size and the type of content it hosts. For instance, platforms that allow a single user to livestream to many viewers, where illegal activity might be depicted, could be required to provide a mechanism for users to report content showing “the risk of imminent physical harm.” Potential requirements to use proactive content detection technology to protect children, however, would typically apply only to the largest tech firms, identified as posing the highest risks for relevant harms.

Addressing Systemic Weaknesses?

While regulators like Ofcom push for stricter rules, the proposed measures are not without criticism. Ian Russell, chair of the Molly Rose Foundation, established in memory of his daughter who died after viewing self-harm content online, argues that these proposals are merely “sticking plasters” that fail to address fundamental issues within the existing Online Safety Act.

Russell believes Ofcom’s approach lacks ambition and will struggle to keep pace with the rapidly evolving landscape of online harm, including emerging threats related to suicide and self-harm content. He called for the Prime Minister to intervene, advocating for a strengthened Online Safety Act that would comprehensively compel tech companies to identify and fix all risks associated with their platforms, rather than focusing on specific types of harm after they emerge. This criticism highlights the ongoing debate between targeted regulation and systemic reform in the fight for a safer internet.

Industry Response and Global Context

Tech platforms are already navigating a complex global landscape of increasing regulatory demands. The UK’s Online Safety Act is part of this broader trend. Some platforms have already made changes in anticipation of, or in response to, concerns about child safety, particularly around livestreaming features. For example, following investigations, TikTok raised the minimum age for users to go live from 16 to 18 in 2022, and YouTube recently announced it would raise its livestreaming age threshold to 16 starting July 22nd.

However, the stringent nature of the UK’s laws, which include potential fines of up to £18 million or 10% of global annual turnover, may have unintended consequences. Legal experts suggest that some smaller social media platforms might choose to avoid the UK market entirely rather than comply with the demanding moderation requirements. These platforms may prefer to maintain policies allowing a wider range of content, including material others deem illegal or harmful, rather than face significant penalties. Ofcom’s chief executive has reportedly confirmed that some “smaller companies” have indeed opted to “geo-block” the UK to avoid hosting “very, very risky and illegal content.”

This highlights a tension: while large platforms like Meta and TikTok are expected to remain and grapple with compliance, smaller entities might simply withdraw, potentially limiting user choice in the UK market. The pressure on platforms isn’t unique to the UK; governments globally, in varying political contexts, are seeking to influence content moderation, sometimes through legal threats or economic leverage, as seen in countries like India and Thailand.

Adding another layer of complexity, major platforms are also signaling shifts in their content moderation philosophies. Meta CEO Mark Zuckerberg recently indicated Facebook would scale back its third-party fact-checking and ease restrictions on certain topics, citing concerns about “too much censorship” and political bias. This move, potentially influenced by political shifts in the US, demonstrates a counter-pressure on platforms to allow broader expression, even as regulators in other regions like the UK push for stricter controls on harmful and illegal content. This creates a challenging balancing act for platforms operating globally, navigating often contradictory demands from different jurisdictions.

The Path Forward

Ofcom’s consultation on these proposed measures is open until 20 October 2025. The regulator seeks feedback from a wide range of stakeholders, including online service providers, civil society organizations, law enforcement, and members of the public.

The consultation marks the next step in implementing the UK’s sweeping online safety framework. It signals that the regulatory push to make the internet safer is ongoing and adaptable, constantly seeking to address new threats and hold platforms accountable for the content they host and how it spreads. The proposals underscore a determination to move beyond reactive content removal towards proactive prevention, particularly when it comes to illegal material achieving viral reach and protecting vulnerable users, notably children.

Frequently Asked Questions

What types of illegal content do UK’s new online safety proposals target?

Ofcom’s proposals aim broadly at preventing illegal content from going viral. Specific examples highlighted include content related to intimate image abuse, terrorist material, and potentially illegal activity depicted in livestreams, such as content showing the risk of imminent physical harm. The focus is on content that spreads rapidly or poses significant risks, especially to children.

How can the public share feedback on Ofcom’s latest online safety plans?

Ofcom has opened a public consultation to gather views on its proposed online safety measures. Interested parties, including members of the public, service providers, civil society groups, and law enforcement, can submit their feedback directly to Ofcom. The consultation period is open until 20 October 2025.

What are the consequences for tech platforms ignoring UK online safety laws?

The UK’s Online Safety Act carries significant penalties for non-compliance. Platforms found to be in breach of the rules face potential fines of up to £18 million or 10% of their global annual turnover, whichever amount is greater. In severe cases of non-compliance, the regulator could even seek to have sites blocked in the UK. This has led some smaller platforms to reportedly avoid operating in the UK market altogether.

Ultimately, the UK’s proposed measures reflect a commitment to evolving online safety regulation alongside technological changes. They place increased responsibility on tech firms to not just react to harmful content, but to prevent its spread and protect users proactively. The consultation process will be crucial in shaping the final form of these important rules.
