A turning point in Europe’s bid to regulate the internet
The safety of users’ online experience is not a new concern, and it has been debated with growing urgency as technology becomes ingrained in every aspect of our lives, across all age groups. However, despite much discussion, it is only in the last few months that material legislation has been published that addresses head-on the difficult task of regulating activity deemed harmful, and the obligations on online service providers to act on such activity.
The Public Policy team across Instinctif’s Dublin, Brussels and London offices have examined how policymakers in these three jurisdictions have approached the dilemma of placing online harmful content on a legal footing while balancing the rights of expression and privacy. The Irish Online Safety & Media Regulation Bill (OSMR); the EU Digital Services Act (DSA); and the UK Online Safety Bill have all made progress in their bid to become the gold standard in creating a safer, more accountable online environment.
Each piece of legislation seeks a similar outcome but invariably prioritises certain aspects of online activity over others. In this three-part series, we examine the current state of play of each jurisdiction’s policy, and importantly, how they interact with each other. As Europe grapples with regulating the internet, from data protection to cybersecurity; online commerce to competition – it is online safety that is at a pivotal crossroads. That is where there’s the potential to alter how society interacts with the internet in the future and how online services will disseminate user-generated content.
UK’s Online Safety Bill targets a “duty of care” approach
By Sophie Wheale & Ross Melton
On 12 May 2021, the UK Government published the draft Online Safety Bill, setting out a range of fines and other sanctions for services hosting illegal or harmful content online. Last week, on 17 March 2022, the Online Safety Bill was formally introduced in Parliament.
This Bill is based upon the approach outlined within the Government’s Online Harms White Paper, published in April 2019, and the responses to the supporting Department for Digital, Culture, Media and Sport consultations, released in December 2020.
While the Bill was initially presented as a limited response to growing public concerns around the risks of pornography, bullying and grooming of children online, it has evolved considerably in the intervening three years.
The scope of the Bill has widened to include far stronger regulation of online content hosts (particularly Facebook and Google) in response to declining public trust in the capability of digital giants to self-regulate after a series of scandals exposing unethical behaviour.
Online safety and real-world consequences
Calls for the Bill to impose a stronger duty of care upon large social networks to monitor and moderate the content they host have been strengthened by two events. The first was the evidence of Facebook whistleblower Frances Haugen to the DCMS select committee, which exposed the digital giant’s role in profiting from the spread of dangerous misinformation during a pandemic. The second was the tragic murder of Sir David Amess MP in October 2021 by an extremist radicalised by online content – notable as the second murder of a sitting UK MP in five years linked to online extremism.
However, critics of the Bill have suggested that it risks imposing unreasonable censorship, while strengthening the position of market incumbents by raising regulatory barriers to innovative new entrants and healthy competition. They also argue that some suggested provisions, such as the proposal that the Bill implement a form of online identity verification, are impractical.
More recently, a new legal duty has been added to the Online Safety Bill requiring the largest and most popular social media platforms and search engines to prevent paid-for fraudulent adverts appearing on their services. This is in response to the recent proliferation of fake ads during the pandemic, including those where fraudsters impersonate celebrities or companies to steal people’s personal data, peddle dodgy financial investments or break into bank accounts. Online networks currently profit from hosting these fraudulent adverts. This has been a great win for campaigners, including MoneySavingExpert.com (MSE) and its founder Martin Lewis, who is a frequent target of fraudulent impersonators.
Last week, Tech and Digital Minister Chris Philp MP confirmed plans that the Bill would also criminalise “cyberflashing” – the unsolicited sending of pornographic content online. This is particularly prevalent on peer-to-peer communication networks, such as Facebook-owned encrypted chat WhatsApp, and dating apps.
Separately, the government is also launching a consultation on proposals to tighten the rules for the online advertising industry. This would bring more of the major players involved under regulation and create a more transparent, accountable and safer ad market. The consultation is open for 12 weeks from Wednesday 9 March 2022.
Harmful or misleading adverts, such as those promoting negative body images, and adverts for illegal activities such as weapons sales, could be subject to tougher rules and sanctions. Influencers failing to declare they are being paid to promote products on social media could also be subject to stronger penalties.
The Bill’s progress
The Bill’s First Reading last Thursday marks welcome progress after three years of delays and uncertainty.
The Online Safety Bill imposes duties of care regarding illegal content and content that is harmful to children upon online network providers that allow users to upload and share user-generated content, known as “user-to-user services” – essentially social networks like Facebook and Instagram. The Bill also goes on to impose duties on providers of search engines like Google, known as “search services”, to moderate search results.
The Bill confers powers on the Office of Communications (Ofcom) to oversee and enforce the new regulatory regime and to establish codes of practice. If found to have breached their responsibilities, networks face fines of up to £18 million or 10% of their global annual turnover – whichever is higher.
Shadow Culture Secretary Lucy Powell MP has argued that the Bill does little to address the growing threat of misinformation online, from COVID and climate change deniers, to Russian attempts to undermine UK democracy.
Writing on ConservativeHome on Tuesday 15 March, shortly before introducing the Bill, Culture Secretary Nadine Dorries MP defended the delays while arguing that a revised Online Safety Bill would include new provisions to protect online free speech. Her intervention illustrates the tricky balance the Government needs to strike between providing urgently needed protections for the public and the UK political system, and appeasing divided backbenchers. It will be interesting to see how both tech giants and safety campaigners react, and whether Dorries has succeeded in striking the balance that eluded so many of her predecessors.
Next time in this series we’ll be looking at the latest developments of online safety legislation in the European Union.