Child safety is at the heart of the bill.
This week, British lawmakers took a major step toward agreeing that the leaders of tech companies could be prosecuted and sentenced to jail time if they fail to keep users safe on their platforms.
The policy is part of an amendment to the UK's Online Safety Bill, legislation that is working its way through Parliament. The amendment won support from both ends of the political spectrum, despite opposition from Prime Minister Rishi Sunak and his government. Sunak was forced to back down this week, as members of Parliament agreed to support the policy of holding senior managers personally liable for safety failures.
It's the latest change to a key piece of legislation that has been a long time in the making. As the bill has evolved, it's provoked debate over the best way to keep people – especially children – safe online. The bill has many critics: those who think it is draconian and poses a risk to cybersecurity and freedom of speech, and those who believe it doesn't go far enough in cracking down on content that could harm internet users. The latest amendment also opens the door to real-world consequences beyond monetary fines.
As the bill progresses further through Parliament, it looks more and more likely that it will eventually pass into UK law, potentially providing a model for other countries around the world that want to introduce internet safety legislation of their own. What is less likely is that the final version of the bill will satisfy digital rights groups and child safety campaigners equally. It may also force tech companies to make significant changes to how they operate in the UK in order to protect themselves and their senior managers from being held criminally liable.
What is the Online Safety Bill?
The Online Safety Bill is the UK's landmark piece of internet safety legislation. It's designed to impose a legal duty of care on tech companies toward their users, requiring them to protect those users from illegal content and activity, including certain types of pornography and fraud.
The bill will require companies to:
- remove all illegal content
- remove content that's banned by their own terms and conditions
- empower users with tools to protect themselves against types of content they don’t want to see
The bill also includes specific provisions aimed at protecting children from being exposed to content including porn and material relating to suicide and self-harm. Social media platforms will be required by law to vet the ages of users and publish risk assessments about the threats their services pose to younger users. Exactly how to verify users' ages is a challenge tech platforms still have to solve.
A number of new criminal offenses will be introduced under the bill, including the sharing of pornographic deepfakes (digitally generated images of real people), cyberflashing (sending pornographic images without consent) and downblousing (taking and sharing images down women’s tops).
What will happen to tech companies that fail to comply?
The draft text of the bill names UK media watchdog Ofcom as the regulator in charge of holding tech companies accountable.
The bill will give Ofcom the power to fine tech companies £18 million ($22.2 million) or 10% of their annual global revenue, whichever is higher, if they fail to remove illegal content. It will also have the power to block noncompliant sites and services.
Under January's amendment, senior managers at tech companies will also be personally liable for failing to protect children from exposure to harmful content. This could lead to prosecution and jail sentences of up to two years.
When was the bill introduced and how far along is it?
A draft of the bill was first published in May 2021, but the origins of the legislation date back much further. Previously known as the Online Harms Bill, the legislation grew out of a 2019 government white paper examining the lack of regulation around harmful content and activity on the internet. It also absorbed the UK's earlier, failed attempt to introduce age verification for access to porn sites.
The government at the time concluded it was necessary to introduce regulation to protect users, especially children, from harmful content. Due to a combination of the COVID-19 pandemic and political upheaval in the UK, the bill was delayed but was later reintroduced as the Online Safety Bill under Prime Minister Boris Johnson.
The bill received increased attention in October at the conclusion of the inquest into the death of British teenager Molly Russell. Russell died by suicide in November 2017, at the age of 14, after viewing extensive material relating to self-harm on Instagram and Pinterest. The coroner in her case concluded that the content Russell viewed contributed to her death and recommended that social media sites introduce stricter provisions for protecting children.
The bill is now concluding its passage through the House of Commons. It will next proceed to the House of Lords, where further amendments will be debated, before it can be voted on and passed into law.
What criticisms has the bill faced?
One major criticism of the Online Safety Bill is that it poses a threat to freedom of expression due to its potential for censoring legal content.
Rights organizations strongly opposed the requirement for tech companies to crack down on content that was harmful but not illegal. An amendment in November 2022 removed mention of “lawful but harmful” content from the text, instead obliging tech companies to introduce more sophisticated filter systems to protect people from exposure to content that could be deemed harmful. Ofcom will ensure platforms are upholding their terms of service.
Child safety groups opposed this amendment, claiming that it watered down the bill. But as the most vocal proponents of the bill, their priority remains ensuring that the legislation passes into law.
Meanwhile, concerns over censorship continue. An amendment to the bill introduced this week would make it illegal to share videos that show migrants crossing the English Channel between France and the UK in "a positive light." Tech companies would be required to proactively prevent users from seeing such content.
There has also been tension between digital rights groups and child safety groups over the topic of encryption. In a letter to the government in November, over 70 organizations, including cybersecurity experts, raised concerns over the wording of the bill, warning that it posed a threat to end-to-end encryption by forcing tech companies to create backdoors that could be exploited by criminals.
“Undermining protections for end-to-end encryption would make UK businesses and individuals less safe online, including the very groups that the Online Safety Bill intends to protect,” the signatories said.
What do tech companies say?
Industry body TechUK, which represents almost 1,000 tech companies including Google, said Friday that the Online Safety Bill was a “much-needed piece of legislation which will create a regulatory framework to enable tech companies and the regulator, Ofcom, to work effectively to protect children online.”
It doesn’t, however, support the amendment to hold senior managers personally liable for failure to comply with the bill. Tech companies of all sizes believe this wouldn’t help them make the internet safer, but would instead cause damage to the UK economy, it said in a statement.
Representatives for Meta, Twitter, Google and Pinterest didn’t respond to individual requests for comment about their positions on the bill.