In an attempt to enhance online safety, the UK government has announced
new measures penalising tech companies that fail to act on harmful online
content, PTI reported. The changes will be part of the Online Safety Bill, to
be presented next year. The government is also considering making the promotion
of self-harm illegal.

The government said the companies could face fines of up to 18 million
pounds or 10% of their annual global turnover, whichever is higher.

“We are giving internet users the protection they deserve and are
working with companies to tackle some of the abuses happening on the web,” said
Home Secretary Priti Patel.

“We will not allow child sexual abuse, terrorist material and other
harmful content to fester on online platforms. Tech companies must put public
safety first or face the consequences,” she said.

The plans for the new laws have been outlined by Patel and UK Digital
Secretary Oliver Dowden. Under the plans, social media sites, websites, apps
and other services that permit user-generated content will be closely monitored
to curb the spread of illegal content.

The UK’s communications regulator, the Office of Communications (Ofcom),
will oversee compliance. It will be able to block access to non-compliant
services in the country, as well as impose criminal sanctions on senior
managers.

“Being online brings huge benefits, but four in five people have
concerns about it. That shows the need for sensible, balanced rules that
protect users from serious harm, but also recognise the great things about
being online, including free expression,” said Dame Melanie Dawes, Ofcom’s
Chief Executive.

Tech platforms will be required to take extensive steps to protect
children from harmful content such as grooming, bullying and pornography. The
most widely visited sites will have to clearly state terms and conditions
elaborating on their protocols for handling content that could cause physical
or psychological harm to adults.

Companies will categorise content and activity, and assess the risk of
legal content on their services. They will have to clearly specify in their
terms and conditions what type of ‘legal but harmful’ content is acceptable on
their platforms. Companies like Facebook, Twitter, TikTok and Instagram will
fall under Category 1 and will be required to publish transparency reports
about their protocols for tackling harmful content.