Tough new UK tech rules herald end to ‘era of self-regulation’


Technology companies in the UK face new rules, sanctions and oversight by an independent regulator as the government declared the “era of self-regulation” in the sector to be over.

The government will impose a new legal “duty of care” on companies to take steps to tackle illegal and harmful activity on their services, according to plans to be announced in a white paper on Monday.

Under the proposals, companies will have to take “reasonable and proportionate action” to tackle “online harms” — ranging from terrorist content and child sexual exploitation to problems that “may not be illegal but are nonetheless highly damaging” such as disinformation, extremist content and cyberbullying.

Senior managers could be held personally liable if the rules are breached, and their companies could be fined or banned in the UK.

Policymakers in many countries are beginning to try to tame the powers of tech companies, but Britain stands out for being particularly sweeping in its new approach.

Margot James, digital minister, called the government’s plans the most ambitious adopted by any G7 country. “I think this is groundbreaking in its scale and scope,” she said.

The new rules will apply to a broad range of online companies including file-hosting sites, public discussion forums, messaging services and search engines, rather than just social media platforms.

An independent regulator, funded by the industry, will oversee and enforce the rules.

There will be a 12-week consultation period on the proposals, after which the government will set out its final plans. One key decision still to be made is whether the regulator should be a new body or an existing one such as Ofcom. Ms James said the government was “genuinely undecided” on that question, adding that a hybrid entity, combining Ofcom with the Information Commissioner’s Office, was another option.

TechUK, the trade body, called the white paper a “significant step forward” but added that making directors liable for content, or blocking providers, were “heavy handed” approaches.

“By going down this route the UK risks setting precedents that will be abused by less open and less democratic jurisdictions,” said Vinous Ali, head of policy at TechUK. “The UK government should be mindful that governments and investors around the world are watching closely how it handles these issues.”

On terrorism and child sexual exploitation, the Home Office will have the power to direct the regulator on codes of practice that set out what companies should do to fulfil their new “duty of care”. Terrorist content shared on social media will have to be taken down “in a short pre-determined timeframe,” the government added.

“The era of self-regulation for online companies is over,” said Jeremy Wright, the UK’s digital secretary. “Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.”

Britain’s plans reflect growing concern among policymakers about the role of social media in spreading harmful content. Facebook, Twitter and YouTube were criticised heavily after a gunman live-streamed graphic footage of deadly shootings at two mosques in New Zealand last month.

Last week, Australia passed some of the world’s toughest laws aimed at preventing the “weaponisation” of social media, which include the possibility of jailing technology executives and fining companies 10 per cent of global turnover for failing to remove “abhorrent violent content”. Singapore last week published far-reaching draft legislation to tackle “fake news”. The issue has also risen up the political agenda in the US.

But while other countries have started chipping away at the immunities that internet companies enjoy, Britain’s proposals appear to take a broader approach to laying down standards and expectations.

The plans have already provoked controversy. John Whittingdale, a former Conservative culture secretary, said the “duty of care” on tech firm bosses was well-meaning but risked “dragging British citizens into a draconian censorship regime instead”.

Rebecca Stimson, head of UK policy at Facebook, said the company had tripled the size of the team that identifies harmful content to 30,000 people but admitted there was “much more to do”.

“New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech,” she said. “These are complex issues to get right and we look forward to working with the government and parliament to ensure new regulations are effective.”

Mark Zuckerberg, Facebook’s chief executive, last month called for more global regulation of technology companies, though US lawmakers met his intervention with scepticism.

The Coalition for a Digital Economy, a lobby group for UK tech start-ups, said the rules would “benefit the largest platforms with the resources and legal might to comply — and restrict the ability of British start-ups to compete fairly”.

Additional reporting by Aliya Ram in London and Richard Waters in San Francisco
