Posts Tagged ‘Internet Censorship’

Read more uk_internet_censors.htm at MelonFarmers.co.uk

open rights group 2016 logo There is every reason to believe that the government and opposition are moving towards a consensus on introducing a duty of care for social media companies to reduce harm and risk to their users. This may be backed by an Internet regulator, which might decide what kinds of mitigating action are appropriate to address the risks to users on different platforms.

This idea originated in a series of papers by Will Perrin and Lorna Woods and has been mentioned most recently in a Science and Technology Committee report and by NGOs including the children’s charity 5Rights.

A duty of care has some obvious merits: it could be grounded in objective, evidence-based risks and ensure that mitigations are proportionate to those risks. It could take some of the politicisation out of the current debate.

However, it also has obvious problems. For a start, it focuses on risk rather than process. It moves attention away from the fact that interventions regulate social media users just as much as platforms. It does not by itself tell us that free expression impacts will be considered, tracked or mitigated.

Furthermore, because a duty of care model pays so little attention to process, platform decisions that have nothing to do with risky content are not necessarily improved by better decision-making, independent appeals and so on. Rather, as has happened with German regulation, processes can remain unaffected when they fall outside a duty of care.

In practice, a lot of content which is disturbing or offensive is already banned on online platforms. Much of this would not be in scope under a duty of care, yet it is precisely this kind of material that users most often complain about, whether because it is not removed when they want it gone or because it is removed incorrectly. Any model of social media regulation needs to improve these issues, but a duty of care is unlikely to touch them.

There are very many questions about the kinds of risk involved, whether to individuals in general, to vulnerable groups, or to society at large, and about the evidence required to trigger action. The truth is that a duty of care, if cast sensibly and narrowly, will not satisfy many of the people who are demanding action; equally, if the threshold to act is low, it will quickly be seen to be a mechanism for wide-scale Internet censorship.

It is also a simple fact that many decisions that platforms make about legal content which is not risky are not the business of government to regulate. This includes decisions about what legal content is promoted and why. For this reason, we believe that a better approach might be to require independent self-regulation of major platforms across all of their content decisions. This requirement could be a legislative one, but the regulator would need to be independent of government and platforms.

Independent self-regulation has not been truly tried. Instead, voluntary agreements have filled its place. We should be cautious about moving straight to government regulation of social media and social media users. The government refuses to regulate the press in this way because it does not wish to be seen to be controlling print media. It is pretty curious that neither the media nor the government is spelling out the risks of state regulation of the speech of millions of British citizens.

That we are in this place is of course largely the fault of the social media platforms themselves, who have failed to understand the need for, and value of, transparent and accountable systems to ensure they are acting properly. That, however, just demonstrates the problem: politically weak platforms who have created monopoly positions built on data silos are now being sliced and diced at the policy table for their wider errors. It is imperative that as these government proposals progress we keep focus on the simple fact that it is end users whose speech will ultimately be regulated.

Read more eu.htm at MelonFarmers.co.uk

EFF logo Governments around the world are grappling with the threat of terrorism, but their efforts to curb the dissemination of terrorist content online all too often result in censorship. Over the past five years, we’ve seen a number of governments, from the US Congress to France and now the European Commission (EC), seek to implement measures that place an undue burden on technology companies to remove terrorist speech or face financial liability.

This is why EFF has joined forces with dozens of organizations to call on members of the European Parliament to oppose the EC’s proposed regulation, which would require companies to take down terrorist content within one hour. We’ve added our voice to two letters, one from Witness and another organized by the Center for Democracy and Technology, asking that MEPs consider the serious consequences that passing this regulation could have for human rights defenders and for freedom of expression.

We share the concerns of dozens of allies that requiring the use of proactive measures such as use of the terrorism hash database (already voluntarily in use by a number of companies) will restrict expression and have a disproportionate impact on marginalized groups. We know from years of experience that filters just don’t work.

Furthermore, the proposed requirement that companies must respond to reports of terrorist speech within an hour is, to put it bluntly, absurd. As the letter organized by Witness states, this regulation essentially forces companies to bypass due process and make rapid, unaccountable decisions on expression through automated means, and it does not reflect the realities of how violent groups recruit and share information online.

We echo these and other calls from defenders of human rights and civil liberties for MEPs to reject proactive filtering obligations and to refrain from enacting laws that will have unintended consequences for freedom of expression.

Read more gcnews.htm at MelonFarmers.co.uk

DCMS logo Wrangling in Whitehall has held up plans to set up a social media censor dubbed Ofweb, The Mail on Sunday reveals. The Government was due to publish a White Paper this winter on censorship of tech giants, but the Mail has learnt it is still far from ready. Culture Secretary Jeremy Wright said it would be published within a month, but a Cabinet source said that timeline was wholly unrealistic. Other senior Government sources went further and said the policy document is unlikely to surface before the spring.

Key details of how a new censor would work have yet to be decided, while funding from the Treasury has not yet been secured. Another problem is that some Ministers believe the proposed clampdown is too draconian and are preparing to try to block or water down the plan.

There are also concerns that technically difficult requirements would benefit the largest US companies, as smaller European companies and start-ups would not be able to afford the technology and development required.

The Mail on Sunday understands Jeremy Wright has postponed a visit to Facebook HQ in California to discuss the measures, as key details are still up in the air.

Read more uk_internet_censors.htm at MelonFarmers.co.uk

Arms of the British Government logo The U.K. government is rushing to complete a draft internet censorship law particularly targeting social media, but key details of the proposal have yet to be finalised amid concerns about stifling innovation. Government officials have been meeting with industry players, MPs, peers and other groups over the past month as they try to finalise their proposals.

People involved in those discussions said there is now broad agreement about the need to impose a new duty of care on big tech companies, as well as the need to back up their terms and conditions with the force of law.

A white paper is due to be published by the end of winter. But the Department for Digital, Culture, Media and Sport, which is partly responsible for writing up the new rules alongside the Home Office, is still deliberating over key aspects with just weeks to go until the government said it would unveil an outline of its proposals.

Among the sticking points are worries that regulation could stifle innovation in one of the U.K. economy’s most thriving sectors and concerns over whether it can keep pace with rapid technological change. Another is ensuring sufficient political support to pass the law despite likely opposition from parts of the Conservative Party. A third is deciding what regulatory agency would ultimately be responsible for enforcing the so-called Internet Safety Law.

A major unresolved question is what censorship body will be in charge of enforcing laws that could expose big tech companies to greater liability for hosted content, a prospect that firms including Google and Facebook have fought at the European level.

Several people who spoke to POLITICO said the government does not appear to have settled on who would be the censor. The communications regulator Ofcom is very much in the mix, although there are concerns that Ofcom is already getting too big.

Read more uk_internet_censors.htm at MelonFarmers.co.uk

dirty boyz logo gaystarnews.com has published an article outlining the dangers of porn viewers submitting their identity data and browsing history to age verifiers and porn websites. The article explains that the dangers for gay porn viewers are even more pronounced than for straight viewers, and illustrates this with an example:

David Bridle, the publisher of Dirty Boyz, announced in October that last month’s issue of the magazine would be its last. He said:

Following the Conservative government’s decision … to press ahead with new regulations forcing websites which make money from adult content to carry an age verification system … Dirtyboyz and its website dirtyboyz.xxx have made the decision to close.

The new age verification system will be mostly run by large adult content companies which themselves host major “Tube” style porn sites. It would force online readers of Dirtyboyz to publicly declare themselves.

Open Rights Group executive director, Jim Killock, told GSN the privacy of users needs protecting:

The issue with age verification systems is that they need to know it’s you. This means there’s a strong likelihood that it will basically track you and know what you’re watching. And that’s data that could be very harmful to people.

It could cause issues in relationships. Or it could see children outed to their parents. It could mean people are subjected to scams and blackmail if that data falls into criminal hands.

A spokesperson for the Department of Culture, Media and Sport (DCMS) told Gay Star News:

Pornographic websites and age verification services will be subject to the UK’s existing high standard of data protection legislation. The Data Protection Act 2018 provides a comprehensive and modern framework for data protection, with strong sanctions for malpractice and enforced by the Information Commissioner’s Office.

But this is bollox: the likes of Facebook and Google are allowed to sell browsing data, eg for targeted advertising, within the remit of GDPR. And targeted advertising alone could be enough to out porn viewers.

Read more eu.htm at MelonFarmers.co.uk

European Court of Justice The French Internet censor CNIL some time ago insisted that censorship required under the ‘right to be forgotten’ should be applied worldwide rather than limited to the EU. Google appealed against the order, leading to the case being referred to the European Court of Justice. Now an opinion from the court’s advocate general suggests the court will determine that the right to be forgotten does not apply worldwide. The opinion is not final, but the court often follows its advocate general when it hands down its ruling, which is expected at a later date.

CNIL wanted Google to remove links from Google.com rather than just from European versions of the site, such as Google.de and Google.fr. However, advocate general Maciej Szpunar warned that going further would be risky because the right to be forgotten always has to be balanced against other rights, including the legitimate public interest in accessing the information sought.

Szpunar said if worldwide de-referencing was allowed, European Union authorities would not be able to determine a right to receive information or balance it against other fundamental rights to data protection and to privacy.

And of course if France were allowed to censor information from the entire worldwide internet then why not China, Russia, Iran, and Saudi Arabia?

Read more uk_internet_censors.htm at MelonFarmers.co.uk

House of Commons logo The government has published the Online Pornography (Commercial Basis) Regulations 2019, which define which websites are caught by the upcoming internet porn censorship requirements and how social media websites are excused from the censorship. These new rules will come into force on the day that subsection (1) of section 14 of the Digital Economy Act 2017 comes fully into force. This is the section that introduces the porn censorship and age verification requirements. That date has not yet been announced, but the government has promised to give at least 3 months’ notice.

So websites where more than one-third of the content is pornographic, or which promote themselves as pornographic, will be obliged to verify the age of UK visitors. However, the law does not provide any specific protection for porn viewers’ data beyond the GDPR requirement to obtain nominal consent before using the data obtained for any purpose the websites may desire.

The BBFC and ICO will initiate a voluntary kitemark scheme under which porn websites and age verification providers can be audited as holding porn browsing data and identity details responsibly. This scheme has not yet produced any audited providers, so it seems a little unfair to demand that websites choose an age verification technology before the service providers have been checked out.

It all seems extraordinarily dangerous for porn users to submit their identity to adult websites or age verification providers without any protection under law. The BBFC has offered worthless calls for these companies to handle data responsibly, but many of the world’s major website companies have proven themselves untrustworthy, and hackers, spammers, scammers, blackmailers and identity thieves are hardly likely to take note of the BBFC’s fine words, eg its suggestions of ‘best practice’ when implementing age verification.

Neil Brown, the MD of law firm decoded.legal, told Sky News:

It is not clear how this age verification will be done, and whether it can be done without also having to prove identity, and there are concerns about the lack of specific privacy and security safeguards.

Even though this legislation has received quite a lot of attention, I doubt most internet users will be aware of what looks like an imminent requirement to obtain a ‘porn licence’ before watching pornography online.

The government’s own impact assessment recognises that it is not guaranteed to succeed, and I suspect we will see an increase in advertising from providers in the near future.

It would seem particularly stupid to open oneself up to the dangers of having one’s browsing and identity tracked, so surely it is time to get protected with a VPN, which enables one to continue accessing porn without having to hand over identity details.