Posts Tagged ‘Internet Censorship’

Read more gcnews.htm at MelonFarmers.co.uk

The UK’s digital and culture secretary, Matt Hancock, has ruled out creating a new internet censor targeting social media such as Facebook and Twitter. In an interview on the BBC’s Media Show, Hancock said he was not inclined in that direction and instead wanted to ensure existing regulation is fit for purpose. He said:

If you tried to bring in a new regulator you’d end up having to regulate everything. But that doesn’t mean that we don’t need to make sure that the regulations ensure that markets work properly and people are protected.

Meanwhile the Electoral Commission and the Department for Digital, Culture, Media and Sport select committee are now investigating whether Russian groups used the platforms to interfere in the Brexit referendum in 2016. The DCMS select committee is in the US this week to grill tech executives about their role in spreading fake news. In a committee hearing in Washington yesterday, YouTube’s policy chief said the site had found no evidence of Russian-linked accounts purchasing ads to interfere in the Brexit referendum.


Government outlines next steps to make the UK the safest place to be online

The Prime Minister has announced plans to review laws and make sure that what is illegal offline is illegal online as the Government marks Safer Internet Day.

The Law Commission will launch a review of current legislation on offensive online communications to ensure that laws are up to date with technology.

As set out in the Internet Safety Strategy Green Paper, the Government is clear that abusive and threatening behaviour online is totally unacceptable. This work will determine whether laws are effective enough in ensuring parity between the treatment of offensive behaviour that happens offline and online.

The Prime Minister has also announced:

  • That the Government will introduce a comprehensive new social media code of practice this year, setting out clearly the minimum expectations on social media companies

  • The introduction of an annual internet safety transparency report – providing UK data on offensive online content and what action is being taken to remove it.

Other announcements made today by Secretary of State for Digital, Culture, Media and Sport (DCMS) Matt Hancock include:

  • A new online safety guide for those working with children, including school leaders and teachers, to prepare young people for digital life

  • A commitment from major online platforms including Google, Facebook and Twitter to put in place specific support during election campaigns to ensure abusive content can be dealt with quickly — and that they will provide advice and guidance to Parliamentary candidates on how to remain safe and secure online

DCMS Secretary of State Matt Hancock said:

We want to make the UK the safest place in the world to be online and having listened to the views of parents, communities and industry, we are delivering on the ambitions set out in our Internet Safety Strategy.

Not only are we seeing if the law needs updating to better tackle online harms, we are moving forward with our plans for online platforms to have tailored protections in place – giving the UK public standards of internet safety unparalleled anywhere else in the world.

Law Commissioner Professor David Ormerod QC said:

There are laws in place to stop abuse but we’ve moved on from the age of green ink and poison pens. The digital world throws up new questions and we need to make sure that the law is robust and flexible enough to answer them.

If we are to be safe both on and off line, the criminal law must offer appropriate protection in both spaces. By studying the law and identifying any problems we can give government the full picture as it works to make the UK the safest place to be online.

The latest announcements follow the publication of the Government’s Internet Safety Strategy Green Paper last year which outlined plans for a social media code of practice. The aim is to prevent abusive behaviour online, introduce more effective reporting mechanisms to tackle bullying or harmful content, and give better guidance for users to identify and report illegal content. The Government will be outlining further steps on the strategy, including more detail on the code of practice and transparency reports, in the spring.

To support this work, people working with children including teachers and school leaders will be given a new guide for online safety, to help educate young people in safe internet use. Developed by the UK Council for Child Internet Safety (UKCCIS), the toolkit describes the knowledge and skills for staying safe online that children and young people should have at different stages of their lives.

Major online platforms including Google, Facebook and Twitter have also agreed to take forward a recommendation from the Committee on Standards in Public Life (CSPL) to provide specific support for Parliamentary candidates so that they can remain safe and secure while on these sites during election campaigns. These are important steps in safeguarding the free and open elections which are a key part of our democracy.

Notes

Included in the Law Commission’s scope for their review will be the Malicious Communications Act and the Communications Act. It will consider whether difficult concepts need to be reconsidered in the light of technological change – for example, whether the definition of who a ‘sender’ is needs to be updated.

The Government will bring forward an Annual Internet Safety Transparency report, as proposed in our Internet Safety Strategy green paper. The reporting will show:

  • the amount of harmful content reported to companies

  • the volume and proportion of this material that is taken down

  • how social media companies are handling and responding to complaints

  • how each online platform moderates harmful and abusive behaviour and the policies they have in place to tackle it.

Annual reporting will help to set baselines against which to benchmark companies’ progress, and encourage the sharing of best practice between companies.

The new social media code of practice will outline standards and norms expected from online platforms. It will cover:

  • The development, enforcement and review of robust community guidelines for the content uploaded by users and their conduct online

  • The prevention of abusive behaviour online and the misuse of social media platforms — including action to identify and stop users who are persistently abusing services

  • The reporting mechanisms that companies have in place for inappropriate, bullying and harmful content, and ensuring they have clear policies and performance metrics for taking this content down

  • The guidance social media companies offer to help users identify illegal content and contact online, and advise them on how to report it to the authorities, to ensure this is as clear as possible

  • The policies and practices companies apply around privacy issues.

Comment: Preventing protest

7th February 2018. See  article from indexoncensorship.org

The UK Prime Minister’s proposals for possible new laws to stop intimidation against politicians have the potential to prevent legal protests and free speech that are at the core of our democracy, says Index on Censorship. One hundred years after the suffragette demonstrations won the right for women to have the vote for the first time, a law that potentially silences angry voices calling for change would be a retrograde step.

No one should be threatened with violence, or subjected to violence, for doing their job, said Index chief executive Jodie Ginsberg. However, the UK already has a host of laws dealing with harassment of individuals both off and online that cover the kind of abuse politicians receive on social media and elsewhere. A loosely defined offence of ‘intimidation’ could cover a raft of perfectly legitimate criticism of political candidates and politicians — including public protest.


In a new campaign video, several Members of the European Parliament warn that the EU’s proposed mandatory upload filters pose a threat to freedom of speech. The new filters would function as censorship machines which are “completely disproportionate,” they say. The MEPs encourage the public to speak up, while they still can.

Through a series of new proposals, the European Commission is working hard to modernize EU copyright law. Among other things, it will require online services to do more to fight piracy.

These proposals have not been without controversy. Article 13 of the proposed Copyright Directive, for example, has been widely criticized as it would require online services to monitor and filter uploaded content.

This means that online services, which deal with large volumes of user-uploaded content, must use fingerprinting or other detection mechanisms — similar to YouTube’s Content-ID system — to block copyright infringing files.
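The detection mechanism described above can be sketched in miniature: derive a fingerprint for each upload, compare it against a database of fingerprints registered by rightsholders, and block on a match. The sketch below is purely illustrative; real systems such as YouTube’s Content-ID use perceptual audio/video fingerprints that survive re-encoding, whereas an exact SHA-256 hash, used here for simplicity, only catches byte-identical copies.

```python
import hashlib

# Hypothetical blocklist of fingerprints supplied by rightsholders.
# The entry below is the SHA-256 of the bytes b"test", standing in for
# a known infringing file.
KNOWN_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Derive a fingerprint for an uploaded file (here: a SHA-256 digest)."""
    return hashlib.sha256(data).hexdigest()

def filter_upload(data: bytes) -> bool:
    """Return True if the upload is allowed, False if it matches the blocklist."""
    return fingerprint(data) not in KNOWN_FINGERPRINTS

print(filter_upload(b"test"))      # → False (blocked: matches a known fingerprint)
print(filter_upload(b"original"))  # → True (allowed)
```

The critics’ point is visible even in this toy: the filter has no notion of context, so a quotation, parody or other fair use of the same bytes is blocked just as readily as piracy.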

The Commission believes that more stringent control is needed to support copyright holders. However, many legal scholars, digital activists, and members of the public worry that the proposals will violate the rights of regular Internet users.

In the European Parliament, there is fierce opposition as well. Today, six Members of the European Parliament (MEPs) from across the political spectrum released a new campaign video warning their colleagues and the public at large.

The MEPs warn that such upload filters would act as censorship machines, something they’ve made clear to the Council’s working group on intellectual property, where the controversial proposal was discussed today.

Imagine if every time you opened your mouth, computers controlled by big companies would check what you were about to say, and have the power to prevent you from saying it, Greens/EFA MEP Julia Reda says.

A new legal proposal would make this a reality when it comes to expressing yourself online: Every clip and every photo would have to be pre-screened by some automated ‘robocop’ before it could be uploaded and seen online, ALDE MEP Marietje Schaake adds.

Stop censorship machines!

Schaake notes that she has dealt with the consequences of upload filters herself. When she uploaded a recording of a political speech to YouTube, the site took it down without explanation. Until this day, the MEP still doesn’t know on what grounds it was removed.

These broad upload filters are completely disproportionate and a danger for freedom of speech, the MEPs warn. The automated systems make mistakes and can’t properly detect whether something’s fair use, for example.

Another problem is that the measures will be relatively costly for smaller companies, which puts them at a competitive disadvantage. “Only the biggest platforms can afford them — European competitors and small businesses will struggle,” ECR MEP Dan Dalton says.

The plans can still be stopped, the MEPs say. They are currently scheduled for a vote in the Legal Affairs Committee at the end of March, and the video encourages members of the public to raise their voices.

Speak out …while you can still do so unfiltered! S&D MEP Catherine Stihler says.


The Twitter account of German satirical magazine Titanic was blocked after it parodied anti-Muslim comments by AfD MP Beatrix von Storch. She accused police of trying to appease the barbaric, Muslim, rapist hordes of men by putting out a tweet in Arabic.

On Tuesday night, the magazine published a tweet parodying von Storch, saying:

The last thing that I want is mollified barbarian, Muslim, gang-raping hordes of men.

Titanic said on Wednesday its Twitter account had been blocked over the message, presumably as a result of a new law requiring social media sites to immediately block hateful comments on threat of massive fines. The law allows no time, and gives no economic reason, for assessing the merits of censorship claims, so social media companies are simply censoring everything on demand, just in case.


The UK government slipped out its impact assessment of the upcoming porn censorship law during the Christmas break. The new law requires porn websites to be blocked in the UK when they do not implement age verification. The measures are currently due to come into force in May, but the schedule seems tight, as even the rules for acceptable age verification systems have not yet been published.

The report contains some interesting costings and assessment of the expected harms to be inflicted on porn viewers and British adult businesses.

The document notes the unpopularity of the age verification requirements with a public consultation finding that 54% of respondents did not support the introduction of a law to require age verification.

However, the government has forged ahead, with the aim of stopping kids accessing porn on the grounds that such content could distress them or harm their development.

The government’s censorship rules will be enforced by the BBFC in its new role as the UK porn censor, although it prefers the descriptor ‘age-verification regulator’. The government states that the censorship job will initially be publicly funded, and it assumes this will cost £4.5 million, based upon a range of estimates from £1 million to £8 million.

The government has, bizarrely, assumed that the BBFC will ban just 1 to 60 sites in a year. The additional work for ISPs to block these sites is estimated at £100,000 to £500,000 per ISP. Larger companies will probably absorb this cost, but it will be an expensive problem for smaller ISPs that do not currently implement any blocking systems.
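For a sense of what such a blocking system involves at its simplest, here is a hypothetical sketch of the domain-level blocklist check an ISP’s resolver or proxy might perform; the domain names and the matching scheme are invented for illustration, and real deployments combine DNS, IP and URL-level techniques.

```python
# Hypothetical blocklist of domains the regulator has ordered blocked.
BLOCKED_DOMAINS = {"example-adult-site.test"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the blocklist."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check "a.b.c", then "b.c", then "c", so subdomains are also caught.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKED_DOMAINS:
            return True
    return False

print(is_blocked("cdn.example-adult-site.test"))  # → True
print(is_blocked("example.org"))                  # → False
```

Even this toy shows why costs fall unevenly: an ISP that already filters can add entries to an existing set, while one with no such machinery must build the lookup path into its network from scratch.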

Interestingly, the government notes that there won’t be any impact on UK adult businesses, notionally because they should already have implemented age verification under ATVOD and Ofcom censorship rules. In reality it will have little impact on UK businesses because they have already been decimated by the ATVOD and Ofcom rules and have mostly closed down or moved abroad.

The key section of the document summarising expected harms is as follows.

The policy option set out above also gives rise to the following risks:

  • Deterring adults from consuming content as a result of privacy/ fraud concerns linked to inputting ID data into sites and apps, also some adults may not be able to prove their age online;
  • Development of alternative payment systems and technological work-arounds could mean porn providers do not comply with new law, and enforcement is impossible as they are based overseas, so the policy goal would not be achieved;
  • The assumption that ISPs will comply with the direction of the regulator;
  • Reputational risks including Government censorship, over-regulation, freedom of speech and freedom of expression.
  • The potential for online fraud could rise significantly, as criminals adapt approaches in order to make use of false AV systems / spoof websites and access user data;
  • The potential ability of children, particularly older children, to bypass age verification controls is a risk. However, whilst no system will be perfect, and alternative routes such as virtual private networks and peer-to-peer sharing of content may enable some under-18s to see this content, Ofcom research indicates that the numbers of children bypassing network level filters, for example, is very low (ca. 1%).
  • Adults (and some children) may be pushed towards using ToR and related systems to avoid AV where they could be exposed to illegal and extreme material that they otherwise would never have come into contact with.

The list does not seem to include the potential for blackmail from user data sold by porn firms, or else stolen by hackers. And mischievously, politicians could be one of the groups most open to blackmail for money or favours.

Another notable omission is that the government does not seem overly concerned about mass VPN usage. I would have thought that the secret services, wanting to monitor terrorists, would not be pleased if a couple of million people started to use encrypted VPNs. Perhaps it shows that the likes of GCHQ can already see what goes on behind VPNs.


Britain’s security minister Ben Wallace has threatened technology firms such as Facebook, YouTube and Google with punitive taxation if they fail to cooperate with the government on fighting online extremism. Wallace said that Britain was spending hundreds of millions of pounds on human surveillance and de-radicalisation programmes because tech giants were failing to remove extremist content online quickly enough.

Wallace said the companies were ruthless profiteers, despite sitting on beanbags in T-shirts, who sold on details of their users to loan companies but would fail to give the same information to the government.

Because of encryption and because of radicalisation, the cost of that is heaped on law enforcement agencies, Wallace told the Sunday Times. I have to have more human surveillance. It’s costing hundreds of millions of pounds. If they [tech firms] continue to be less than co-operative, we should look at things like tax as a way of incentivising them or compensating for their inaction.

Because content is not taken down as quickly as they could do, we’re having to de-radicalise people who have been radicalised. That’s costing millions. They [the firms] can’t get away with that and we should look at all options, including tax.

Maybe it’s a good idea to extract a significantly higher tax take from the vast sums of money being siphoned out of the UK economy straight into the hands of American big business. But it seems a little hopeful to claim that quicker blocking of terrorist-related material will ‘solve’ the UK’s terrorism problem. One suspects that terrorism is rather more entrenched in society, and that it will continue pretty much unabated even if the government gets its way with quicker takedowns. There might even be scope for some very expensive legal bluff-calling, should expensive censorship measures be taken and the government’s blame conjecture prove provably wrong.


Terry Burns is set to become the new chairman of Ofcom in January 2018. As part of the approval process he was asked to appear before parliament’s Digital, Culture, Media and Sport Select Committee, where the topic of conversation was internet censorship, in particular censorship of social media. He was asked his thoughts on whether social media platforms such as Facebook should be recognised as publishers and therefore regulated. He responded:

I think it’s a very big issue. It’s becoming more and more difficult to distinguish between broadcasting and what one is capable of watching on the internet.

However, I think in many ways the main issue here is in terms of legislation and it is an issue for parliament rather than Ofcom.

I’ve been following this issue about platforms versus publishers… There must be a question of how sustainable that is.  I don’t want to take a position on that at this stage. As far as I’m concerned the rules under which we are working at the moment is that they are defined as platforms.

There will be an ongoing debate about that, for the moment that’s where they are. I find it difficult to believe that over time there isn’t going to be further examination of this issue.

Asked whether there was a role for Ofcom to monitor and check social media, Lord Burns said:

I don’t see any reason why if parliament wanted Ofcom to do that it shouldn’t [do so]… I’m not quite sure who else would do it.