Posts Tagged ‘Internet Censorship’

Read more eu.htm at MelonFarmers.co.uk

Article 13: Monitoring and filtering of internet content is unacceptable. Index on Censorship joined with 56 other NGOs to call for the deletion of Article 13 from the proposal on the Digital Single Market, which includes obligations on internet companies that would be impossible to respect without the imposition of excessive restrictions on citizens’ fundamental rights.

Dear President Juncker,
Dear President Tajani,
Dear President Tusk,
Dear Prime Minister Ratas,
Dear Prime Minister Borissov,
Dear Ministers,
Dear MEP Voss, Dear MEP Boni,

The undersigned stakeholders represent fundamental rights organisations.

Fundamental rights, justice and the rule of law are intrinsically linked and constitute core values on which the EU is founded. Any attempt to disregard these values undermines the mutual trust between member states required for the EU to function. Any such attempt would also undermine the commitments made by the European Union and national governments to their citizens.

Article 13 of the proposal on Copyright in the Digital Single Market includes obligations on internet companies that would be impossible to respect without the imposition of excessive restrictions on citizens’ fundamental rights.

Article 13 introduces new obligations on internet service providers that share and store user-generated content, such as video or photo-sharing platforms or even creative writing websites, including obligations to filter uploads to their services. Article 13 appears to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens’ communications if they are to have any chance of staying in business.

Article 13 contradicts existing rules and the case law of the Court of Justice. The Electronic Commerce Directive (2000/31/EC) regulates the liability of internet companies that host content on behalf of their users. Under the existing rules, a provider is obliged to remove any content that breaches copyright rules once it has been notified of that content.

Article 13 would force these companies to actively monitor their users’ content, which contradicts the ‘no general obligation to monitor’ rule in the Electronic Commerce Directive. The requirement to install a system for filtering electronic communications has twice been rejected by the Court of Justice, in Scarlet Extended (C-70/10) and SABAM v Netlog (C-360/10). A legislative provision requiring internet companies to install a filtering system would therefore almost certainly be rejected by the Court of Justice, because it would contravene the requirement that a fair balance be struck between the right to intellectual property on the one hand, and the freedom to conduct business and the right to freedom of expression, including the freedom to receive or impart information, on the other.

In particular, the requirement to filter content in this way would violate the freedom of expression set out in Article 11 of the Charter of Fundamental Rights. If internet companies are required to apply filtering mechanisms in order to avoid possible liability, they will do so. This will lead to excessive filtering and deletion of content, limiting the freedom to impart information on the one hand, and the freedom to receive information on the other.

If EU legislation conflicts with the Charter of Fundamental Rights, national constitutional courts are likely to be tempted to disapply it and we can expect such a rule to be annulled by the Court of Justice. This is what happened with the Data Retention Directive (2006/24/EC), when EU legislators ignored compatibility problems with the Charter of Fundamental Rights. In 2014, the Court of Justice declared the Data Retention Directive invalid because it violated the Charter.

Taking into consideration these arguments, we ask the relevant policy-makers to delete Article 13.

European Digital Rights (EDRi)
Access Info
ActiveWatch
Article 19
Associação D3 — Defesa dos Direitos Digitais
Associação Nacional para o Software Livre (ANSOL)
Association for Progressive Communications (APC)
Association for Technology and Internet (ApTI)
Association of the Defence of Human Rights in Romania (APADOR)
Associazione Antigone
Bangladesh NGOs Network for Radio and Communication (BNNRC)
Bits of Freedom (BoF)
BlueLink Foundation
Bulgarian Helsinki Committee
Center for Democracy & Technology (CDT)
Centre for Peace Studies
Centrum Cyfrowe
Coalizione Italiana Libertà e Diritti Civili (CILD)
Code for Croatia
COMMUNIA
Culture Action Europe
Electronic Frontier Foundation (EFF)
epicenter.works
Estonian Human Rights Centre
Freedom of the Press Foundation
Frënn vun der Ënn
Helsinki Foundation for Human Rights
Hermes Center for Transparency and Digital Human Rights
Human Rights Monitoring Institute
Human Rights Watch
Human Rights Without Frontiers
Hungarian Civil Liberties Union
Index on Censorship
International Partnership for Human Rights (IPHR)
International Service for Human Rights (ISHR)
Internautas
JUMEN
Justice & Peace
La Quadrature du Net
Media Development Centre
Miklos Haraszti (Former OSCE Media Representative)
Modern Poland Foundation
Netherlands Helsinki Committee
One World Platform
Open Observatory of Network Interference (OONI)
Open Rights Group (ORG)
OpenMedia
Panoptykon
Plataforma en Defensa de la Libertad de Información (PDLI)
Reporters without Borders (RSF)
Rights International Spain
South East Europe Media Organisation (SEEMO)
South East European Network for Professionalization of Media (SEENPM)
Statewatch
The Right to Know Coalition of Nova Scotia (RTKNS)
Xnet

Read more ow.htm at MelonFarmers.co.uk

The chairman of the media censor Ofcom has said she believes internet businesses such as Google and Facebook are publishers, and so should be regulated by the state. Patricia Hodgson also revealed that the board of Ofcom discussed how the internet could be regulated in the future at a strategy day last week, although she said this was ultimately a matter for the government.

Hodgson was speaking to MPs at a hearing of the digital, culture, media and sport committee. Asked about the rise of fake news and whether internet companies should face greater regulation, Hodgson said:

Those particular distribution systems [Facebook, Google, Twitter etc] are not within Ofcom’s responsibility but we feel very strongly about the integrity of news in this country and we are totally supportive of steps that should and need to be taken to improve matters.

My personal view is I see this as an issue that is finally being grasped — certainly within the EU, certainly within this country — and to my amazement and interest, being asked in the United States as a result of the potential Russian scandals. My personal view is that they are publishers but that is only my personal view, that is not an Ofcom view. As I said, Ofcom is simply concerned about the integrity of news and very supportive of the debate and the steps that are being taken.

Theresa May’s spokesman said Hodgson’s comments were a matter for her as an independent regulator, but indicated that ministers were sympathetic.

Sharon White, the chief executive of Ofcom, said she was wary of regulating internet companies. “We feel strongly that the platforms as publishers have got more responsibility to ensure the right content,” she said. “I don’t think it’s a question of regulation, which I think has a fuzzy boundary with censorship, but I think we feel strongly that the platforms ought to be doing more to ensure their content can be trusted.”

Read more gcnews.htm at MelonFarmers.co.uk

Home secretary Amber Rudd used her keynote speech at the Conservative party conference in Manchester to announce new laws which would see anyone caught repeatedly watching extremist content on the internet face up to 15 years in jail. At present, laws prohibiting material that could be useful to terrorists apply only to hardcopy or downloaded material. They do not apply to material that is not actually in one’s possession.

Security and digital rights experts have dumped on the home secretary’s proposal for the new laws, calling the move incredibly dangerous. Jim Killock, Executive Director of Open Rights Group, said:

This is incredibly dangerous. Journalists, anti-terror campaigners and others may need to view extremist content, regularly and frequently.

People tempted towards extremism may fear discussing what they have read or seen with anyone in authority. Even potential informants may be dissuaded from coming forward because they are already criminalised.

Martha Spurrier, director of Liberty, said:

This shocking proposal would make thoughtcrime a reality in the UK. Blurring the boundary between thought and action like this undermines the bedrock principles of our criminal justice system and will criminalise journalists, academics and many other innocent people.

We have a vast number of laws to tackle terror. The Government’s own reviewer of terror legislation Max Hill QC has said repeatedly that we need fewer, not more. A responsible Home Secretary would listen to the evidence — not grandstand for cheap political points at the expense of our fundamental freedoms.

In terms of how people would be identified — it’s hard for us to say without seeing more detail about the proposals. It’s likely identifying people would mean intrusive surveillance measures like those in the Investigatory Powers Act. In terms of enforceability — it’s likely to be really difficult because so many people will be caught up who have a legitimate reason and will then run that defence.

Shashank Joshi, a research fellow at the security think tank RUSI, told BuzzFeed News that Rudd’s proposal lacked specific detail and ran the risk of criminalising parts of some newspapers:

The risk is that [Rudd] runs into the same problems as her predecessor, Theresa May, did in 2015, when she sought to ban ‘extremism’, Joshi said. These are broad and nebulous terms, and they require very careful definition in order to avoid curbing legitimate free speech.

Otherwise we would risk criminalising some of the material that appears in certain mainstream newspaper columns.

Amber Rudd also decided to bang on about prohibiting encryption, even rather haplessly admitting that she did not understand how it worked.

Again campaigners were not impressed. Jim Killock, Executive Director of Open Rights Group, noted:

Amber Rudd needs to be realistic and clear about what she wants. It is no better saying she wishes to deny criminals the use of encryption than to say she wishes to deny them access to gravity. And if she succeeds in pushing them off major platforms, terrorists may end up being harder to detect.

Lib Dem Ed Davey also weighed in:

Encryption keeps us all secure online. It allows businesses to operate and thrive securely. Any weakening of encryption will ultimately make us all less safe. For if you weaken encryption, you run the risk of letting in the bad guys.

But this Conservative government can only see things in black and white — ignoring the realities of technology. The Home Secretary’s keynote speech called on tech giants to work together and, with government, to take down extremist content faster than ever before. My party completely supports her in that mission. The only way we will defeat this scourge is to band together — exchange information, invest in new technologies and present a united front.

Read more eu.htm at MelonFarmers.co.uk

Germany’s new internet censorship law came into force on 1st October. The law nominally targets ‘hate speech’, but massively high penalties, coupled with ridiculously short time scales for considering the issues, mean that anything the authorities don’t like will have to be immediately censored…just in case.

Passed earlier this summer, the law will financially penalize social media platforms, like Facebook, Twitter, and YouTube, if they don’t remove hate speech, as defined in Germany’s current criminal code, within 24 hours. They will be allowed up to a week to decide on comments that don’t fall into the blatant hate speech category. The top fine for not deleting hate speech within 24 hours is 50 million euros, though that would be for repeatedly breaking the law, not for individual cases.

Journalists, lawyers, and free-speech advocates have been voicing their concerns about the new law for months. They say that, to avoid fines, Facebook and others will err on the side of caution and just delete swathes of comments, including ones that are not illegal. They worry that social media platforms are being given the power to police and effectively shut down people’s right to free opinion and free speech in Germany.

The German Journalists Association (DJV) is calling on journalists and media organizations to start documenting all deletions of their posts on social media as of today. The borders of free speech must not be allowed to be drawn by profit-driven businesses, said DJV chairman Frank Überall in a recent statement.

Reporters Without Borders also expressed their strong opposition to the law when it was drafted in May, saying it would contribute to the trend to privatize censorship by delegating the duties of judges to commercial online platforms — as if the internet giants can replace independent and impartial courts.

Read more eu.htm at MelonFarmers.co.uk

Vera Jourova, the EU’s commissioner for justice, is resisting calls to follow Theresa May’s censorship lead and legislate to fine internet companies who fail to take down anything deemed hate speech. Jourova condemned Facebook as a highway for hatred, but the former Czech minister said she was not yet ready to promote EU-wide legislation similar to that being pursued in the UK, France and Germany. I would never say they [the UK, France and Germany] are wrong, but we all have the responsibility to react to this challenge with necessary and proportionate reaction, she told the Guardian.

In Britain, May is demanding that internet companies remove hateful content, in particular that aligned to terror organisations, within two hours of being discovered, or face financial sanctions. Under a law due to come into effect next month in Germany, social media companies face fines of up to £43m if they persistently fail to remove illegal content from their sites.

The commission is instead offering further guidance to internet companies on how to improve their record by complying with a voluntary code of conduct drawn up last year and so far adopted by Facebook, Twitter and YouTube.

Read more inus.htm at MelonFarmers.co.uk

EFF opposes the Senate’s Stop Enabling Sex Traffickers Act (S. 1693) (“SESTA”), and its House counterpart, the Allow States and Victims to Fight Online Sex Trafficking Act (H.R. 1865), because they would open up liability for Internet intermediaries–the ISPs, web hosting companies, websites, and social media platforms that enable users to share and access content online–by amending Section 230’s immunity for user-generated content (47 U.S.C. § 230). While both bills have the laudable goal of curbing sex trafficking, including of minor children, they would greatly weaken Section 230’s protections for online free speech and innovation.

Proponents of SESTA and its House counterpart view Section 230 as a broken law that prevents victims of sex trafficking from seeking justice. But Section 230 is not broken. First, existing federal criminal law allows federal prosecutors to go after bad online platforms, like Backpage.com, that knowingly play a role in sex trafficking. Second, courts have allowed civil claims against online platforms–despite Section 230’s immunity–when a platform had a direct hand in creating the illegal user-generated content.

Thus, before Congress fundamentally changes Section 230, lawmakers should ask whether these bills are necessary to begin with.

Why Section 230 Matters

Section 230 is the part of the Telecommunications Act of 1996 that provides broad immunity to Internet intermediaries from liability for the content that their users create or post (i.e., user-generated content or third-party content).

Section 230 can be credited with creating today’s Internet–with its abundance of unique platforms and services that enable a vast array of user-generated content. Section 230 has provided the legal buffer online entrepreneurs need to experiment with new ways for users to connect online–and this is just as important for today’s popular platforms with billions of users as it is for startups.

Congress’ rationale for crafting Section 230 is just as applicable today as when the law was passed in 1996: if Internet intermediaries are not largely shielded from liability for content their users create or post–particularly given their huge numbers of users–existing companies risk being prosecuted or sued out of existence, and potential new companies may not even enter the marketplace for fear of being prosecuted or sued out of existence (or because venture capitalists fear this).

This massive legal exposure would dramatically change the Internet as we know it: it would not only thwart innovation in online platforms and services, but free speech as well. As companies fall or fail to be launched in the first place, the ability of all Internet users to speak online would be disrupted. For those companies that remain, they may act in ways that undermine the open Internet. They may act as gatekeepers by preventing whole accounts from being created in the first place and pre-screening content before it is even posted. Or they may over-censor already posted content, pursuant to very strict terms of service in order to avoid the possibility of any user-generated content on their platforms and services that could get them into criminal or civil hot water. Again, this would be a disaster for online free speech. The current proposals to gut Section 230 raise the exact same problems that Congress dealt with in 1996.

By guarding online platforms from being held legally responsible for what thousands or millions or even billions of users might say online, Section 230 has protected online free speech and innovation for more than 20 years.

But Congress did not create blanket immunity. Section 230 reflects a purposeful balance that permits Internet intermediaries to be on the hook for their users’ content in certain carefully considered circumstances, and the courts have expanded upon these rules.

Section 230 Does Not Bar Federal Prosecutors From Targeting Criminal Online Platforms

Section 230 has never provided immunity to Internet intermediaries for violations of federal criminal law–like the federal criminal sex trafficking statute (18 U.S.C. § 1591). In 2015, Congress passed the SAVE Act, which amended Section 1591 to expressly include “advertising” as a criminal action. Congress intended to go after websites that host ads knowing that such ads involve sex trafficking. If these companies violate federal criminal law, they can be criminally prosecuted in federal court alongside their users who are directly engaged in sex trafficking.

In a parallel context, a federal judge in the Silk Road case correctly ruled that Section 230 did not provide immunity against federal prosecution to the operator of a website that hosted other people’s ads for illegal drugs.

By contrast, Section 230 does provide immunity to Internet intermediaries from liability for user-generated content under state criminal law. Congress deliberately chose not to expose these companies to criminal prosecutions in 50 different states for content their users create or post. Congress fashioned this balance so that federal prosecutors could bring to justice culpable companies while still ensuring that free speech and innovation could thrive online.

However, SESTA and its House counterpart would expose Internet intermediaries to liability under state criminal sex trafficking statutes. Although EFF understands the desire of state attorneys general to have more tools at their disposal to combat sex trafficking, such an amendment to Section 230 would upend the carefully crafted policy balance Congress embodied in Section 230.

More fundamentally, it cannot be said that Section 230’s current approach to criminal law has failed. A Senate investigation earlier this year and a recent Washington Post article both uncovered information suggesting that Backpage.com not only knew that their users were posting sex trafficking ads to their website, but that the company also took affirmative steps to help those ads get posted. Additionally, it has been reported that a federal grand jury has been empaneled in Arizona to investigate Backpage.com. Congress should wait and see what comes of these developments before it exposes Internet intermediaries to additional criminal liability.

Civil Litigants Are Not Always Without a Remedy Against Internet Intermediaries

Section 230 provides immunity to Internet intermediaries from liability for user-generated content under civil law–whether federal or state civil law. Again, Congress made this deliberate policy choice to protect online free speech and innovation.

Congress recognized that exposing companies to civil liability would put the Internet at risk even more than criminal liability because: 1) the standard of proof in criminal cases is “beyond a reasonable doubt,” whereas in civil cases it is merely “preponderance of the evidence,” making the likelihood higher that a company will lose a civil case; and 2) criminal prosecutors as agents of the government tend to exercise more restraint in filing charges, whereas civil litigants often exercise less restraint in suing other private parties, making the likelihood higher that a company will be sued in the first place for third-party content.

However, Section 230’s immunity against civil claims is not absolute. The courts have interpreted this civil immunity as creating a presumption of civil immunity that plaintiffs can rebut if they have evidence that an Internet intermediary did not simply host illegal user-generated content, but also had a direct hand in creating the illegal content. In a seminal 2008 decision, the U.S. Court of Appeals for the Ninth Circuit in Fair Housing Council v. Roommates.com held that a website that helped people find roommates violated fair housing laws by “inducing third parties to express illegal preferences.” The website had required users to answer profile questions related to personal characteristics that may not be used to discriminate in housing (e.g., gender, sexual orientation, and the presence of children in the home). Thus, the court held that the website lost Section 230 civil immunity because it was “directly involved with developing and enforcing a system that subjects subscribers to allegedly discriminatory housing practices.” Although EFF is concerned with some of the implications of the Roommates.com decision and its potential to chill online free speech and innovation, it is the law.

Thus, even without new legislation, victims of sex trafficking may bring civil cases against websites or other Internet intermediaries under the federal civil cause of action (18 U.S.C. § 1595), and overcome Section 230 civil immunity if they can show that the websites had a direct hand in creating ads for illegal sex. As mentioned above, a Senate investigation and a Washington Post article both strongly indicate that Backpage.com would not enjoy Section 230 civil immunity today.

SESTA and its House counterpart would expose Internet intermediaries to liability under federal and state civil sex trafficking laws. Removing Section 230’s rebuttable presumption of civil immunity would, as with the criminal amendments, disrupt the carefully crafted policy balance found in Section 230. Moreover, victims of sex trafficking can already bring civil suits against the pimps and “johns” who harmed them, as these cases against the direct perpetrators do not implicate Section 230.

Therefore, the bills’ amendments to Section 230 are not necessary–because Section 230 is not broken. Rather, Section 230 reflects a delicate policy balance that allows the most egregious online platforms to bear responsibility along with their users for illegal content, while generally preserving immunity so that free speech and innovation can thrive online.

By dramatically increasing the legal exposure of Internet intermediaries for user-generated content, the risk that these bills pose to the Internet as we know it is real. Visit our STOP SESTA campaign page and tell Congress to reject S. 1693 and H.R. 1865!

Read more eu.htm at MelonFarmers.co.uk

Social media giants Facebook, Google and Twitter will be forced to change their terms of service for EU users within a month, or face hefty fines from European authorities, an official said on Friday. The move was initiated after politicians decided to blame their unpopularity on ‘fake news’ rather than their own incompetence and their failure to listen to the will of the people.

The EU Commission sent letters to the three companies in December, stating that some of their terms of service were in breach of EU consumer protection laws, and urged them to do more to prevent fraud on their platforms. The EU has also urged social media companies to do more when it comes to assessing the suitability of user-generated content.

The letters, seen by Reuters, explained that the EU Commission also wanted clearer signposting for sponsored content, and that mandatory rights, such as cancelling a contract, could not be interfered with.

Germany said this week it is working on a new law that would see social media sites face fines of up to $53 million if they failed to strengthen their efforts to remove material that the EU does not like. German censorship minister Heiko Maas said:

There must be as little space for criminal incitement and slander on social networks as on the streets. Too few criminal comments are deleted and they are not erased quickly enough. The biggest problem is that networks do not take the complaints of their own users seriously enough…it is now clear that we must increase the pressure on social networks.