Archive for the ‘Internet’ Category

Read more eu.htm at MelonFarmers.co.uk

Germany’s new internet censorship law came into force on 1st October. The law nominally targets ‘hate speech’, but massively high penalties, coupled with ridiculously short time scales allowed to consider the issues, mean that the law ensures that anything the authorities don’t like will have to be immediately censored…just in case.

Passed earlier this summer, the law will financially penalize social media platforms, like Facebook, Twitter, and YouTube, if they don’t remove hate speech, as defined in Germany’s current criminal code, within 24 hours. They will be allowed up to a week to decide on comments that don’t fall into the blatant hate speech category. The top fine for not deleting hate speech within 24 hours is 50 million euros, though that would be for repeatedly breaking the law, not for individual cases.

Journalists, lawyers, and free-speech advocates have been voicing their concerns about the new law for months. They say that, to avoid fines, Facebook and others will err on the side of caution and just delete swathes of comments, including ones that are not illegal. They worry that social media platforms are being given the power to police and effectively shut down people’s right to free opinion and free speech in Germany.

The German Journalists Association (DJV) is calling on journalists and media organizations to start documenting all deletions of their posts on social media as of today. The borders of free speech must not be allowed to be drawn by profit-driven businesses, said DJV chairman Frank Überall in a recent statement.

Reporters Without Borders also expressed their strong opposition to the law when it was drafted in May, saying it would contribute to the trend to privatize censorship by delegating the duties of judges to commercial online platforms — as if the internet giants can replace independent and impartial courts.

Read more eu.htm at MelonFarmers.co.uk

Vera Jourova, the EU’s commissioner for justice, is resisting calls to follow Theresa May’s censorship lead and legislate to fine internet companies who fail to take down anything deemed hate speech.

Vera Jourova condemned Facebook as a highway for hatred, but the former Czech minister said she was not yet ready to promote EU-wide legislation similar to that being pursued in the UK, France and Germany. I would never say they [the UK, France and Germany] are wrong, but we all have the responsibility to react to this challenge with a necessary and proportionate reaction, she told the Guardian.

In Britain, May is demanding that internet companies remove hateful content, in particular that aligned to terror organisations, within two hours of it being discovered, or face financial sanctions. Under a law due to come into effect next month in Germany, social media companies face fines of up to £43m if they persistently fail to remove illegal content from their sites.

The commission is instead offering further guidance to internet companies about how they can improve their record by complying with a voluntary code of conduct drawn up last year and so far adopted by Facebook, Twitter and YouTube.

Read more me_internet.htm at MelonFarmers.co.uk

Facebook touts its partnership with outside fact-checkers as a key prong in its fight against fake news, but a major new Yale University study finds that fact-checking and then tagging inaccurate news stories on social media doesn’t work.

The study, reported for the first time by POLITICO, found that tagging false news stories as disputed by third party fact-checkers has only a small impact on whether readers perceive their headlines as true. Overall, the existence of disputed tags made participants just 3.7 percentage points more likely to correctly judge headlines as false, the study said.

The researchers also found that, for some groups, particularly Trump supporters and adults under 26, flagging bogus stories could actually end up increasing the likelihood that users will believe fake news. This is because not all fake stories are fact checked, and the absence of a warning tends to add to the credibility of an unchecked, but fake, story.

Researchers Gordon Pennycook & David G. Rand of Yale University write in their abstract:

Assessing the effect of disputed warnings and source salience on perceptions of fake news accuracy

What are effective techniques for combatting belief in fake news? Tagging fake articles with Disputed by 3rd party fact-checkers warnings and making articles’ sources more salient by adding publisher logos are two approaches that have received large-scale rollouts on social media in recent months.

Here we assess the effect of these interventions on perceptions of accuracy across seven experiments [involving 7,534 people].

With respect to disputed warnings, we find that tagging articles as disputed did significantly reduce their perceived accuracy relative to a control without tags, but only modestly (d=.20, 3.7 percentage point decrease in headlines judged as accurate).

Furthermore, we find a backfire effect — particularly among Trump supporters and those under 26 years of age — whereby untagged fake news stories are seen as more accurate than in the control.

We also find a similar spillover effect for real news, whose perceived accuracy is increased by the presence of disputed tags on other headlines.

With respect to source salience, we find no evidence that adding a banner with the logo of the headline’s publisher had any impact on accuracy judgments whatsoever.

Together, these results suggest that the currently deployed approaches are not nearly enough to effectively undermine belief in fake news, and new (empirically supported) strategies are needed.

Presented with the study, a Facebook spokesperson questioned the researchers’ methodology, pointing out that the study was performed via Internet survey, not on Facebook’s platform, and added that fact-checking is just one part of the company’s efforts to combat fake news. Those include disrupting financial incentives for spammers, building new products and helping people make more informed choices about the news they read, trust and share, the spokesperson said.

The Facebook spokesperson added that the articles created by the third party fact-checkers have uses beyond creating the disputed tags. For instance, links to the fact checks appear in related article stacks beside other similar stories that Facebook’s software identifies as potentially false. They are powering other systems that limit the spread of news hoaxes and misinformation, the spokesperson said.

Read more inus.htm at MelonFarmers.co.uk

The Electronic Frontier Foundation (EFF) opposes the Senate’s Stop Enabling Sex Trafficking Act (S. 1693) (“SESTA”), and its House counterpart, the Allow States and Victims to Fight Online Sex Trafficking Act (H.R. 1865), because they would open up liability for Internet intermediaries–the ISPs, web hosting companies, websites, and social media platforms that enable users to share and access content online–by amending Section 230’s immunity for user-generated content (47 U.S.C. § 230). While both bills have the laudable goal of curbing sex trafficking, including of minor children, they would greatly weaken Section 230’s protections for online free speech and innovation.

Proponents of SESTA and its House counterpart view Section 230 as a broken law that prevents victims of sex trafficking from seeking justice. But Section 230 is not broken. First, existing federal criminal law allows federal prosecutors to go after bad online platforms, like Backpage.com, that knowingly play a role in sex trafficking. Second, courts have allowed civil claims against online platforms–despite Section 230’s immunity–when a platform had a direct hand in creating the illegal user-generated content.

Thus, before Congress fundamentally changes Section 230, lawmakers should ask whether these bills are necessary to begin with.

Why Section 230 Matters

Section 230 is the part of the Telecommunications Act of 1996 that provides broad immunity to Internet intermediaries from liability for the content that their users create or post (i.e., user-generated content or third-party content).

Section 230 can be credited with creating today’s Internet–with its abundance of unique platforms and services that enable a vast array of user-generated content. Section 230 has provided the legal buffer online entrepreneurs need to experiment with new ways for users to connect online–and this is just as important for today’s popular platforms with billions of users as it is for startups.

Congress’ rationale for crafting Section 230 is just as applicable today as when the law was passed in 1996: if Internet intermediaries are not largely shielded from liability for content their users create or post–particularly given their huge numbers of users–existing companies risk being prosecuted or sued out of existence, and potential new companies may not even enter the marketplace for fear of being prosecuted or sued out of existence (or because venture capitalists fear this).

This massive legal exposure would dramatically change the Internet as we know it: it would not only thwart innovation in online platforms and services, but free speech as well. As companies fall or fail to be launched in the first place, the ability of all Internet users to speak online would be disrupted. For those companies that remain, they may act in ways that undermine the open Internet. They may act as gatekeepers by preventing whole accounts from being created in the first place and pre-screening content before it is even posted. Or they may over-censor already posted content, pursuant to very strict terms of service in order to avoid the possibility of any user-generated content on their platforms and services that could get them into criminal or civil hot water. Again, this would be a disaster for online free speech. The current proposals to gut Section 230 raise the exact same problems that Congress dealt with in 1996.

By guarding online platforms from being held legally responsible for what thousands or millions or even billions of users might say online, Section 230 has protected online free speech and innovation for more than 20 years.

But Congress did not create blanket immunity. Section 230 reflects a purposeful balance that permits Internet intermediaries to be on the hook for their users’ content in certain carefully considered circumstances, and the courts have expanded upon these rules.

Section 230 Does Not Bar Federal Prosecutors From Targeting Criminal Online Platforms

Section 230 has never provided immunity to Internet intermediaries for violations of federal criminal law–like the federal criminal sex trafficking statute (18 U.S.C. § 1591). In 2015, Congress passed the SAVE Act, which amended Section 1591 to expressly include “advertising” as a criminal action. Congress intended to go after websites that host ads knowing that such ads involve sex trafficking. If these companies violate federal criminal law, they can be criminally prosecuted in federal court alongside their users who are directly engaged in sex trafficking.

In a parallel context, a federal judge in the Silk Road case correctly ruled that Section 230 did not provide immunity against federal prosecution to the operator of a website that hosted other people’s ads for illegal drugs.

By contrast, Section 230 does provide immunity to Internet intermediaries from liability for user-generated content under state criminal law. Congress deliberately chose not to expose these companies to criminal prosecutions in 50 different states for content their users create or post. Congress fashioned this balance so that federal prosecutors could bring to justice culpable companies while still ensuring that free speech and innovation could thrive online.

However, SESTA and its House counterpart would expose Internet intermediaries to liability under state criminal sex trafficking statutes. Although EFF understands the desire of state attorneys general to have more tools at their disposal to combat sex trafficking, such an amendment to Section 230 would upend the carefully crafted policy balance Congress embodied in Section 230.

More fundamentally, it cannot be said that Section 230’s current approach to criminal law has failed. A Senate investigation earlier this year and a recent Washington Post article both uncovered information suggesting that Backpage.com not only knew that their users were posting sex trafficking ads to their website, but that the company also took affirmative steps to help those ads get posted. Additionally, it has been reported that a federal grand jury has been empaneled in Arizona to investigate Backpage.com. Congress should wait and see what comes of these developments before it exposes Internet intermediaries to additional criminal liability.

Civil Litigants Are Not Always Without a Remedy Against Internet Intermediaries

Section 230 provides immunity to Internet intermediaries from liability for user-generated content under civil law–whether federal or state civil law. Again, Congress made this deliberate policy choice to protect online free speech and innovation.

Congress recognized that exposing companies to civil liability would put the Internet at risk even more than criminal liability because: 1) the standard of proof in criminal cases is “beyond a reasonable doubt,” whereas in civil cases it is merely “preponderance of the evidence,” making the likelihood higher that a company will lose a civil case; and 2) criminal prosecutors as agents of the government tend to exercise more restraint in filing charges, whereas civil litigants often exercise less restraint in suing other private parties, making the likelihood higher that a company will be sued in the first place for third-party content.

However, Section 230’s immunity against civil claims is not absolute. The courts have interpreted this civil immunity as creating a presumption of civil immunity that plaintiffs can rebut if they have evidence that an Internet intermediary did not simply host illegal user-generated content, but also had a direct hand in creating the illegal content. In a seminal 2008 decision, the U.S. Court of Appeals for the Ninth Circuit in Fair Housing Council v. Roommates.com held that a website that helped people find roommates violated fair housing laws by “inducing third parties to express illegal preferences.” The website had required users to answer profile questions related to personal characteristics that may not be used to discriminate in housing (e.g., gender, sexual orientation, and the presence of children in the home). Thus, the court held that the website lost Section 230 civil immunity because it was “directly involved with developing and enforcing a system that subjects subscribers to allegedly discriminatory housing practices.” Although EFF is concerned with some of the implications of the Roommates.com decision and its potential to chill online free speech and innovation, it is the law.

Thus, even without new legislation, victims of sex trafficking may bring civil cases against websites or other Internet intermediaries under the federal civil cause of action (18 U.S.C. § 1595), and overcome Section 230 civil immunity if they can show that the websites had a direct hand in creating ads for illegal sex. As mentioned above, a Senate investigation and a Washington Post article both strongly indicate that Backpage.com would not enjoy Section 230 civil immunity today.

SESTA and its House counterpart would expose Internet intermediaries to liability under federal and state civil sex trafficking laws. Removing Section 230’s rebuttable presumption of civil immunity would, as with the criminal amendments, disrupt the carefully crafted policy balance found in Section 230. Moreover, victims of sex trafficking can already bring civil suits against the pimps and “johns” who harmed them, as these cases against the direct perpetrators do not implicate Section 230.

Therefore, the bills’ amendments to Section 230 are not necessary–because Section 230 is not broken. Rather, Section 230 reflects a delicate policy balance that allows the most egregious online platforms to bear responsibility along with their users for illegal content, while generally preserving immunity so that free speech and innovation can thrive online.

By dramatically increasing the legal exposure of Internet intermediaries for user-generated content, the risk that these bills pose to the Internet as we know it is real. Visit our STOP SESTA campaign page and tell Congress to reject S. 1693 and H.R. 1865!

Read more inus.htm at MelonFarmers.co.uk

US Catholics have become an early victim of newly introduced censorship measures from YouTube, presumably because their teaching is considered offensive due to politically incorrect attitudes towards gays and abortion. Catholic Online writes:

More media organizations are criticizing YouTube’s increasingly oppressive soft censorship policies which are now eliminating mainstream news reports from the video sharing network. Many content creators on YouTube are losing millions in revenue as the Google-owned firm reduces and cuts off payments in pursuit of profits and control.

YouTube is censoring content through various indirect means even if that content does not violate any terms of service. The Google-owned firm is removing content that it deems inappropriate or offensive, and is taking cues from the Southern Poverty Law Center. The result seems to be a broad labeling of content, and the suppression of even mainstream news. Many of Catholic Online’s bible readings have been caught up in YouTube’s web of suppression, despite containing no commentary or message other than the reading of the scriptures.

YouTube is not a government agency but a private platform, so it is free to ban or restrict content as it pleases. Therefore, its policies, no matter how arbitrary, are not true censorship. However, the firm is practicing what some call soft censorship.

Soft censorship is any kind of activity that suppresses speech, particularly that which is true and accurate. It takes many forms. For example, broadcasting celebrity gossip in place of news is a form of soft censorship. Placing real news lower in search results, preventing content from being shared on social media, or depriving media outlets of ad revenue for reporting on certain topics, are all common forms of soft censorship.

For some unknown reason, Catholic Online has also been targeted by these policies. Saints videos and daily readings are the most common targets. None of this content can be considered objectionable by any means, and none of it infringes on YouTube’s terms and conditions. It is suspected that anti-Christian bigotry, such as that promoted by liberal extremist organizations like the Southern Poverty Law Center, is to blame.

The problem for content creators and media organizations is that there are few places for them to go. Most video viewing takes place on YouTube, and there are no video hosting sites as well known and widely used as YouTube. Other sites also restrict content and some don’t share revenues with content creators. This makes YouTube a monopoly; they are literally the only show in town.

The time has come for governments around the world to recognize that Facebook, Google, and YouTube control the public forum. If freedom of speech is to be protected, then these firms must be compelled to abide by free speech rules.

Read more me_internet.htm at MelonFarmers.co.uk

YouTube has introduced a new tier of censorship designed to restrict the audience for videos deemed to be inappropriate or offensive to some audiences.

The site is now putting videos into a limited state if they are deemed controversial enough to be considered objectionable, but not hateful, pornographic or violent enough to be banned altogether.

This policy was announced several months ago but has come into force in the past week, prompting anger among members of the YouTube community.

YouTube defines Limited Videos as follows:

Our Community Guidelines prohibit hate speech that either promotes violence or has the primary purpose of inciting hatred against individuals or groups based on certain attributes. YouTube also prohibits content intended to recruit for terrorist organizations, incite violence, celebrate terrorist attacks, or otherwise promote acts of terrorism. Some borderline videos, such as those containing inflammatory religious or supremacist content without a direct call to violence or a primary purpose of inciting hatred, may not cross these lines for removal. Following user reports, if our review teams determine that a video is borderline under our policies, it may have some features disabled.

These videos will remain available on YouTube, but will be placed behind a warning message, and some features will be disabled, including comments, suggested videos, and likes. These videos are also not eligible for ads.

Having features disabled on a video will not create a strike on your account.

Videos which are put into a limited state cannot be embedded on other websites. They also cannot be easily published on social media using the usual share buttons and other users cannot comment on them. Crucially, the person who made the video will no longer receive any payment.

Earlier this week, Julian Assange wrote:

‘Controversial’ but contract-legal videos [which do not break YouTube’s terms and conditions] cannot be liked, embedded or earn [money from advertising revenue].

What’s interesting about the new method deployed is that it is a clear attempt at social engineering. It isn’t just turning off the ads. It’s turning off the comments, embeds, etc too. Everything possible to strangle the reach without deleting it.

Read more news.htm at MelonFarmers.co.uk

The Director of Public Prosecutions has announced plans for more prosecutions and harsher punishments for online insults. Prosecutors will be ordered to treat online hate crime as seriously as offences carried out face to face.

Alison Saunders said the Crown Prosecution Service will seek stiffer penalties for abuse on Twitter, Facebook and other social media platforms. Saunders says the crackdown is needed because online abuse can lead to the sort of extremist hate seen in Charlottesville in the United States last weekend, which left one person dead.

Writing in the Guardian, Saunders said:

Left unchallenged, even low-level offending can subsequently fuel the kind of dangerous hostility that has been plastered across our media in recent days. That is why countering it is a priority for the CPS.

The new policy documents cover different strands of hate crime: racist and religious; disability; and homophobic, biphobic and transphobic. They also say that victims of biphobic hate crime, aimed at bisexual people, have different needs and experiences compared to those suffering anti-gay and transphobic offences.