Archive for the ‘Internet’ Category

Read more inus.htm at MelonFarmers.co.uk

Utah’s most prominent anti-porn lawmaker wants to give people the ability to sue pornographers in the hope that someone, somewhere will be able to prove that watching their product causes emotional and psychological damage. State Senator Todd Weiler received national attention for penning a 2016 resolution declaring a public health crisis caused by pornography. He not only wants to limit children’s and teenagers’ access to sexually explicit material, but he also believes pornographers should be held liable for the impacts their products have on adults. He said:

Right now porn is available without any warnings and labeling, without any protections online. This would just open the valve for a cause of action. Let these attorneys go after these cases.

If the Legislature passes his proposal, he said, he expects courts to initially reject claims that pornography causes real harm: But I think, eventually, the tide will turn.

Weiler is pinning his hopes on some sort of ludicrous analogy with tobacco use, where court challenges broke through big business’s defence of its deadly trade. But of course there simply aren’t millions of porn users dropping dead, and even anti-porn campaigners haven’t really come up with many harms beyond instilling bad attitudes to women.

Read more eu.htm at MelonFarmers.co.uk

France is considering appointing an official internet ombudsman to investigate complaints about online material in order to prevent excessive censorship and preserve free speech. A bill establishing a content qualification assessment procedure has been tabled in the French senate.

Dan Shefets, a Danish lawyer, explained one of the issues targeted by the bill:

ISPs face both penal and civil liability as soon as they are made aware of allegedly illicit content. One consequence of such liability is that smaller companies take down such content for fear of later sanctions.

The aim is to provide a simple procedure that will support firms operating online who are uncertain of their legal liabilities and to prevent over-zealous removal or censorship of material merely because it is the subject of a complaint. It could be copied by other European jurisdictions.

The idea is that a rapid response from the internet ombudsman would either order the material to be taken down or allow it to remain. As long as ISPs complied with the rulings, they would not face any fine or punishment.

Read more parl.htm at MelonFarmers.co.uk

Labour’s industrial spokesperson has called for the algorithms used by technology firms to be made transparent and subject to regulation. Shadow minister Chi Onwurah wants to see greater scrutiny of the algorithms that now control everything from the tailored news served to Facebook users to the order in which websites are presented in Google search. She said:

Algorithms aren’t above the law. The outcomes of algorithms are regulated — the companies which use them have to meet employment law and competition law. The question is, how do we make that regulation effective when we can’t see the algorithm?

She added in a letter to the Guardian:

Google and others argue their results are a mirror to society, not their responsibility. Google, Facebook and Uber need to take responsibility for the unintended consequences of the algorithms and machine learning that drive their profits. They can bring huge benefits and great apps, but we need a tech-savvy government to minimise the downside by opening up algorithms to regulation as well as legislating for greater consumer ownership of data and control of the advertising revenue it generates.

Labour’s industrial paper, due to be published after the Christmas break, will call for suggestions on how tech firms could be more closely supervised by government.

Read more eu.htm at MelonFarmers.co.uk

Leading German MPs have called for online ‘fake news’ campaigns to be made a crime. Patrick Sensburg, a senior MP in Angela Merkel’s Christian Democratic Union (CDU) party, said:

Targeted disinformation to destabilise a state should be a criminal offence. We need to consider whether there should be some sort of ‘test site’ that reveals and identifies propaganda pages.

The call was backed by his party colleague Ansgar Heveling, the chairman of the German parliament’s influential internal affairs committee, who said:

We last saw disinformation campaigns during the Cold War; now they have clearly been revived with new media opportunities. The law already offers options, such as slander or defamation. But I think a criminal sentence is more appropriate when it is a targeted campaign.

German intelligence has warned that Russia is seeking to influence next year’s German elections via propaganda distributed over the internet, particularly social media. Russia has been accused of deliberately using socialbots, automated software masquerading as real people, to promote ‘fake news’ stories on social media.

Mrs Merkel’s current coalition partner and main rival in next year’s elections, the Social Democratic Party (SPD), has also called for a cross-party alliance against ‘fake news’ stories. Sigmar Gabriel, the SPD leader, called for:

Democratic solidarity against manipulative socialbots and an alliance against ‘fake news’.

Thorsten Schäfer-Gümbel of the SPD added:

If there is any doubt about the authenticity of any information, we should refrain from attacking our political opponents with it.

Read more me_internet.htm at MelonFarmers.co.uk

Facebook, Microsoft, Twitter and YouTube are coming together to help curb the spread of terrorist content online. In a joint statement, the companies said: There is no place for content that promotes terrorism on our hosted consumer services. When alerted, we take swift action against this kind of content in accordance with our respective policies.

We have committed to the creation of a shared industry database of hashes (unique digital fingerprints) for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.

Our companies will begin sharing hashes of the most extreme and egregious terrorist images and videos we have removed from our services: content most likely to violate all of our respective companies’ content policies. Participating companies can add hashes of terrorist images or videos that are identified on one of our platforms to the database. Other participating companies can then use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.

As we continue to collaborate and share best practices, each company will independently determine what image and video hashes to contribute to the shared database. No personally identifiable information will be shared, and matching content will not be automatically removed. Each company will continue to apply its own policies and definitions of terrorist content when deciding whether to remove content when a match to a shared hash is found. And each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances. As part of this collaboration, we will all focus on how to involve additional companies in the future.

Throughout this collaboration, we are committed to protecting our users’ privacy and their ability to express themselves freely and safely on our platforms. We also seek to engage with the wider community of interested stakeholders in a transparent, thoughtful and responsible way as we further our shared objective to prevent the spread of terrorist content online while respecting human rights.
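For readers curious how such a hash-sharing scheme might work in practice, the sketch below is a rough illustration only: the joint statement does not specify the fingerprinting method, so a plain SHA-256 file hash stands in for it here, and the database, function names and review flow are all hypothetical.

```python
# Rough sketch of the shared hash database described in the joint statement.
# Assumptions not in the source: SHA-256 file hashes stand in for whatever
# fingerprinting scheme the companies actually use, and a Python set stands
# in for the shared industry database.
import hashlib

shared_hash_db = set()  # hypothetical shared database of content fingerprints


def fingerprint(path):
    """Return a hex digest for a media file (stand-in for a real fingerprint)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def contribute(path):
    """A participating company adds the hash of content it has already removed."""
    shared_hash_db.add(fingerprint(path))


def check_upload(path):
    """Flag an upload for human review if it matches a shared hash.

    Per the statement, a match does not trigger automatic removal: each
    company still reviews the content against its own policies.
    """
    return fingerprint(path) in shared_hash_db
```

In this reading, the database holds only fingerprints, not the media itself or any personal data, which is consistent with the companies’ assurance that no personally identifiable information is shared.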

Read more inam.htm at MelonFarmers.co.uk

Two dozen human rights and civil liberty groups have thrown their weight behind Google’s challenge of a Canadian court decision it warns could stifle freedom of expression around the world and lead to a diminished internet of the lowest common denominator. In an appeal heard on Tuesday in the Supreme Court of Canada, Google Inc took aim at a 2015 court decision that sought to censor search results beyond Canada’s borders.

In 2012, Canadian company Equustek won a judgment to have a company banned from selling a counterfeit version of Equustek’s product online. Google voluntarily removed more than 300 infringing URLs. But as more sites popped up, Equustek went back to court — this time seeking a worldwide ban. A court of appeal in British Columbia sided with Equustek in 2015, ordering Google to remove all of its search results linked to the company. It is this ruling that Google is now appealing.

The human rights groups are focusing on the question at the heart of the precedent-setting case: if one country can control what you see on the internet, what is to prevent other countries from doing the same? Gregg Leslie of the Reporters Committee for Freedom of the Press said:

It’s a worrisome trend, where we see individual countries trying to regulate the internet worldwide. And of course the consequences of that would mean that even countries like Russia and China could do the same thing and that will really affect the content available on the internet.

Read more eu.htm at MelonFarmers.co.uk

The European Commission has called on tech companies such as Twitter, Facebook, and other major names to implement more aggressive measures to censor online hate speech. The alternative is to face new EU legislation that would force the tech companies to censor more quickly. The Financial Times reports that a study commissioned by the EU justice commissioner, Vera Jourova, found that YouTube, Google, Microsoft, Twitter, and Facebook have struggled to comply with the voluntary hate speech code of conduct announced earlier this year, amid national security concerns and heightened racial tensions, mostly resulting from unpopular EU refugee policies.

In Germany, the government-led effort has been particularly aggressive. Germany is one of the European nations where the ongoing refugee crisis has reinvigorated the far right and sparked a backlash against government policy. Reuters reports that Heiko Maas, the German Justice Minister, recently said that Facebook should be made liable for any hate speech published on its social media platform and should be treated as a media company.

According to The Verge, Google, Twitter, Facebook and Microsoft agreed in a code of conduct announced in May to review and respond within 24 hours to the majority of hate speech complaints. However, only 40% of the recorded incidents have been reviewed within 24 hours, according to the commission’s report. That figure rose to 80% after 48 hours.

According to PCMag, two advocacy groups have criticized those efforts in France. In May, the two rights groups announced plans to sue Google, Twitter, and Facebook for failing to remove homophobic, racist and other hateful posts from their platforms. News articles have so far failed to point out that some of these groups may be making false claims about the material being censorable; perhaps the media companies were right not to remove all of the reported posts.

On Thursday, Dec. 8, EU justice ministers will meet to discuss the report’s findings.