Archive for the ‘Internet Social Media’ Category

Read more uk_internet_censors.htm at MelonFarmers.co.uk

Digital ID was discussed by the Commons Science and Technology Committee on 13th November 2018.

Carol Monaghan, Committee Member: At the moment, platforms such as Facebook require age verification, but that simply means entering a date of birth, and children can change that. If you are planning to extend that, or look at how it might apply to other social media, how confident are you that the age verification processes would be robust enough to cope?

Margot James MP, Minister for Digital and the Creative Industries: At the moment, I do not think that we would be, but age verification tools and techniques are developing at pace, and we keep abreast of developments. At the moment, we think we have a robust means by which to verify people’s age at 18; the challenge is to develop tools that can verify people’s age at a younger age, such as 13. Those techniques are not robust enough yet, but a lot of technological research is going on, and I am reasonably confident that, over the next few years, there will be robust means by which to identify age at younger than 18.

Stephen Metcalfe, Committee Member: My question is on the same point about how we can create a verification system that you cannot just get around by putting in a fake date of birth. I assume that the verification for 18-plus is based around some sort of credit card, or some sort of bank card. The issue there is that, potentially, someone could borrow another person’s card, because it does not require secret information–it requires just the entering of the 16-digit number, or something. But on the younger ages, given that we are talking about digital life and digital literacy, do you think that the time has come to talk about having a digital verified ID that young people get and which you cannot fiddle with–a bit like an online ID card, or digital passport? I know that that idea has been around a little while.

Margot James: It has. I do think that the time has come when that is required, but there are considerable hoops to go through before we can arrive at a system of digital identity, including someone’s age, that is acknowledged, respected and entered into by the vast majority of people. As you probably know, the Government have committed in prior years to the Verify system, which we think has got as far as it can go, which is not far enough. We have a team of excellent policy officials in the DCMS looking afresh at other techniques of digital identity. It is a live issue and there have been many attempts at it; there is frustration, and not everybody would agree with what I have said. But you asked my view, and that is it–and the Department is focusing a lot of energy on that area of research.

Chair: Can you imagine that your legislation, when it comes, could include the concept, to which Stephen referred, of a digital identity for children?

Margot James: That is a long way off–it is not next year, and probably not the year after, given how much consultation it would require. The new work has only just started, so it is not a short-term solution, and I do not expect to see it as part of our White Paper that we publish this winter. That does not mean to say that we do not think that it is important; we are working towards getting a system that we think could have public support.

To go slightly beyond the terms of your inquiry, with regard to the potential for delivering a proper digital relationship between citizen and Government through delivery of public services, a digital identity system will be important. We feel that public service delivery has a huge amount to gain from the digital solution.

Bill Grant, Committee Member: I am pleased to note that the Government are addressing issues that have been with us for nearly a decade–the dark side of social media and the risk to children, not least the risk that we all experience as parliamentarians. Can you offer any reason why it has taken so long for Government to begin that process? Would you be minded to accelerate the process to address the belated start?

Margot James: One reason is that progress has been made by working with technology companies. The Home Office has had considerable success in working with technology companies to eradicate terrorist content online. To a lesser but still significant extent, progress has also been made on a voluntary basis with the reduction in child abuse images and child sexual exploitation. I said “significant,” but this is a Home Office area–I am working closely with the Home Office, because the White Paper is being developed in concert with it–and it is clear that it does not feel that anything like enough is being done through voluntary measures.

Chair: Do you feel that?

Margot James: Yes, I do. A lot of the highly dangerous material has gone under the radar in the dark web, but too much material is still available, apparently, on various platforms, and it takes them too long to remove it.

Chair: Ultimately, the voluntary approach is not working adequately.

Margot James: Exactly–that is our view now. I was trying to address the hon. Member’s question about why it had taken a long time. Partly it is that technology changes very fast, but partly it is because voluntary engagement was delivering; it has impressed itself on us in the last 12 months, though, that it is not delivering fast enough or adequately. We have not even talked about the vast range of other harms, some of which are illegal and some legal but harmful, and some in the grey area in between, where decidedly inadequate progress has been made as a result of the many instances of voluntary engagement, not just between the Government and the technology sector but between charitable organisations and non-governmental organisations, including the police.

Bill Grant: It was envisaged earlier that there would be some sort of regulator or ombudsman, but, over and above that, Martha Lane Fox’s think-tank proposed the establishment of an office for responsible technology, which would be overarching, in whatever form the regulation comes. Would you be minded to take that on board?

Margot James: That is one proposal that we will certainly look at, yes. Martha Lane Fox does a lot of very good work in this area, has many years’ experience of it, and runs a very good organisation in the “tech for good” environment, so her proposals are well worth consideration. That is one reason why I was unable to give a specific answer earlier, because there are good ideas, and they all need proper evaluation. When the White Paper is published, we will engage with you and any other interested party, and invite other organisations to contribute to our thinking, prior to the final legislation being put before Parliament and firming up the non-legislative measures, which are crucial. We all know that legislation does not solve every ill, and it is crucial that we continue the very good work being done by many internet companies to improve the overall environment.

Read more eu.htm at MelonFarmers.co.uk

YouTube has warned its video creators about the likely effect of the EU’s upcoming censorship machines:

YouTube’s growing creative economy is at risk, as the EU Parliament voted on Article 13, copyright legislation that could drastically change the internet that you see today.

Article 13 as written threatens to shut down the ability of millions of people — from creators like you to everyday users — to upload content to platforms like YouTube. And it threatens to block users in the EU from viewing content that is already live on the channels of creators everywhere. This includes YouTube’s incredible video library of educational content, such as language classes, physics tutorials and other how-to’s.

This legislation poses a threat to both your livelihood and your ability to share your voice with the world. And, if implemented as proposed, Article 13 threatens hundreds of thousands of jobs, European creators, businesses, artists and everyone they employ. The proposal could force platforms, like YouTube, to allow only content from a small number of large companies. It would be too risky for platforms to host content from smaller original content creators, because the platforms would now be directly liable for that content. We realize the importance of all rights holders being fairly compensated, which is why we built Content ID and a platform to pay out all types of content owners. But the unintended consequences of Article 13 will put this ecosystem at risk. We are committed to working with the industry to find a better way. This language could be finalized by the end of the year, so it’s important to speak up now.

Please take a moment to learn more about how it could affect your channel and take action immediately. Tell the world through social media (#SaveYourInternet) and your channel why the creator economy is important and how this legislation will impact you.

Read more inus.htm at MelonFarmers.co.uk

After the recent censorship purge of over 800 independent media outlets on Facebook, the United States Supreme Court has agreed to take a case that could change free speech on the Internet and could have ramifications for any future attempts at similar purges. The case, Manhattan Community Access Corp. v. Halleck, No. 17-702, will decide whether the private operator of a public access network is considered a state actor.

The case could affect how companies like Facebook, Twitter, Instagram, Google and YouTube are governed. If the Court were to issue a far-reaching ruling, it could subject such companies to First Amendment lawsuits and force them to allow a much broader scope of free speech from their users.

DeeDee Halleck and Jesus Melendez claimed that they were fired from Manhattan Neighborhood Network for speaking critically of the network. And, though the case does not involve the Internet giants, it could create a ruling that expands the First Amendment beyond the government.

Read more inus.htm at MelonFarmers.co.uk

The recent FOSTA law in the US forces internet companies to censor anything to do with legal, adult and consensual sex work. It holds them liable for abetting sex traffickers even when they can’t possibly distinguish the trafficking from the legal sex work. The only solution is therefore to ban the use of their platforms for any personal hook-ups, and indeed adult sex work websites have been duly cleansed from the US internet. But now a woman is claiming that Facebook facilitated trafficking, when of course it’s nigh on impossible for Facebook to detect such use of its networking systems. But of course that’s no excuse under FOSTA.

According to a new lawsuit filed by an unnamed woman in Houston, Texas, Facebook’s morally bankrupt corporate culture is to blame for permitting a sex trafficker to force her into prostitution after beating and raping her. She claims Facebook should be held responsible when a user on the social media platform sexually exploits another Facebook user. The lawsuit says that Facebook should have warned the woman, who was 15 years old at the time she was victimized, that its platform could be used by sex traffickers to recruit and groom victims, including children.

The lawsuit also names Backpage.com, which, according to a Reuters report, hosted pictures of the woman taken by the man who victimized her after he uploaded them to the site.

The classified advertising site Backpage was shut down by federal prosecutors in April of this year.

Read more aw_privacy.htm at MelonFarmers.co.uk

Add “a phone number I never gave Facebook for targeted advertising” to the list of deceptive and invasive ways Facebook makes money off your personal information. Contrary to user expectations and Facebook representatives’ own previous statements, the company has been using contact information that users explicitly provided for security purposes–or that users never provided at all–for targeted advertising.

A group of academic researchers from Northeastern University and Princeton University, along with Gizmodo reporters, have used real-world tests to demonstrate how Facebook’s latest deceptive practice works. They found that Facebook harvests user phone numbers for targeted advertising in two disturbing ways: two-factor authentication (2FA) phone numbers, and shadow contact information.

Two-Factor Authentication Is Not The Problem

First, when a user gives Facebook their number for security purposes–to set up 2FA, or to receive alerts about new logins to their account–that phone number can become fair game for advertisers within weeks. (This is not the first time Facebook has misused 2FA phone numbers.)

But the important message for users is: this is not a reason to turn off or avoid 2FA. The problem is not with two-factor authentication. It’s not even a problem with the inherent weaknesses of SMS-based 2FA in particular. Instead, this is a problem with how Facebook has handled users’ information and violated their reasonable security and privacy expectations.

There are many types of 2FA. SMS-based 2FA requires a phone number, so you can receive a text with a second-factor code when you log in. Other types of 2FA–like authenticator apps and hardware tokens–do not require a phone number to work. However, until just four months ago, Facebook required users to enter a phone number to turn on any type of 2FA, even though it offers its authenticator as a more secure alternative. Other companies–Google notable among them–also still follow that outdated practice.
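For the technically curious, here is a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238 that most authenticator apps implement, showing why no phone number is needed: the shared secret is exchanged once when 2FA is enabled (typically via a QR code), and thereafter both sides derive matching codes from the current time alone. The secret in the example is a placeholder, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # current 30-second window
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is computed locally from the secret and the clock, nothing ever travels over the phone network, which is why authenticator apps are both more private and more secure than SMS codes.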

Even with the welcome move to no longer require phone numbers for 2FA, Facebook still has work to do here. This finding has not only validated users who are suspicious of Facebook’s repeated claims that we have complete control over our own information, but has also seriously damaged users’ trust in a foundational security practice.

Until Facebook and other companies do better, users who need privacy and security most–especially those for whom using an authenticator app or hardware key is not feasible–will be forced into a corner.

Shadow Contact Information

Second, Facebook is also grabbing your contact information from your friends. Kashmir Hill of Gizmodo provides an example:

…if User A, whom we’ll call Anna, shares her contacts with Facebook, including a previously unknown phone number for User B, whom we’ll call Ben, advertisers will be able to target Ben with an ad using that phone number, which I call shadow contact information, about a month later.

This means that, even if you never directly handed a particular phone number over to Facebook, advertisers may nevertheless be able to associate it with your account based on your friends’ phone books.
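To picture the mechanics, here is a deliberately simplified, hypothetical sketch of how an uploaded address book could be joined against existing accounts. It is not Facebook’s actual code; the matching rule, names and numbers are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical data: an existing friendship graph on the platform.
friends = {"anna": {"ben"}}

# account -> phone numbers supplied by *other* people's uploads
# ("shadow contact information": the account holder never provided them).
shadow_numbers = defaultdict(set)

def ingest_address_book(uploader: str, contacts: dict[str, str]) -> None:
    """Attach uploaded numbers to the friends they appear to belong to."""
    for contact_name, number in contacts.items():
        # Crude linkage for illustration: the first word of the contact's
        # name matches a friend's account handle. A real system would use
        # far richer signals, which is exactly what makes it so opaque.
        handle = contact_name.split()[0].lower()
        if handle in friends.get(uploader, set()):
            shadow_numbers[handle].add(number)

# Anna shares her phone book; Ben never gave the platform this number,
# yet it becomes targetable against his account.
ingest_address_book("anna", {"Ben mobile": "+1-555-0102"})
print(shadow_numbers["ben"])  # {'+1-555-0102'}
```

The point of the sketch is that the person whose number ends up in the index never took any action at all: the data arrives via someone else’s upload, which is why no privacy setting on your own account can prevent it.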

Even worse, none of this is accessible or transparent to users. You can’t find such shadow contact information in the contact and basic info section of your profile; users in Europe can’t even get their hands on it despite explicit requirements under the GDPR that a company give users a right to know what information it has on them.

As Facebook attempts to salvage its reputation among users in the wake of the Cambridge Analytica scandal, it needs to put its money where its mouth is. Wiping 2FA numbers and shadow contact data from non-essential use would be a good start.

Read more me_internet.htm at MelonFarmers.co.uk

Twitter is consulting its users about new censorship rules banning ‘dehumanising speech’, in which people are compared to animals or objects. It said language that made people seem less than human had repercussions.

The social network already has a hateful-conduct policy, but it is applied selectively, allowing some types of insulting language to remain online. For example, countless tweets describing middle-aged white men as gammon can be found on the platform.

At present it bans insults based on a person’s race, ethnicity, nationality, sexual orientation, sex, gender, religious beliefs, age, disability or medical condition, but there is an unwritten secondary rule which means that the prohibition excludes groups not favoured under the conventions of political correctness.

Twitter said it intended to prohibit dehumanising language towards people in an identifiable group because some researchers claim it could lead to real-world violence. Asked whether calling men gammon would count as dehumanising speech, the company said it would first seek the views of its members. Twitter’s announcement reads in part:

For the last three months, we have been developing a new policy to address dehumanizing language on Twitter. Language that makes someone less than human can have repercussions off the service, including normalizing serious violence. Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease), but there are still Tweets many people consider to be abusive, even when they do not break our rules. Better addressing this gap is part of our work to serve a healthy public conversation.

With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target. Many scholars have examined the relationship between dehumanization and violence. For example, Susan Benesch has described dehumanizing language as a hallmark of dangerous speech, because it can make violence seem acceptable, and Herbert Kelman has posited that dehumanization can reduce the strength of restraining forces against violence.

Twitter’s critics are now using the hashtag #verifiedhate to highlight examples of what they believe to be bias in what the platform judges to be unacceptable. The gammon insult gained popularity after a collage of contributors to the BBC’s Question Time programme – each middle-aged, white and male – was shared along with the phrase Great Wall of Gammon in 2017.

The scope of identifiable groups covered by the new rules will be decided after a public consultation that will run until 9 October.

PS: before filling in the consultation form, note that it was broken for me and didn’t accept my submission. For the record, Melon Farmer tried to submit the comment:

This is yet another policy that restricts free speech. As always, the vagueness of the rules will allow Twitter, or its moderators, to arbitrarily apply its own morality anyway. But not to worry, the richness of language will always enable people to dream up new ways to insult others.

Read more pc_news.htm at MelonFarmers.co.uk

The new head of the Police Federation, John Apter, who represents 120,000 rank and file officers across England and Wales, has said his members were incredibly frustrated because they have been assigned to sorting out social media spats rather than tackling more serious crimes like burglary.

The new head explained that while resourcing remained the main issue facing policing, there was also a lack of common sense when it came to priorities.

Last week it emerged that Yorkshire Police had asked people to report insults on social media, even if they were not considered to be a hate crime. Other forces have been criticised recently for using computer programs rather than experienced officers to decide whether a burglary is worth investigating. Such initiatives have led to criticism of the police and the observation that the service is out of touch with the public.

But Apter said nobody was more frustrated than police officers when they were prevented from attending burglaries and other serious crimes. Burglary is one of the most intrusive, horrible crimes that a householder can go through. It makes you feel incredibly vulnerable, but people can sometimes wait days for a police response, Apter said.