Posts Tagged ‘Facebook’

Read more me_internet.htm at MelonFarmers.co.uk

Facebook has introduced a new censorship tool known as Group Quality to evaluate private groups and scrutinize them for any ‘problematic content’. For a long time, Facebook has been facing heat from the media over claims that its private groups feature harbors extremists and spreads ‘fake news’. In response, the company published an article on newsroom.fb.com introducing the new feature:

Being in a private group doesn’t mean that your actions should go unchecked. We have a responsibility to keep Facebook safe, which is why our Community Standards apply across Facebook, including in private groups. To enforce these policies, we use a combination of people and technology — content reviewers and proactive detection. Over the last few years, we’ve invested heavily in both, including hiring more than 30,000 people across our safety and security teams.

Within this, a specialized team has been working on the Safe Communities Initiative: an effort that started two years ago with the goal of protecting people using Facebook Groups from harm. Made up of product managers, engineers, machine learning experts and content reviewers, this team works to anticipate the potential ways people can do harm in groups and develops solutions to minimize and prevent it. As the head of Facebook Groups, I want to explain how we’re making private groups safer by focusing on three key areas: proactive detection, tools for admins, and transparency and control for members.

On the plus side, Facebook has updated the settings that define the access and visibility of groups, which are much clearer than previous incarnations.

Critics say that Facebook’s move will not curb misinformation and fake news; on the contrary, it may push such content deeper underground, making it harder for censors to filter or remove it from the site.

Read more inau.htm at MelonFarmers.co.uk

Australian media companies and Facebook are scrambling to come to terms with a landmark ruling by an Australian judge that found publishers are legally responsible for pre-moderating comments on Facebook.

On Monday, in the New South Wales supreme court, Judge Stephen Rothman found that commercial entities, including media companies, could be regarded as the publishers of comments made on Facebook, and as such had a responsibility to ensure defamatory remarks were not posted in the first place.

News Corp Australia responded to the judgement in a statement:

This ruling shows how far out of step Australia’s defamation laws are with other English-speaking democracies and highlights the urgent need for change. It defies belief that media organisations are held responsible for comments made by other people on social media pages.

It is ridiculous that the media company is held responsible while Facebook, which gives us no ability to turn off comments on its platform, bears no responsibility at all.

The ruling was made in a pre-trial hearing over a defamation case brought by Dylan Voller against a number of media outlets over comments made by readers on Facebook.

Paul Gordon, a social media lawyer at Wallmans Lawyers in Adelaide, explained the change to Guardian Australia:

Up until yesterday the general thread [was] if you knew or ought to have known a defamatory post was there, you had to take it down.

What the judge yesterday found was a bit different, because it wasn’t alleged by Voller that the media companies had been negligent in failing to take down the comments. Instead, the judge found the companies were responsible for putting them up in the first place.

That’s really the key difference. You have a situation where now media companies are responsible not just for taking down comments when they see them, but for preventing them going up in the first place. It places a significantly bigger burden on media companies from what was previously in place.

News Corp Australia said it is reviewing the decision with a view to an appeal.

Perhaps the only way for companies to abide by this understanding of the law is to take down their Facebook pages entirely.

Read more uk_internet_censors.htm at MelonFarmers.co.uk

The Digital, Culture, Media and Sport Committee has published its final report on Disinformation and ‘fake news’. The report calls for:

  • Compulsory Code of Ethics for tech companies overseen by independent regulator

  • Regulator given powers to launch legal action against companies breaching code

  • Government to reform current electoral communications laws and rules on overseas involvement in UK elections

  • Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation

The report further finds that:

  • Electoral law ‘not fit for purpose’

  • Facebook intentionally and knowingly violated both data privacy and anti-competition laws

Chair’s comment

Damian Collins MP, Chair of the DCMS Committee, said:

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer.

“Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights.

“Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission.

“We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.

“We also have to accept that our electoral regulations are hopelessly out of date for the internet age. We need reform so that the same principles of transparency of political communications apply online, just as they do in the real world. More needs to be done to require major donors to clearly establish the source of their funds.

“Much of the evidence we have scrutinised during our inquiry has focused on the business practices of Facebook; before, during and after the Cambridge Analytica data breach scandal.

“We believe that in its evidence to the Committee Facebook has often deliberately sought to frustrate our work, by giving incomplete, disingenuous and at times misleading answers to our questions.

“Even if Mark Zuckerberg doesn’t believe he is accountable to the UK Parliament, he is to the billions of Facebook users across the world. Evidence uncovered by my Committee shows he still has questions to answer yet he’s continued to duck them, refusing to respond to our invitations directly or sending representatives who don’t have the right information. Mark Zuckerberg continually fails to show the levels of leadership and personal responsibility that should be expected from someone who sits at the top of one of the world’s biggest companies.

“We also repeat our call to the Government to make a statement about how many investigations are currently being carried out into Russian interference in UK politics. We want to find out what was the impact of disinformation and voter manipulation on past elections including the UK Referendum in 2016 and are calling on the Government to launch an independent investigation.”

Final Report

This Final Report on Disinformation and ‘Fake News’ repeats a number of recommendations from the interim report published last summer. The Committee calls for the Government to reconsider a number of recommendations to which it did not respond and to include concrete proposals for action in its forthcoming White Paper on online harms.

Independent regulation of social media companies

The Report repeats a recommendation from the Interim Report for clear legal liabilities to be established for tech companies to act against harmful or illegal content on their sites, and the report calls for a compulsory Code of Ethics defining what constitutes harmful content. An independent regulator should be responsible for monitoring tech companies, backed by statutory powers to launch legal action against companies in breach of the code.

Companies failing obligations on harmful or illegal content would face hefty fines. MPs conclude: “Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites.”

The Report’s recommendation chimes with recent statements by Ministers indicating the Government is prepared to regulate social media companies following the death of teenager Molly Russell. The Committee hopes to see firm recommendations for legislation in the White Paper to create a regulatory system for online content that is as effective as that for offline content.

It repeats its recommendation for new independent regulation to be funded by a levy on tech companies operating in the UK.

Data use and data targeting

The Report highlights Facebook documents obtained by the Committee and published in December 2018 relating to a Californian court case brought by app developer Six4Three. Through scrutiny of internal Facebook emails between 2011 and 2015, the Report finds evidence indicating that the company was willing to override its users’ privacy settings in order to transfer data to some app developers; to charge some developers high prices in advertising in exchange for data; and to starve other developers, such as Six4Three, of that data, contributing to the loss of their businesses. MPs conclude: “It is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws.”

It recommends that the ICO carry out a detailed investigation into the practices of the Facebook platform, its use of users’ and users’ friends’ data, and the use of ‘reciprocity’ in the sharing of data. The CMA (Competition and Markets Authority) should conduct a comprehensive audit of the advertising market on social media and investigate whether Facebook has been involved in anti-competitive practices.

MPs note that Facebook, in particular, is unwilling to be accountable to regulators around the world: “By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both our Committee and the ‘International Grand Committee’ involving members from nine legislatures from around the world.”

Read more parl.htm at MelonFarmers.co.uk

Parliament’s fake news inquiry has published a cache of seized Facebook documents, including internal emails sent between Mark Zuckerberg and the social network’s staff. The emails were obtained from the chief of a software firm that is suing the tech giant. About 250 pages have been published, some of which are marked highly confidential.

Facebook had objected to their release.

Damian Collins MP, the chair of the parliamentary committee involved, highlighted several key issues in an introductory note. He wrote that:

  • Facebook allowed some companies to maintain “full access” to users’ friends’ data even after announcing changes to its platform in 2014/2015 to limit what developers could see. “It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted,” Mr Collins wrote
  • Facebook had been aware that an update to its Android app that let it collect records of users’ calls and texts would be controversial. “To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features,” Mr Collins wrote
  • Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat
  • there was evidence that Facebook’s refusal to share data with some apps caused them to fail
  • there had been much discussion of the financial value of providing access to friends’ data

In response, Facebook said that the documents had been presented in a very misleading manner and required additional context.

See Mark Zuckerberg’s response on Facebook

Read more uk_internet_censors.htm at MelonFarmers.co.uk

The likes of Facebook and Twitter should fund the creation of a new UK internet censor to police fake news, censorship campaigners have claimed. Sounding like a religious morality campaign, the LSE Commission on Truth, Trust and Technology, a group made up of MPs, academics and industry figures, also proposed that the Government scrap plans to hand fresh powers to existing censors such as Ofcom and the Information Commissioner.

The campaigners argue for the creation of a new body to monitor the effectiveness of technology companies’ self-regulation. The body, which would be called the Independent Platform Agency, would provide a permanent forum for monitoring and censoring the behaviour of online sites, and would produce an annual review of the state of disinformation, the group said.

Damian Tambini, adviser to the LSE commission and associate professor in LSE’s department of media and communications, claimed:

Parliament, led by the Government, must take action to ensure that we have the information and institutions we need to respond to the information crisis. If we fail to build transparency and trust through independent institutions we could see the creeping securitisation of our media system.

Read more inus.htm at MelonFarmers.co.uk

The recent FOSTA law in the US forces internet companies to censor anything to do with legal, adult and consensual sex work. It holds them liable for abetting sex traffickers even when they cannot possibly distinguish the trafficking from the legal sex work. The only solution is therefore to ban the use of their platforms for any personal hook-ups, and indeed adult sex work websites have been duly cleansed from the US internet. But now a woman is claiming that Facebook facilitated trafficking, when of course it is nigh on impossible for Facebook to detect such use of its networking systems. Of course, that is no excuse under FOSTA.

According to a new lawsuit by an unnamed woman in Houston, Texas, Facebook’s morally bankrupt corporate culture is to blame for permitting a sex trafficker to force her into prostitution after beating and raping her. She claims Facebook should be held responsible when a user on the social media platform sexually exploits another Facebook user. The lawsuit says that Facebook should have warned the woman, who was 15 years old at the time she was victimized, that its platform could be used by sex traffickers to recruit and groom victims, including children.

The lawsuit also names Backpage.com, which, according to a Reuters report, hosted pictures of the woman taken by the man who victimized her after he uploaded them to the site.

The classified advertising site Backpage was already shut down by federal prosecutors in April of this year.

Read more aw_privacy.htm at MelonFarmers.co.uk

Add ‘a phone number I never gave Facebook for targeted advertising’ to the list of deceptive and invasive ways Facebook makes money off your personal information. Contrary to user expectations and Facebook representatives’ own previous statements, the company has been using contact information that users explicitly provided for security purposes, or that users never provided at all, for targeted advertising.

A group of academic researchers from Northeastern University and Princeton University, along with Gizmodo reporters, have used real-world tests to demonstrate how Facebook’s latest deceptive practice works. They found that Facebook harvests user phone numbers for targeted advertising in two disturbing ways: two-factor authentication (2FA) phone numbers, and shadow contact information.

Two-Factor Authentication Is Not The Problem

First, when a user gives Facebook their number for security purposes, whether to set up 2FA or to receive alerts about new logins to their account, that phone number can become fair game for advertisers within weeks. (This is not the first time Facebook has misused 2FA phone numbers.)

But the important message for users is: this is not a reason to turn off or avoid 2FA. The problem is not with two-factor authentication. It’s not even a problem with the inherent weaknesses of SMS-based 2FA in particular. Instead, this is a problem with how Facebook has handled users’ information and violated their reasonable security and privacy expectations.

There are many types of 2FA. SMS-based 2FA requires a phone number, so you can receive a text with a second-factor code when you log in. Other types of 2FA, like authenticator apps and hardware tokens, do not require a phone number to work. However, until just four months ago, Facebook required users to enter a phone number to turn on any type of 2FA, even though it offers its authenticator as a more secure alternative. Other companies, Google notable among them, also still follow that outdated practice.
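
To make the distinction concrete, here is a minimal sketch of the time-based one-time password (TOTP) scheme that authenticator apps implement (a Python illustration with a made-up example secret, not any company’s actual code): the code is derived entirely from a shared secret and the current time, so no phone number is ever involved.

    # Minimal TOTP sketch (RFC 6238), the scheme behind authenticator apps.
    # The secret below is a made-up example, not a real credential.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        """Return the current time-based one-time password for a base32 secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period      # 30-second time step
        msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Real apps receive the shared secret from a QR code at enrolment.
    print(totp("JBSWY3DPEHPK3PXP"))

Because both sides compute the same code from the secret and the clock, nothing ever needs to be sent over SMS at all.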

Even with the welcome move to no longer require phone numbers for 2FA, Facebook still has work to do here. This finding has not only validated users who are suspicious of Facebook’s repeated claims that we have complete control over our own information, but has also seriously damaged users’ trust in a foundational security practice.

Until Facebook and other companies do better, users who need privacy and security most, especially those for whom using an authenticator app or hardware key is not feasible, will be forced into a corner.

Shadow Contact Information

Second, Facebook is also grabbing your contact information from your friends. Kashmir Hill of Gizmodo provides an example:

…if User A, whom we’ll call Anna, shares her contacts with Facebook, including a previously unknown phone number for User B, whom we’ll call Ben, advertisers will be able to target Ben with an ad using that phone number, which I call shadow contact information, about a month later.

This means that, even if you never directly handed a particular phone number over to Facebook, advertisers may nevertheless be able to associate it with your account based on your friends’ phone books.
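
As a purely illustrative sketch (the data structures and function below are hypothetical and do not reflect Facebook’s actual systems), the behaviour the researchers observed amounts to joining uploaded address books against existing accounts:

    # Hypothetical illustration of shadow contact information, not Facebook's
    # real code. It shows how a number uploaded in one user's address book
    # can become linked to another user's account, and thus ad-targetable.
    accounts = {
        "anna": {"self_provided": {"+1-555-0100"}, "shadow": set()},
        "ben": {"self_provided": set(), "shadow": set()},
    }
    # Assumed directory mapping known numbers to user IDs.
    number_to_user = {"+1-555-0101": "ben"}

    def ingest_contact_upload(uploader, contacts):
        """Attach uploaded numbers to matching accounts as shadow data."""
        for number in contacts.values():
            owner = number_to_user.get(number)
            if owner and owner != uploader:
                # Ben never gave this number to the platform himself.
                accounts[owner]["shadow"].add(number)

    # Anna shares her address book, which includes Ben's number.
    ingest_contact_upload("anna", {"Ben": "+1-555-0101"})
    print(accounts["ben"]["shadow"])  # {'+1-555-0101'} is now ad-targetable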

Even worse, none of this is accessible or transparent to users. You can’t find such shadow contact information in the contact and basic info section of your profile; users in Europe can’t even get their hands on it despite explicit requirements under the GDPR that a company give users a right to know what information it has on them.

As Facebook attempts to salvage its reputation among users in the wake of the Cambridge Analytica scandal, it needs to put its money where its mouth is. Wiping 2FA numbers and shadow contact data from non-essential use would be a good start.