Posts Tagged ‘Facebook’

Read more uk_internet_censors.htm at MelonFarmers.co.uk

The Digital, Culture, Media and Sport Committee has published its final report on Disinformation and ‘fake news’. The report calls for:

  • Compulsory Code of Ethics for tech companies overseen by independent regulator

  • Regulator given powers to launch legal action against companies breaching code

  • Government to reform current electoral communications laws and rules on overseas involvement in UK elections

  • Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation

Further finds that:

  • Electoral law ‘not fit for purpose’

  • Facebook intentionally and knowingly violated both data privacy and anti-competition laws

Chair’s comment

Damian Collins MP, Chair of the DCMS Committee, said:

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer.

“Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights.

“Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission.

“We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.

“We also have to accept that our electoral regulations are hopelessly out of date for the internet age. We need reform so that the same principles of transparency of political communications apply online, just as they do in the real world. More needs to be done to require major donors to clearly establish the source of their funds.

“Much of the evidence we have scrutinised during our inquiry has focused on the business practices of Facebook; before, during and after the Cambridge Analytica data breach scandal.

“We believe that in its evidence to the Committee Facebook has often deliberately sought to frustrate our work, by giving incomplete, disingenuous and at times misleading answers to our questions.

“Even if Mark Zuckerberg doesn’t believe he is accountable to the UK Parliament, he is to the billions of Facebook users across the world. Evidence uncovered by my Committee shows he still has questions to answer yet he’s continued to duck them, refusing to respond to our invitations directly or sending representatives who don’t have the right information. Mark Zuckerberg continually fails to show the levels of leadership and personal responsibility that should be expected from someone who sits at the top of one of the world’s biggest companies.

“We also repeat our call to the Government to make a statement about how many investigations are currently being carried out into Russian interference in UK politics. We want to find out what was the impact of disinformation and voter manipulation on past elections including the UK Referendum in 2016 and are calling on the Government to launch an independent investigation.”

Final Report

This Final Report on Disinformation and ‘Fake News’ repeats a number of recommendations from the interim report published last summer. The Committee calls for the Government to reconsider a number of recommendations to which it did not respond and to include concrete proposals for action in its forthcoming White Paper on online harms.

Independent regulation of social media companies

The Report repeats a recommendation from the Interim Report for clear legal liabilities to be established for tech companies to act against harmful or illegal content on their sites, and the report calls for a compulsory Code of Ethics defining what constitutes harmful content. An independent regulator should be responsible for monitoring tech companies, backed by statutory powers to launch legal action against companies in breach of the code.

Companies failing obligations on harmful or illegal content would face hefty fines. MPs conclude: “Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites.”

The Report’s recommendation chimes with recent statements by Ministers indicating the Government is prepared to regulate social media companies following the death of teenager Molly Russell. The Committee hopes to see firm recommendations for legislation in the White Paper to create a regulatory system for online content that is as effective as that for offline content.

It repeats its recommendation for new independent regulation to be funded by a levy on tech companies operating in the UK.

Data use and data targeting

The Report highlights Facebook documents obtained by the Committee and published in December 2018 relating to a Californian court case brought by app developer Six4Three. Through scrutiny of internal Facebook emails between 2011 and 2015, the Report finds evidence to indicate that the company was willing to override its users’ privacy settings in order to transfer data to some app developers; to charge some developers high prices in advertising in exchange for that data; and to starve other developers–such as Six4Three–of that data, contributing to the failure of their businesses. MPs conclude: "It is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws."

It recommends that the ICO carries out a detailed investigation into the practices of the Facebook platform, its use of users’ and users’ friends’ data, and its use of ‘reciprocity’ in the sharing of data. The CMA (Competition and Markets Authority) should conduct a comprehensive audit of the advertising market on social media and investigate whether Facebook has been involved in anti-competitive practices.

MPs note that Facebook, in particular, is unwilling to be accountable to regulators around the world: "By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both our Committee and the ‘International Grand Committee’ involving members from nine legislatures from around the world."

Read more parl.htm at MelonFarmers.co.uk

Parliament’s fake news inquiry has published a cache of seized Facebook documents including internal emails sent between Mark Zuckerberg and the social network’s staff. The emails were obtained from the chief of a software firm that is suing the tech giant. About 250 pages have been published, some of which are marked highly confidential.

Facebook had objected to their release.

Damian Collins MP, the chair of the parliamentary committee involved, highlighted several key issues in an introductory note. He wrote that:

  • Facebook allowed some companies to maintain "full access" to users’ friends’ data even after announcing changes to its platform in 2014/2015 to limit what developers could see. "It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted," Mr Collins wrote
  • Facebook had been aware that an update to its Android app that let it collect records of users’ calls and texts would be controversial. “To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features,” Mr Collins wrote
  • Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat
  • there was evidence that Facebook’s refusal to share data with some apps caused them to fail
  • there had been much discussion of the financial value of providing access to friends’ data

In response, Facebook said that the documents had been presented in a very misleading manner and required additional context.

See Mark Zuckerberg’s response on Facebook

Read more uk_internet_censors.htm at MelonFarmers.co.uk

The likes of Facebook and Twitter should fund the creation of a new UK internet censor to police fake news, censorship campaigners have claimed. Sounding like a religious morality campaign, the LSE Commission on Truth, Trust and Technology, a group made up of MPs, academics and industry, also proposed the Government should scrap plans to hand fresh powers to existing censors such as Ofcom and the Information Commissioner.

The campaigners argue for the creation of a new body to monitor the effectiveness of technology companies’ self regulation. The body, which would be called the Independent Platform Agency, would provide a permanent forum for monitoring and censoring the behaviour of online sites and produce an annual review of the state of disinformation, the group said.

Damian Tambini, adviser to the LSE commission and associate professor in LSE’s department of media and communications, claimed:

Parliament, led by the Government, must take action to ensure that we have the information and institutions we need to respond to the information crisis. If we fail to build transparency and trust through independent institutions we could see the creeping securitisation of our media system.

Read more inus.htm at MelonFarmers.co.uk

The recent FOSTA law in the US forces internet companies to censor anything to do with legal, adult and consensual sex work. It holds them liable for abetting sex traffickers even when they can’t possibly distinguish the trafficking from the legal sex work. The only solution is therefore to ban the use of their platforms for any personal hook-ups. So indeed adult sex work websites have been duly cleansed from the US internet. But now a woman is claiming that Facebook facilitated trafficking when of course it’s nigh on impossible for Facebook to detect such use of its networking systems. But of course that’s no excuse under FOSTA.

According to a new lawsuit by an unnamed woman in Houston, Texas, Facebook’s morally bankrupt corporate culture is to blame for permitting a sex trafficker to force her into prostitution after beating and raping her. She claims Facebook should be held responsible when a user on the social media platform sexually exploits another Facebook user. The lawsuit says that Facebook should have warned the woman, who was 15 years old at the time she was victimized, that its platform could be used by sex traffickers to recruit and groom victims, including children.

The lawsuit also names Backpage.com, which, according to a Reuters report, hosted pictures of the woman taken by the man who victimized her after he uploaded them to the site.

The classified advertising site Backpage was already shut down by federal prosecutors in April of this year.

Read more aw_privacy.htm at MelonFarmers.co.uk

Add “a phone number I never gave Facebook for targeted advertising” to the list of deceptive and invasive ways Facebook makes money off your personal information. Contrary to user expectations and Facebook representatives’ own previous statements, the company has been using contact information that users explicitly provided for security purposes–or that users never provided at all–for targeted advertising.

A group of academic researchers from Northeastern University and Princeton University, along with Gizmodo reporters, have used real-world tests to demonstrate how Facebook’s latest deceptive practice works. They found that Facebook harvests user phone numbers for targeted advertising in two disturbing ways: two-factor authentication (2FA) phone numbers, and shadow contact information.

Two-Factor Authentication Is Not The Problem

First, when a user gives Facebook their number for security purposes–to set up 2FA, or to receive alerts about new logins to their account–that phone number can become fair game for advertisers within weeks. (This is not the first time Facebook has misused 2FA phone numbers.)

But the important message for users is: this is not a reason to turn off or avoid 2FA. The problem is not with two-factor authentication. It’s not even a problem with the inherent weaknesses of SMS-based 2FA in particular. Instead, this is a problem with how Facebook has handled users’ information and violated their reasonable security and privacy expectations.

There are many types of 2FA. SMS-based 2FA requires a phone number, so you can receive a text with a second-factor code when you log in. Other types of 2FA–like authenticator apps and hardware tokens–do not require a phone number to work. However, until just four months ago, Facebook required users to enter a phone number to turn on any type of 2FA, even though it offers its authenticator as a more secure alternative. Other companies–Google notable among them–also still follow that outdated practice.
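
For readers wondering why authenticator apps and hardware tokens need no phone number: authenticator apps typically implement the TOTP standard (RFC 6238), in which the app and the server share a secret key at set-up time and each independently derives a short-lived code from that key and the current time, so no SMS is ever sent. The sketch below is a minimal, generic illustration of that idea in Python; the example secret is invented and nothing here reflects Facebook’s actual implementation.

```python
# Minimal TOTP (RFC 6238) sketch: the authenticator app and the server
# each derive the same short-lived code from a shared secret and the
# current time, so no phone number or SMS is involved.
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example shared secret (base32) -- purely illustrative.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is derived locally on the device from a secret exchanged once at set-up, the only thing a service needs to store is that secret, not a reusable contact detail like a phone number.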

Even with the welcome move to no longer require phone numbers for 2FA, Facebook still has work to do here. This finding has not only validated users who are suspicious of Facebook’s repeated claims that we have complete control over our own information, but has also seriously damaged users’ trust in a foundational security practice.

Until Facebook and other companies do better, users who need privacy and security most–especially those for whom using an authenticator app or hardware key is not feasible–will be forced into a corner.

Shadow Contact Information

Second, Facebook is also grabbing your contact information from your friends. Kashmir Hill of Gizmodo provides an example:

…if User A, whom we’ll call Anna, shares her contacts with Facebook, including a previously unknown phone number for User B, whom we’ll call Ben, advertisers will be able to target Ben with an ad using that phone number, which I call shadow contact information, about a month later.

This means that, even if you never directly handed a particular phone number over to Facebook, advertisers may nevertheless be able to associate it with your account based on your friends’ phone books.
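
To make the mechanism concrete, here is a hypothetical sketch (Facebook’s actual internals are not public, and every name and number below is invented) of how a platform could fold a phone number from Anna’s uploaded address book into Ben’s ad profile, so that an advertiser’s custom-audience list containing that number reaches Ben even though Ben never handed the number over himself.

```python
# Hypothetical sketch of "shadow contact information" -- not Facebook's code.
from collections import defaultdict

# Contact details each account provided directly (Ben gave only an email).
declared_contact_info = {
    "ben": {"email": "ben@example.com"},
}

# Extra, user-invisible contact details gathered from other people's uploads.
shadow_contact_info = defaultdict(set)

def ingest_address_book(uploader, contacts):
    """Fold uploaded contacts into the profiles of the accounts they match."""
    for contact in contacts:
        for account, declared in declared_contact_info.items():
            # Simplistic identity resolution: match on an already-known field.
            if account != uploader and contact.get("email") == declared.get("email"):
                shadow_contact_info[account].add(contact["phone"])

def match_custom_audience(phone_numbers):
    """Return accounts an advertiser can now reach via those phone numbers."""
    return {acct for acct, phones in shadow_contact_info.items() if phones & phone_numbers}

# Anna shares her address book, which pairs Ben's email with his phone number.
ingest_address_book("anna", [{"name": "Ben", "email": "ben@example.com", "phone": "+15550001111"}])

# An advertiser uploads a list containing that number and can now target Ben.
print(match_custom_audience({"+15550001111"}))   # {'ben'}
```

The point of the sketch is simply that the join happens entirely on the platform’s side: Ben never sees the number in his own settings, which is exactly why the researchers call it shadow contact information.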

Even worse, none of this is accessible or transparent to users. You can’t find such shadow contact information in the contact and basic info section of your profile; users in Europe can’t even get their hands on it despite explicit requirements under the GDPR that a company give users a right to know what information it has on them.

As Facebook attempts to salvage its reputation among users in the wake of the Cambridge Analytica scandal, it needs to put its money where its mouth is. Wiping 2FA numbers and shadow contact data from non-essential use would be a good start.

Read more me_internet.htm at MelonFarmers.co.uk

And today’s daily act of censorship is to take down 652 accounts and pages connected to Russia and Iran that published political propaganda. Facebook said in a blog post that the errant accounts were first uncovered by the cybersecurity firm FireEye, and have links to Russia and Iran. CEO Mark Zuckerberg said:

These were networks of accounts that were misleading people about who they were and what they were doing. We ban this kind of behavior because authenticity matters. People need to be able to trust the connections they make on Facebook.

In July, FireEye tipped Facebook off to the existence of a network of pages known as Liberty Front Press. The network included 70 accounts, three Facebook groups, and 76 Instagram accounts, which had 155,000 Facebook followers and 48,000 Instagram followers. Not exactly impressive figures though. And the paltry $6,000 spent since 2015 rather suggests that these are small fry.

Liberty Front Press was also linked to a set of pages that posed as news organizations while also hacking people’s accounts and spreading malware, Facebook said. That network included 12 pages and 66 accounts, plus nine Instagram accounts. They had about 15,000 Facebook followers and 1,100 Instagram followers, and did not buy advertising or events.

Iran-linked accounts and pages created in 2011 shared posts about politics in the Middle East, United Kingdom, and United States. That campaign had 168 pages and 140 Facebook accounts, as well as 31 Instagram accounts, and had 813,000 Facebook followers and 10,000 Instagram followers. Again the total advertising spend was just $6,000.

Russian accounts taken down in the Facebook action were focused on politics in Syria and Ukraine, but did not target the United States.

Facebook’s reputation ratings

See article from bbc.co.uk

Facebook has confirmed that it has started scoring some of its members on a trustworthiness scale. The Washington Post revealed that the social network had developed the system over the past year.

The tech firm says it has been developed to help handle reports of false news on its platform, but it has declined to reveal how the score is calculated or the limits of its use. Critics are concerned that users have no apparent way to obtain their rating. The BBC understands that at present only Facebook’s misinformation team makes use of the measurement.

Perhaps the scheme works on a 1 to 5 scale, with the bottom rating of 1 being as trustworthy as Facebook, a lowly score of 2 for being twice as trustworthy as Facebook, whilst the top of the scale is 5 times as trustworthy as Facebook.

Facebook objected to the scale being described in the Washington Post as a ‘reputation’ score. Facebook said that this was just plain wrong, claiming:

What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible.
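
Facebook has not revealed how the score is computed or applied, so the following is nothing more than a speculative sketch of the general idea its statement gestures at: weighting a user’s ‘false news’ flags by how often that user’s past flags were upheld by fact-checkers, so that indiscriminate flagging counts for less. Every figure and threshold here is invented for illustration.

```python
# Speculative sketch only -- Facebook has not disclosed its actual method.
# Idea: a "false news" flag counts for more when it comes from a user whose
# past flags were usually confirmed by fact-checkers.

def trust_score(confirmed_flags, total_flags):
    """Fraction of a user's past flags that were upheld (0..1)."""
    if total_flags == 0:
        return 0.5                      # neutral prior for users with no track record
    return confirmed_flags / total_flags

def weighted_flag_count(flags):
    """Sum of flags against a post, each weighted by the reporter's trust score."""
    return sum(trust_score(f["confirmed"], f["total"]) for f in flags)

# Three flags against the same post: two from users with poor track records,
# one from a user whose flags are usually upheld.
flags = [
    {"confirmed": 1, "total": 20},      # habitual, rarely-correct flagger
    {"confirmed": 0, "total": 15},      # flags everything they dislike
    {"confirmed": 9, "total": 10},      # reliable reporter
]
print(round(weighted_flag_count(flags), 2))   # 0.95 -- dominated by the reliable reporter
```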

No doubt armies of Indian SEO workers will now redirect their efforts at improving websites’ Facebook reputation ratings.

Seeking refuge in blaming Facebook

See article from nytimes.com

Meanwhile, Warwick University research suggests that anti-refugee troubles are worse in German towns where Facebook usage is higher than the national average. Facebook are taking a lot of stick lately, but it seems a little much to start blaming them for all the world’s ills. If Facebook were to be banned tomorrow, would the world suddenly become a less fractious place? What do you think?

Read more inus.htm at MelonFarmers.co.uk

The US Federal Government is quietly meeting with top tech company representatives to develop a proposal to protect web users’ privacy amid the ongoing global fallout of scandals that have rocked Facebook and other companies. Over the past month, the Commerce Department has met with representatives from Facebook and Google, along with Internet providers like AT&T and Comcast, and consumer advocates, sources told the Washington Post.

The goal of these meetings is to come up with a data privacy proposal at the federal level that could serve as a blueprint for Congress to pass sweeping legislation in the mould of the European Union’s GDPR. There are currently no US federal laws that comprehensively govern how tech companies harness and monetize users’ data.

A total of 22 meetings with more than 80 companies have been held on this topic over the last month.

One official at the White House told the Post this week that recent developments have been seismic in the privacy policy world, prompting the government to discuss what a modern U.S. approach to privacy protection might look like.