Posts Tagged ‘Facebook’

Brands demand that Facebook censors news that offends identitarian sensitivities.

Read more inus.htm at MelonFarmers.co.uk

Facebook has said it will start to label potentially harmful posts that it leaves up because of their news value. The move comes as the firm faces growing pressure to censor the content on its platform.

More than 90 advertisers have joined a boycott of the site, including consumer goods giant Unilever on Friday. The Stop Hate for Profit campaign was started by US civil rights groups after the death of George Floyd in May while in police custody. It has focused on Facebook, which also owns Instagram and WhatsApp. The organisers, which include Color of Change and the National Association for the Advancement of Colored People, have said Facebook allows racist, violent and verifiably false content to run rampant on its platform.

Unilever said it would halt Twitter, Facebook and Instagram advertising in the US at least through 2020.

In a speech on Friday, Facebook boss Mark Zuckerberg defended the firm’s record of taking down hate speech. But he said the firm was tightening its policies to address the reality of the challenges our country is facing and how they’re showing up across our community. In addition to introducing labels, Facebook will ban ads that describe people from different groups, based on factors such as race or immigration status, as a threat. He said:

A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

We will soon start labelling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case.

He added that Facebook would remove content – even from politicians – if it determines that it incites violence or suppresses voting.

Read more me_internet.htm at MelonFarmers.co.uk

Facebook is seeking help in the censorship of hateful messages that have been encoded into memes. The company writes in a post:

In order for AI to become a more effective tool for detecting hate speech, it must be able to understand content the way people do: holistically. When viewing a meme, for example, we don’t think about the words and photo independently of each other; we understand the combined meaning together. This is extremely challenging for machines, however, because it means they can’t just analyze the text and the image separately. They must combine these different modalities and understand how the meaning changes when they are presented together. To catalyze research in this area, Facebook AI has created a data set to help build systems that better understand multimodal hate speech. Today, we are releasing this Hateful Memes data set to the broader research community and launching an associated competition, hosted by DrivenData with a $100,000 prize pool.

The challenges of harmful content affect the entire tech industry and society at large. As with our work on initiatives like the Deepfake Detection Challenge and the Reproducibility Challenge, Facebook AI believes the best solutions will come from open collaboration by experts across the AI community.

We continue to make progress in improving our AI systems to detect hate speech and other harmful content on our platforms, and we believe the Hateful Memes project will enable Facebook and others to do more to keep people safe.
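The multimodal difficulty Facebook describes can be made concrete with a small code sketch. The following Python/PyTorch example is an illustrative 'early fusion' classifier, not Facebook's actual system; the dimensions are invented and the text encoder is deliberately simplistic. It shows the general technique: encode each modality separately, then classify the combined representation, so the model can learn meanings that only emerge when text and image appear together.

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    # Toy multimodal classifier: text + image -> benign/hateful logits.
    def __init__(self, vocab_size=30000, text_dim=128, image_dim=2048, hidden=256):
        super().__init__()
        # Text branch: bag-of-words embedding (a production system would
        # use a transformer encoder instead).
        self.text_encoder = nn.EmbeddingBag(vocab_size, text_dim)
        # Image branch: projects pre-extracted CNN features (e.g. from a
        # ResNet) into the same dimensionality as the text features.
        self.image_proj = nn.Linear(image_dim, text_dim)
        # Joint head: sees both modalities at once, so it can pick up
        # combinations that are innocuous in isolation.
        self.classifier = nn.Sequential(
            nn.Linear(text_dim * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, token_ids, image_features):
        t = self.text_encoder(token_ids)        # (batch, text_dim)
        v = self.image_proj(image_features)     # (batch, text_dim)
        fused = torch.cat([t, v], dim=-1)       # combined representation
        return self.classifier(fused)

# Smoke test with random inputs: 4 memes, 16 tokens each, plus 4
# pre-extracted 2048-dimensional image feature vectors.
model = FusionClassifier()
logits = model(torch.randint(0, 30000, (4, 16)), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 2])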

Facebook seems to be suggesting that if governments are so keen on censoring people’s speech then perhaps they should take over the censorship job entirely.

Read more me_internet.htm at MelonFarmers.co.uk

Facebook writes in a post:

Today, we’re publishing a white paper setting out some questions that regulation of online content might address.

Charting a Way Forward: Online Content Regulation builds on recent developments on this topic, including legislative efforts and scholarship.

The paper poses four questions which go to the heart of the debate about regulating content online:

  • How can content regulation best achieve the goal of reducing harmful speech while preserving free expression? By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies’ efforts.

  • How can regulations enhance the accountability of internet platforms? Regulators could consider certain requirements for companies, such as publishing their content standards, consulting with stakeholders when making significant changes to standards, or creating a channel for users to appeal a company’s content removal or non-removal decision.

  • Should regulation require internet companies to meet certain performance targets? Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.

  • Should regulation define which “harmful content” should be prohibited on the internet? Laws restricting speech are generally implemented by law enforcement officials and the courts. Internet content moderation is fundamentally different. Governments should create rules to address this complexity — that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context.

Guidelines for Future Regulation

The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms. The following principles are based on lessons we’ve learned from our work in combating harmful content and our discussions with others.

  • Incentives. Ensuring accountability in companies’ content moderation systems and procedures will be the best way to create the incentives for companies to responsibly balance values like safety, privacy, and freedom of expression.

  • The global nature of the internet. Any national regulatory approach to addressing harmful content should respect the global scale of the internet and the value of cross-border communications, and should aim to increase interoperability among regulators and regulations.

  • Freedom of expression. In addition to complying with Article 19 of the ICCPR (and related guidance), regulators should consider the impacts of their decisions on freedom of expression.

  • Technology. Regulators should develop an understanding of the capabilities and limitations of technology in content moderation and allow internet companies the flexibility to innovate. An approach that works for one particular platform or type of content may be less effective (or even counterproductive) when applied elsewhere.

  • Proportionality and necessity. Regulators should take into account the severity and prevalence of the harmful content in question, its status in law, and the efforts already underway to address the content.

If designed well, new frameworks for regulating harmful content can contribute to the internet’s continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation.

We hope today’s white paper helps to stimulate further conversation around the regulation of content online. It builds on a paper we published last September on data portability, and we plan on publishing similar papers on elections and privacy in the coming months.

Too many governments defining online harms that need censoring…

Read more me_internet.htm at MelonFarmers.co.uk

Mark Zuckerberg has declared that Facebook is going to stand up for free expression even though it will piss off a lot of people. He made the claim during a fiery appearance at the Silicon Slopes Tech Summit in Utah on Friday. Zuckerberg told the audience that Facebook had previously tried to resist moves that would be branded as too offensive, but said he now believes he is being asked to partake in excessive censorship:

Increasingly we’re getting called to censor a lot of different kinds of content that makes me really uncomfortable, he claimed. We’re going to take down the content that’s really harmful, but the line needs to be held at some point.

It kind of feels like the list of things that you’re not allowed to say socially keeps on growing, and I’m not really okay with that.

This is the new approach [free expression], and I think it’s going to piss off a lot of people. But frankly the old approach was pissing off a lot of people too, so let’s try something different.

The US Senate Judiciary Committee joins the UK and Australia in wanting everyone to know it’s concerned about encryption.

Read more inus.htm at MelonFarmers.co.uk

Yesterday the US Senate Judiciary Committee held a hearing on encryption and lawful access. That’s the fanciful idea that encryption providers can somehow allow law enforcement access to users’ encrypted data while otherwise preventing the bad guys from accessing this very same data.

But the hearing was not inspired by some new engineering breakthrough that might make it possible for Apple or Facebook to build a secure law enforcement backdoor into their encrypted devices and messaging applications. Instead, it followed speeches, open letters, and other public pressure by law enforcement officials in the U.S. and elsewhere to prevent Facebook from encrypting its messaging applications, and more generally to portray encryption as a tool used in serious crimes, including child exploitation. Facebook has signaled it won’t bow to that pressure. And more than 100 organizations including EFF have called on these law enforcement officials to reverse course and avoid gutting one of the most powerful privacy and security tools available to users in an increasingly insecure world.

Many of the committee members seemed to arrive at the hearing convinced that they could legislate secure backdoors. Among others, Senators Graham and Feinstein told representatives from Apple and Facebook that they had a responsibility to find a solution to enable government access to encrypted data. Senator Graham commented:

My advice to you is to get on with it, because this time next year, if we haven’t found a way that you can live with, we will impose our will on you.

But when it came to questioning witnesses, the senators had trouble establishing the need for or the feasibility of blanket law enforcement access to encrypted data. As all of the witnesses pointed out, even a basic discussion of encryption requires differentiating between encrypting data on a smartphone, also called encryption at rest, and end-to-end encryption of private chats, for example.
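To make that distinction concrete, here is a minimal Python sketch of the two concepts, using the cryptography library for symmetric encryption at rest and PyNaCl for end-to-end public-key encryption. It illustrates the ideas only; it assumes nothing about how Apple or Facebook actually implement either.

from cryptography.fernet import Fernet   # symmetric encryption (at rest)
from nacl.public import PrivateKey, Box  # public-key encryption (end-to-end)

# 1) Encryption at rest: one symmetric key protects stored data, and
#    whoever holds that key (ideally only the device owner) can decrypt.
device_key = Fernet.generate_key()
stored = Fernet(device_key).encrypt(b"contents of the phone")
assert Fernet(device_key).decrypt(stored) == b"contents of the phone"

# 2) End-to-end encryption: a message is sealed using the sender's private
#    key and the recipient's public key, so only the two endpoints - never
#    the service relaying the message - can read it.
alice, bob = PrivateKey.generate(), PrivateKey.generate()
sealed = Box(alice, bob.public_key).encrypt(b"private chat message")
assert Box(bob, alice.public_key).decrypt(sealed) == b"private chat message"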

As a result, the committee’s questioning actually revealed several points that undercut the apocalyptic vision painted by law enforcement officials in recent months. Here are some of our takeaways:

There’s No Such Thing As an Unhackable Phone

The first witness was Manhattan District Attorney Cyrus Vance, Jr., who has called for Apple and Google to roll back encryption in their mobile operating systems. Yet by his own statistics, the DA’s office is able to access the contents of a majority of devices it encounters in its investigations each year. Even for those phones that are locked and encrypted, Vance reported that half could be accessed using in-house forensic tools or services from outside vendors. Although he stressed both the high cost and the uncertainty of these tools, the fact remains that device encryption is far from an insurmountable barrier to law enforcement.

As we saw when the FBI dramatically lowered its own estimate of unhackable phones in 2017, the level of security of these devices is not static. Even as Apple and Google patch vulnerabilities that might allow access, vendors like Cellebrite and Grayshift discover new means of bypassing security features in mobile operating systems. Of course, no investigative technique will be completely effective, which is why law enforcement has always worked every angle it can. The cost of forensic tools may be a concern, but they are clearly part of a variety of tools law enforcement use to successfully pursue investigations in a world with widespread encryption.

Lawful Access to Encrypted Phones Would Take Us Back to the Bad Old Days

Meanwhile, even as Vance focused on the cost of forensic tools to access encrypted phones, he repeatedly ignored why companies like Apple began fully encrypting their devices in the first place. In a colloquy with Senator Mike Lee, Apple’s manager of user privacy Erik Neuenschwander explained that the company’s introduction of full disk encryption in iOS in 2014 was a response to threats from hackers and criminals who could otherwise access a wealth of sensitive, unencrypted data on users’ phones. On this point, Neuenschwander explained that Vance was simply misinformed: Apple has never held a key capable of decrypting encrypted data on users’ phones.

Neuenschwander explained that he could think of only two approaches to accomplishing Vance’s call for lawful access, both of which would dramatically increase the risks to consumers. Either Apple could simply roll back encryption on its devices, leaving users exposed to increasingly sophisticated threats from bad actors, or it could attempt to engineer a system where it did hold a master key to every iPhone in the world. Regarding the second approach, Neuenschwander said as a technologist, I am extremely fearful of the security properties of such a system. His fear is well-founded; years of research by technologists and cryptographers confirm that key escrow and related systems are highly insecure at the scale and complexity of Apple’s mobile ecosystem.
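That fear is easy to illustrate. The hypothetical Python sketch below (using PyNaCl; the escrow scheme is invented for illustration and does not reflect any specific proposal) shows why a master key is such a tempting target: alongside the normal end-to-end copy, every message gets a copy sealed to the escrow key, so whoever obtains that one key can read everyone’s traffic.

from nacl.public import PrivateKey, Box, SealedBox

# The hypothetical "lawful access" master key held by the provider.
escrow_key = PrivateKey.generate()

# Normal end-to-end copy: only Alice and Bob can decrypt this.
alice, bob = PrivateKey.generate(), PrivateKey.generate()
message = b"supposedly private message"
to_bob = Box(alice, bob.public_key).encrypt(message)

# Escrowed copy: the same plaintext sealed to the master key.
to_escrow = SealedBox(escrow_key.public_key).encrypt(message)

# Anyone who obtains escrow_key - an insider, a hacker, a hostile state -
# can now read the message without either endpoint's keys:
assert SealedBox(escrow_key).decrypt(to_escrow) == message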

End-to-End Encryption Is Here to Stay

Finally, despite the heated rhetoric directed by Attorney General Barr and others at end-to-end encryption in messaging applications, the committee found little consensus. Both Vance and Professor Matt Tait suggested that they did not believe that Congress should mandate backdoors in end-to-end encrypted messaging platforms. Meanwhile, Senators Coons, Cornyn, and others expressed concerns that doing so would simply push bad actors to applications hosted outside of the United States, and also aid authoritarian states who want to spy on Facebook users within their own borders. Facebook’s director for messaging privacy Jay Sullivan discussed ways that the company will root out abuse on its platforms while removing its own ability to read users’ messages. As we’ve written before, an encrypted Facebook Messenger is a good thing, but the proof will be in the pudding.

Ultimately, while the Senate Judiciary Committee hearing offered worrying posturing on the necessity of backdoors, we’re hopeful that Congress will recognize what a dangerous idea legislation would be in this area.

Comment: Open Rights Group joins international outcry over UK government calls to access private messages

11th December 2019. See article from openrightsgroup.org

See letter from openrightsgroup.org

Open Rights Group has joined dozens of other organizations signing an open letter to the UK government to express significant concerns raised by their recent statements against encryption.

The UK Home Secretary, Priti Patel, has joined her US counterparts in demanding weaker encryption and asking internet companies to design digital back doors into their messaging services. The UK government suggests stronger capabilities to monitor private messages will aid in fighting terrorism and child abuse. ORG disagrees, arguing that alternative approaches must be used as the proposed measures will weaken the security of every internet user.

ORG is concerned that this attack on encryption forms a pattern of attacks on digital privacy and security by the UK government. Only last week leaked documents showed that the UK wants to give the US access to NHS records and other personal information, in a free flow of data between the two countries.

The open letter, which was coordinated by the US-based Open Technology Institute, was also addressed to US and Australian authorities. It was signed, among others, by Amnesty International, Article 19, Index on Censorship, Privacy International and Reporters Without Borders.

Javier Ruiz Diaz, Policy Director for Open Rights Group, said:

The Home Secretary wants to be able to access our private messages in WhatsApp and similar apps, demanding that companies remove the technical protections that keep out fraudsters and other criminals. This is wrong and will make the internet less safe. Surveillance measures should be targeted and not built into the apps used by millions of people to talk to their friends and family.

Comment: Facebook has also responded to UK/US/Australian government calls for back doors

11th December 2019. See article [pdf] from about.fb.com

As the Heads of WhatsApp and Messenger, we are writing in response to your public letter addressing our plans to strengthen private messaging for our customers. You have raised important issues that could impact the future of free societies in the digital age and we are grateful for the opportunity to explain our view.

We all want people to have the ability to communicate privately and safely, without harm or abuse from hackers, criminals or repressive regimes. Every day, billions of people around the world use encrypted messages to stay in touch with their family and friends, run their small businesses, and advocate for important causes. In these messages they share private information that they only want the person they message to see. And it is the fact that these messages are encrypted that forms the first line of defense, as it keeps them safe from cyber attacks and protected from falling into the hands of criminals. The core principle behind end-to-end encryption is that only the sender and recipient of a message have the keys to unlock and read what is sent. No one can intercept and read these messages – not us, not governments, not hackers or criminals.

We believe that people have a right to expect this level of security, wherever they live. As a company that supports 2.7 billion users around the world, it is our responsibility to use the very best technology available to protect their privacy. Encrypted messaging is the leading form of online communication and the vast majority of the billions of online messages that are sent daily, including on WhatsApp, iMessage, and Signal, are already protected with end-to-end encryption.

Cybersecurity experts have repeatedly proven that when you weaken any part of an encrypted system, you weaken it for everyone, everywhere. The backdoor access you are demanding for law enforcement would be a gift to criminals, hackers and repressive regimes, creating a way for them to enter our systems and leaving every person on our platforms more vulnerable to real-life harm. It is simply impossible to create such a backdoor for one purpose and not expect others to try and open it. People’s private messages would be less secure and the real winners would be anyone seeking to take advantage of that weakened security. That is not something we are prepared to do.

 

All politicians lie, and governments are made up of politicians, so why would anyone believe their claims about what is, and what isn’t, fake news anyway?

Read more inap.htm at MelonFarmers.co.uk

Singapore’s new law designed to counter fake news is now fully in effect. It allows the country’s government to issue corrections of information that it deems to be false, to fine those publishing it up to the equivalent of $730,000, and to send them to prison for up to ten years.

Singapore is now attempting to apply the new legislation globally, by ordering Facebook to correct a post made by a user in Australia. This is one of the points the critics of the legislation have been making ever since it was passed in May: that it will likely be used to stifle freedom of expression not only in Singapore but also beyond its borders.

The law, officially the Protection from Online Falsehoods and Manipulation Act, is described as one of the toughest in the world. The order dispatched to Facebook marks the first time Singapore has attempted to directly influence a social media platform and the content hosted on it.

The supposed ‘fake news’ in the first invocation of the law involved unprovable claims in an argument between the government and a Singaporean critic of the government now based in Australia. It seems unlikely that Facebook can substantiate or arbitrate the actual truth of the claims.

In this case, Facebook has added a correction notice to the disputed post saying:

Facebook is legally required to tell you that the Singapore government says this post has false information.

Fact Check this Mr Collins: ‘What people voted for last year was for us to leave the European Union and we will leave the EU on 29 March 2019’

Read more parl.htm at MelonFarmers.co.uk

Damian Collins, the chair of the House of Commons’ digital, culture, media and sport select committee, has written to Nick Clegg, Facebook’s vice-president for global affairs and communications, querying Facebook’s decision to exempt political adverts from fact-checking. Collins, presumably speaking from planet Uranus where all politicians always tell the truth, demanded to know why Facebook has decided to exempt political statements from its fact-checking programme, removing all bars on political candidates lying in paid adverts.

Collins wrote to Clegg with five questions for Facebook to answer, three of which covered the rule change. Why was the decision taken to change Facebook’s policy, the MP asked, given the heavy constraint this will place on Facebook’s ability to combat online disinformation in the run-up to elections around the world, and a possible UK general election in particular?

Read more me_internet.htm at MelonFarmers.co.uk

Nick Clegg, the Facebook VP of Global Affairs and Communications, writes in a blog post:

Fact-Checking Political Speech

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny. That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.

Newsworthiness Exemption

Facebook has had a newsworthiness exemption since 2016. This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads — if someone chooses to post an ad on Facebook, they must still fall within our Community Standards and our advertising policies.

When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards.

Read more me_internet.htm at MelonFarmers.co.uk

Mark Zuckerberg has previously described plans to create a high-level oversight board to decide upon censorship issues with a wider consideration than just Facebook’s interests. He suggested that national government interests should be considered at this top level of policy making. Zuckerberg wrote:

We are responsible for enforcing our policies every day and we make millions of content decisions every week. But ultimately I don’t believe private companies like ours should be making so many important decisions about speech on our own. That’s why I’ve called for governments to set clearer standards around harmful content. It’s also why we’re now giving people a way to appeal our content decisions by establishing the independent Oversight Board.

If someone disagrees with a decision we’ve made, they can appeal to us first, and soon they will be able to further appeal to this independent board. The board’s decision will be binding, even if I or anyone at Facebook disagrees with it. The board will use our values to inform its decisions and explain its reasoning openly and in a way that protects people’s privacy.

The board will be an advocate for our community — supporting people’s right to free expression, and making sure we fulfill our responsibility to keep people safe. As an independent organization, we hope it gives people confidence that their views will be heard, and that Facebook doesn’t have the ultimate power over their expression. Just as our Board of Directors keeps Facebook accountable to our shareholders, we believe the Oversight Board can do the same for our community.

As well as a detailed charter, Facebook provided a summary of the design of the board.

Along with the charter, we are providing a summary which breaks down the elements from the draft charter, the feedback we’ve received, and the rationale behind our decisions in relation to both. Many issues have spurred healthy and constructive debate. Four areas in particular were:

  • Governance: The majority of people we consulted supported our decision to establish an independent trust. They felt that this could help ensure the board’s independence, while also providing a means to provide additional accountability checks. The trust will provide the infrastructure to support and compensate the Board.

Read more me_internet.htm at MelonFarmers.co.uk

Facebook has introduced a new censorship tool known as Group Quality to evaluate private groups and scrutinize them for any ‘problematic content’. For a long time now, Facebook has been facing heat from the media over claims that the private groups feature harbours extremists and spreads ‘fake news’. In response, the company published an article at newsroom.fb.com introducing the new feature:

Being in a private group doesn’t mean that your actions should go unchecked. We have a responsibility to keep Facebook safe, which is why our Community Standards apply across Facebook, including in private groups. To enforce these policies, we use a combination of people and technology — content reviewers and proactive detection. Over the last few years, we’ve invested heavily in both, including hiring more than 30,000 people across our safety and security teams.

Within this, a specialized team has been working on the Safe Communities Initiative: an effort that started two years ago with the goal of protecting people using Facebook Groups from harm. Made up of product managers, engineers, machine learning experts and content reviewers, this team works to anticipate the potential ways people can do harm in groups and develops solutions to minimize and prevent it. As the head of Facebook Groups, I want to explain how we’re making private groups safer by focusing on three key areas: proactive detection, tools for admins, and transparency and control for members.

On the plus side, Facebook has updated the settings used to define the access and visibility of groups, which are much clearer than previous incarnations.

Critics say that Facebook’s move will not curb misinformation and fake news; on the contrary, it may push such content deeper underground, making it harder for censors to filter or remove it from the site.