Archive for the ‘Internet Social Media’ Category


Mark Zuckerberg has previously described plans to create a high-level oversight board to decide censorship issues with a wider remit than just Facebook’s interests. He suggested that national government interests should be considered at this top level of policy making. Zuckerberg wrote:

We are responsible for enforcing our policies every day and we make millions of content decisions every week. But ultimately I don’t believe private companies like ours should be making so many important decisions about speech on our own. That’s why I’ve called for governments to set clearer standards around harmful content. It’s also why we’re now giving people a way to appeal our content decisions by establishing the independent Oversight Board.

If someone disagrees with a decision we’ve made, they can appeal to us first, and soon they will be able to further appeal to this independent board. The board’s decision will be binding, even if I or anyone at Facebook disagrees with it. The board will use our values to inform its decisions and explain its reasoning openly and in a way that protects people’s privacy.

The board will be an advocate for our community — supporting people’s right to free expression, and making sure we fulfill our responsibility to keep people safe. As an independent organization, we hope it gives people confidence that their views will be heard, and that Facebook doesn’t have the ultimate power over their expression. Just as our Board of Directors keeps Facebook accountable to our shareholders, we believe the Oversight Board can do the same for our community.

As well as a detailed charter, Facebook provided a summary of the design of the board.

Along with the charter, we are providing a summary which breaks down the elements from the draft charter, the feedback we’ve received, and the rationale behind our decisions in relation to both. Many issues have spurred healthy and constructive debate. Four areas in particular were:

  • Governance: The majority of people we consulted supported our decision to establish an independent trust. They felt that this could help ensure the board’s independence, while also providing additional accountability checks. The trust will provide the infrastructure to support and compensate the Board.


Facebook has launched a new feature allowing Instagram users to flag posts they claim contain fake news to its fact-checking partners for vetting. The move is part of a wider raft of measures the social media giant has taken to appease the authorities who claim that ‘fake news’ is the root of all social ills.

Launched in December 2016 following the controversy surrounding the impact of Russian meddling and online fake news in the US presidential election, Facebook’s partnership now involves more than 50 independent ‘fact-checkers’ in over 30 countries.

The new flagging feature for Instagram users was first introduced in the US in mid-August and has now been rolled out globally.

Users can report potentially false posts by clicking or tapping the three dots in the top right-hand corner of the post, then selecting Report, followed by It’s Inappropriate and False Information.

No doubt the facility will more likely be used to report posts that people simply don’t like than genuinely ‘false information’.

Children will see but not be heard…


Google has announced potentially far-reaching new policies about kids’ videos on YouTube. A Google blog post explains:

An update on kids and data protection on YouTube

From its earliest days, YouTube has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased. We’ve been taking a hard look at areas where we can do more to address this, informed by feedback from parents, experts, and regulators, including COPPA concerns raised by the U.S. Federal Trade Commission and the New York Attorney General that we are addressing with a settlement announced today.

New data practices for children’s content on YouTube

We are changing how we treat data for children’s content on YouTube. Starting in about four months, we will treat data from anyone watching children’s content on YouTube as coming from a child, regardless of the age of the user. This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this content entirely, and some features will no longer be available on this type of content, like comments and notifications. In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and we’ll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.
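
The mechanics Google describes amount to a per-video policy switch keyed on a single ‘made for kids’ signal, combining the creator’s self-declaration with a machine-learning fallback. A minimal sketch of that logic in Python follows; every name here is invented for illustration, and nothing reflects YouTube’s actual implementation:

    from dataclasses import dataclass

    @dataclass
    class Video:
        creator_declared_kids: bool  # creators must self-label kids content
        classifier_says_kids: bool   # assumed ML signal for clearly child-targeted videos

    def kids_content_treatment(video: Video) -> dict:
        made_for_kids = video.creator_declared_kids or video.classifier_says_kids
        return {
            # Every viewer of kids' content is treated as a child, whatever their account age.
            "treat_viewer_as_child": made_for_kids,
            # Personalized ads are dropped entirely on this content.
            "personalized_ads": not made_for_kids,
            # Features like comments and notifications are switched off.
            "comments_enabled": not made_for_kids,
            "notifications_enabled": not made_for_kids,
            # Data collection is limited to what the service needs to operate.
            "data_collection": "service_essential" if made_for_kids else "standard",
        }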

Improvements to YouTube Kids

We continue to recommend parents use YouTube Kids if they plan to allow kids under 13 to watch independently. Tens of millions of people use YouTube Kids every week but we want even more parents to be aware of the app and its benefits. We’re increasing our investments in promoting YouTube Kids to parents with a campaign that will run across YouTube. We’re also continuing to improve the product. For example, we recently raised the bar for which channels can be a part of YouTube Kids, drastically reducing the number of channels on the app. And we’re bringing the YouTube Kids experience to the desktop.

Investing in family creators

We know these changes will have a significant business impact on family and kids creators who have been building both wonderful content and thriving businesses, so we’ve worked to give impacted creators four months to adjust before changes take effect on YouTube. We recognize this won’t be easy for some creators and are committed to working with them through this transition and providing resources to help them better understand these changes.

We are also going to continue investing in the future of quality kids, family and educational content. We are establishing a $100 million fund, disbursed over three years, dedicated to the creation of thoughtful, original children’s content on YouTube and YouTube Kids globally.

Today’s changes will allow us to better protect kids and families on YouTube, and this is just the beginning. We’ll continue working with lawmakers around the world in this area, including as the FTC seeks comments on COPPA. And in the coming months, we’ll share details on how we’re rethinking our overall approach to kids and families, including a dedicated kids experience on YouTube.


After a long introduction about how open and diverse YouTube is, CEO Susan Wojcicki gets down to the nitty-gritty of how YouTube censorship works. She writes in a blog post:

Problematic content represents a fraction of one percent of the content on YouTube and we’re constantly working to reduce this even further. This very small amount has a hugely outsized impact, both in the potential harm for our users, as well as the loss of faith in the open model that has enabled the rise of your creative community. One assumption we’ve heard is that we hesitate to take action on problematic content because it benefits our business. This is simply not true — in fact, the cost of not taking sufficient action over the long term results in lack of trust from our users, advertisers, and you, our creators. We want to earn that trust. This is why we’ve been investing significantly over the past few years in the teams and systems that protect YouTube. Our approach towards responsibility involves four “Rs”:

  • We REMOVE content that violates our policy as quickly as possible. And we’re always looking to make our policies clearer and more effective, as we’ve done with pranks and challenges, child safety, and hate speech just this year. We aim to be thoughtful when we make these updates and consult a wide variety of experts to inform our thinking, for example, we talked to dozens of experts as we developed our updated hate speech policy. We also report on the removals we make in our quarterly Community Guidelines enforcement report. I also appreciate that when policies aren’t working for the creator community, you let us know. One area we’ve heard loud and clear needs an update is creator-on-creator harassment. I said in my last letter that we’d be looking at this and we will have more to share in the coming months.

  • We RAISE UP authoritative voices when people are looking for breaking news and information, especially during breaking news moments. Our breaking and top news shelves are available in 40 countries and we’re continuing to expand that number.

  • We REDUCE the spread of content that brushes right up against our policy line. Already, in the U.S. where we made changes to recommendations earlier this year, we’ve seen a 50% drop of views from recommendations to this type of content, meaning quality content has more of a chance to shine. And we’ve begun experimenting with this change in the UK, Ireland, South Africa and other English-language markets.

  • And we set a higher bar for what channels can make money on our site, REWARDING trusted, eligible creators. Not all content allowed on YouTube is going to match what advertisers feel is suitable for their brand, so we have to be sure they are comfortable with where their ads appear. This is also why we’re enabling new revenue streams for creators like Super Chat and Memberships. Thousands of channels have more than doubled their total YouTube revenue by using these new tools in addition to advertising.
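
Of the four Rs, the REDUCE step is the most algorithmic: borderline content is demoted in recommendation ranking rather than removed. A rough sketch of what such a demotion could look like, assuming a hypothetical borderline_score classifier output in the range 0 to 1; this is not YouTube’s actual ranking code:

    def rerank_recommendations(candidates, borderline_threshold=0.8, demotion_factor=0.1):
        # candidates: list of (video_id, relevance, borderline_score) tuples,
        # where borderline_score is an assumed policy-proximity classifier output.
        reranked = []
        for video_id, relevance, borderline_score in candidates:
            if borderline_score >= borderline_threshold:
                relevance *= demotion_factor  # demoted, not deleted
            reranked.append((video_id, relevance))
        reranked.sort(key=lambda pair: pair[1], reverse=True)
        return [video_id for video_id, _ in reranked]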


A little while ago there was an issue on YouTube about parody videos using well-known children’s cartoons as a baseline for adult humour. The videos were not in themselves outside of what YouTube allows, but were not suitable for the child audience of the original shows. YouTube has now responded as follows:

Content that contains mature or violent themes that explicitly targets younger minors and families in the title, description and/or tags will no longer be allowed on the platform. This content was previously age-restricted, but today we’re updating our child safety policies to better protect the family experience.

What content will be removed?

We’re removing misleading family content, including videos that target younger minors and families, that contain sexual themes, violence, obscenity, or other mature themes not suitable for young audiences. Here are some examples of content that will be removed:

  • A video with tags like “for children” featuring family friendly cartoons engaging in inappropriate acts like injecting needles.
  • Videos with prominent children’s nursery rhymes targeting younger minors and families in the video’s title, description or tags, that contain adult themes such as violence, sex, death, etc.
  • Videos that explicitly target younger minors and families with phrasing such as “for kids” or “family fun” in the video’s title, description and/or tags that contain vulgar language.

What content will be age-restricted?

Content that is meant for adults and not targeting younger minors and families won’t be removed, but it may be age-restricted. If you create adult content that could be confused with family entertainment, make sure your titles, descriptions, and tags match the audience you are targeting. Remember you can age-restrict your content upon upload if it’s intended for mature audiences. Here is an example of content that may still be allowed on YouTube but will be age-restricted:

  • Adult cartoons with vulgar language and/or violence that is explicitly targeted at adults.
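
Taken together, the policy amounts to a three-way decision: kid-targeting metadata plus mature themes means removal; mature themes alone mean age restriction; everything else stays up. A hedged sketch of that decision tree in Python, where the phrase lists and field names are invented for illustration rather than YouTube’s real signals:

    KID_PHRASES = ("for kids", "for children", "family fun", "nursery rhyme")
    MATURE_THEMES = {"violence", "sex", "death", "vulgar language"}

    def targets_kids(title: str, description: str, tags: list) -> bool:
        # Kid-targeting is inferred from the title, description and tags,
        # as the policy text above describes.
        metadata = " ".join([title, description, *tags]).lower()
        return any(phrase in metadata for phrase in KID_PHRASES)

    def moderate(video: dict) -> str:
        kids = targets_kids(video["title"], video["description"], video["tags"])
        mature = bool(MATURE_THEMES & set(video["themes"]))
        if mature and kids:
            return "remove"        # misleading family content
        if mature:
            return "age_restrict"  # adult content, correctly labelled
        return "allow"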

Instagram is adding an option for users to report posts they claim are false. The photo-sharing website is responding to increasing pressure to censor material that governments do not like. Posts rated as false by fact-checkers are then removed from search tools, such as Instagram’s Explore tab and hashtag search results.

The new report facility on Instagram is being initially rolled out only in the US.

Stephanie Otway, a Facebook company spokeswoman, said:

This is an initial step as we work towards a more comprehensive approach to tackling misinformation.

Posting false information is not banned on any of Facebook’s suite of social media services, but the company is taking steps to limit the reach of inaccurate information and warn users about disputed claims.
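
In practice, the mechanism described is a visibility filter rather than a deletion: a post rated false by a fact-checking partner stays on the author’s profile but is excluded from discovery surfaces. A minimal sketch, assuming a hypothetical fact_check_rating field rather than Instagram’s actual data model:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Post:
        post_id: str
        fact_check_rating: Optional[str]  # e.g. None or "false"; labels are assumed

    def visible_in_discovery(post: Post) -> bool:
        # Rated-false posts are demoted from Explore and hashtag search,
        # but remain visible on the author's profile and in followers' feeds.
        return post.fact_check_rating != "false"

    def explore_feed(candidates: list) -> list:
        return [p for p in candidates if visible_in_discovery(p)]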


Facebook has introduced a new censorship tool known as Group Quality to evaluate private groups and scrutinize them for ‘problematic content’. For a long time now, Facebook has been facing heat from the media over claims that its private groups feature harbours extremists and spreads ‘fake news’. As a result, the company published an article on newsroom.fb.com introducing the new feature:

Being in a private group doesn’t mean that your actions should go unchecked. We have a responsibility to keep Facebook safe, which is why our Community Standards apply across Facebook, including in private groups. To enforce these policies, we use a combination of people and technology — content reviewers and proactive detection. Over the last few years, we’ve invested heavily in both, including hiring more than 30,000 people across our safety and security teams.

Within this, a specialized team has been working on the Safe Communities Initiative: an effort that started two years ago with the goal of protecting people using Facebook Groups from harm. Made up of product managers, engineers, machine learning experts and content reviewers, this team works to anticipate the potential ways people can do harm in groups and develops solutions to minimize and prevent it. As the head of Facebook Groups, I want to explain how we’re making private groups safer by focusing on three key areas: proactive detection, tools for admins, and transparency and control for members.
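
The ‘combination of people and technology’ described above is, in essence, a triage pipeline: an automated classifier scores content, high-confidence violations are actioned automatically, and uncertain cases are queued for human reviewers. A hedged sketch of such routing, where the thresholds and names are invented since Facebook has not published its actual values:

    def route_group_post(violation_score: float) -> str:
        # violation_score: assumed output of a proactive-detection classifier in [0, 1].
        if violation_score >= 0.95:
            return "auto_remove"          # clear policy violation, actioned by the system
        if violation_score >= 0.60:
            return "human_review_queue"   # uncertain case, sent to a content reviewer
        return "leave_up"                 # below the proactive-detection threshold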

On the plus side, Facebook has updated the settings that define the access and visibility of groups, which are much clearer than previous incarnations.

Critics say that Facebook’s move will not curb misinformation and fake news; on the contrary, it may push such content deeper underground, making it harder for censors to filter out or remove it from the site.