Posts Tagged ‘Twitter’

Read more pc_news.htm at MelonFarmers.co.uk

Deadnaming and misgendering could now get you a suspension from Twitter as it looks to shore up its safeguarding policy for people in the protected transgender category. Twitter's recently updated censorship policy now reads:

Repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone

We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.

According to the Oxford English Dictionary, misgendering means:

Refer to (someone, especially a transgender person) using a word, especially a pronoun or form of address, that does not correctly reflect the gender with which they identify.

According to thegayuk.com:

Deadnaming is when a person refers to someone by a previous name, it could be done with malice or by accident. It mostly affects transgender people who have changed their name during their transition.


Twitter is consulting its users about new censorship rules banning 'dehumanising speech', in which people are compared to animals or objects. It said language that made people seem less than human had repercussions.

The social network already has a hateful-conduct policy, but it is applied selectively, allowing some types of insulting language to remain online. For example, countless tweets describing middle-aged white men as gammon can be found on the platform.

At present it bans insults based on a person's race, ethnicity, nationality, sexual orientation, sex, gender, religious beliefs, age, disability or medical condition, but there is an unwritten secondary rule which means that the prohibition excludes groups not favoured under the conventions of political correctness.

Twitter said it intended to prohibit dehumanising language towards people in an identifiable group because some researchers claim it could lead to real-world violence. Asked whether calling men gammon would count as dehumanising speech, the company said it would first seek the views of its members. Twitter’s announcement reads in part:

For the last three months, we have been developing a new policy to address dehumanizing language on Twitter. Language that makes someone less than human can have repercussions off the service, including normalizing serious violence. Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease), but there are still Tweets many people consider to be abusive, even when they do not break our rules. Better addressing this gap is part of our work to serve a healthy public conversation.

With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target. Many scholars have examined the relationship between dehumanization and violence. For example, Susan Benesch has described dehumanizing language as a hallmark of dangerous speech, because it can make violence seem acceptable, and Herbert Kelman has posited that dehumanization can reduce the strength of restraining forces against violence.

Twitter's critics are now using the hashtag #verifiedhate to highlight examples of what they believe to be bias in what the platform judges to be unacceptable. The gammon insult gained popularity after a collage of contributors to the BBC's Question Time programme – each middle-aged, white and male – was shared along with the phrase Great Wall of Gammon in 2017.

The scope of identifiable groups covered by the new rules will be decided after a public consultation that will run until 9 October.

PS: before filling in the consultation form, note that it was broken for me and didn't accept my submission. For the record, Melon Farmer tried to submit the comment:

This is yet another policy that restricts free speech. As always, the vagueness of the rules will allow Twitter, or its moderators, to arbitrarily apply its own morality anyway. But not to worry, the richness of language will always enable people to dream up new ways to insult others.


Twitter has outlined further censorship measures in a blog post:

In March, we introduced our new approach to improve the health of the public conversation on Twitter. One important issue we’ve been working to address is what some might refer to as “trolls.” Some troll-like behavior is fun, good and humorous. What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our policies, and, in those cases, we take action on them. Others don’t but are behaving in ways that distort the conversation.

To put this in context, less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what’s reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large — and negative — impact on people’s experience on Twitter. The challenge for us has been: how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?

A New Approach

Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we’re tackling issues of behaviors that distort and detract from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we’re able to improve the health of the conversation, and everyone’s experience on Twitter, without waiting for people who use Twitter to report potential issues to us.

There are many new signals we’re taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet and mention accounts that don’t follow them, or behavior that might indicate a coordinated attack. We’re also looking at how accounts are connected to those that violate our rules and how they interact with each other.

These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn’t violate our policies, it will remain on Twitter, and will be available if you click on “Show more replies” or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.

Results

In our early testing in markets around the world, we’ve already seen this new approach have a positive impact, resulting in a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations. That means fewer people are seeing Tweets that disrupt their experience on Twitter.

Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone’s Twitter experience better. This technology and our team will learn over time and will make mistakes. There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We’ll continue to be open and honest about the mistakes we make and the progress we are making. We’re encouraged by the results we’ve seen so far, but also recognize that this is just one step on a much longer journey to improve the overall health of our service and your experience on it.


The Twitter account of German satirical magazine Titanic was blocked after it parodied anti-Muslim comments by AfD MP Beatrix von Storch. She accused police of trying to appease the barbaric, Muslim, rapist hordes of men by putting out a tweet in Arabic.

On Tuesday night, the magazine published a tweet parodying von Storch, saying:

The last thing that I want is mollified barbarian, Muslim, gang-raping hordes of men.

Titanic said on Wednesday its Twitter account had been blocked over the message, presumably as a result of a new law requiring social media sites to immediately block hateful comments under threat of massive fines. The law allows no time, and the companies have no economic incentive, for assessing the merits of censorship claims, so social media companies are simply censoring everything on demand, just in case.


A parliamentary committee is trying to get heavy with Facebook and Twitter over the release of details about Russian election interference. Damian Collins, chair of the Culture, Media and Sport select committee, which is looking into so-called fake news, has given the companies until 18 January to correct their failure to hand over information he requested about Russian misinformation campaigns on their platforms. He said:

There has to be a way of scrutinising the procedures that companies like Facebook put in place to help them identify known sources of disinformation, particularly when it’s politically motivated and coming from another country.

They need to be able to tell us what they can do about it. And what we need to be able to do is say to the companies: we recognise that you are best placed to monitor what is going on your own site and to get the balance right in taking action against it but also safeguarding the privacy of users.

But what there has to be then is some mechanism of saying: if you fail to do that, if you ignore requests to act, if you fail to police the site effectively and deal with highly problematic content, then there has to be some sort of sanction against you.

In a letter to Twitter this month, Collins wrote:

The information you have now shared with us is completely inadequate … It seems odd that so far we have received more information about activities that have taken place on your platform from journalists and academics than from you.


There was plenty of strong language flying around on Twitter in response to the Harvey Weinstein scandal. Twitter got a bit confused about who was harassing whom, and ended up suspending Weinstein critic Rose McGowan for harassment. Twitter ended up being boycotted over its wrong call, and so Twitter bosses have been putting their heads together to do something. Wired has got hold of an email outlining an expansion of content liable to Twitter censorship, and also more severe sanctions for errant tweeters. Twitter's head of safety policy wrote of new measures to be rolled out in the coming weeks:

Non-consensual nudity

Our definition of “non-consensual nudity” is expanding to more broadly include content like upskirt imagery, “creep shots,” and hidden camera content. Given that people appearing in this content often do not know the material exists, we will not require a report from a target in order to remove it.

While we recognize there’s an entire genre of pornography dedicated to this type of content, it’s nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually. We would rather error on the side of protecting victims and removing this type of content when we become aware of it.

Unwanted sexual advances

Pornographic content is generally permitted on Twitter, and it’s challenging to know whether or not sexually charged conversations and/or the exchange of sexual media may be wanted. To help infer whether or not a conversation is consensual, we currently rely on and take enforcement action only if/when we receive a report from a participant in the conversation.

We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation.

Hate symbols and imagery (new)

We are still defining the exact scope of what will be covered by this policy. At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media (similar to how we handle and enforce adult content and graphic violence). More details to come.

Violent groups (new)

We are still defining the exact scope of what will be covered by this policy. At a high level, we will take enforcement action against organizations that use/have historically used violence as a means to advance their cause. More details to come here as well.

Tweets that glorify violence (new)

We already take enforcement action against direct violent threats (“I’m going to kill you”), vague violent threats (“Someone should kill you”) and wishes/hopes of serious physical harm, death, or disease (“I hope someone kills you”). Moving forward, we will also take action against content that glorifies (“Praise be to for shooting up. He’s a hero!”) and/or condones (“Murdering makes sense. That way they won’t be a drain on social services”). More details to come.

Banned on Twitter

If you’re looking to follow news and advocacy about an anticipated Vermont legislature vote this week on legalizing marijuana, a search for the latest tweets that use the combined terms Vermont and marijuana will for many Twitter users yield zero results.

Same goes for searches for tweets using the terms pot, weed or cannabis. The latest results for jackass and jerk, words generally printed without censorship by news outlets, also yield a blank page with a message claiming: Nothing came up for that search, which is a little weird. Maybe check what you searched for and try again.

The omissions are examples of a new censorship system introduced by Twitter, with users required to opt out of a filter to see uncensored results.

Top results for restricted terms still appear, but results for the most recent posts and for photos, videos and news content tabs do not.