Archive for the ‘Internet Social Media’ Category

Read more me_bbc_trust.htm at MelonFarmers.co.uk

A number of TV broadcasters, mobile networks and internet service providers have urged the UK government to introduce a new internet censor for social media companies. In a letter to The Sunday Telegraph, executives from the BBC, ITV and Channel 4, as well as Sky, BT and TalkTalk, called for a new censor to help tackle fake news, child exploitation, harassment and other growing issues online. The letter said:

We do not think it is realistic or appropriate to expect internet and social media companies to make all the judgment calls about what content is and is not acceptable, without any independent oversight.

There is an urgent need for independent scrutiny of the decisions taken, and greater transparency.

This is not about censoring the internet: [ …BUT… ] it is about making the most popular internet platforms safer, by ensuring there is accountability and transparency over the decisions these private companies are already taking.

The UK government is aware of the problems on Facebook, Twitter and other social media platforms. Last October, it introduced an Internet Safety Green Paper as part of its digital charter manifesto pledge. Following a consultation period, the then digital secretary Matt Hancock (he's now the health secretary) said a white paper would be introduced later in 2018.

And in a comment suggesting that maybe the call is more about righting market imbalances than concern over societal problems, the letter noted that its signatories all pay high and fair levels of tax. The letter also notes that broadcasters and telcos are held to account by Ofcom, while social media firms are not, which again gives the internet companies an edge in the market.

Read more uk_internet_censors.htm at MelonFarmers.co.uk

Parliament's Digital, Culture, Media and Sport (DCMS) Committee has been investigating disinformation and fake news following the Cambridge Analytica data scandal, and is claiming that the UK faces a democratic crisis due to the spread of pernicious views and the manipulation of personal data. In its first report it will suggest social media companies should face tighter censorship. It also proposes measures to combat election interference.

The report claims that the relentless targeting of hyper-partisan views, which play to the fears and prejudices of people, in order to influence their voting plans is a threat to democracy.

The report was very critical of Facebook, which has been under increased scrutiny following the Cambridge Analytica data scandal.

Facebook has hampered our efforts to get information about their company throughout this inquiry. It is as if it thinks that the problem will go away if it does not share information about the problem, and reacts only when it is pressed, the report said.  It provided witnesses who have been unwilling or unable to give full answers to the committee’s questions.

The committee suggests:

1. Social media sites should be held responsible for harmful content on their services

Social media companies cannot hide behind the claim of being merely a ‘platform’, claiming that they are tech companies and have no role themselves in regulating the content of their sites, the committee said.

They continually change what is and is not seen on their sites, based on algorithms and human intervention.

They reward what is most engaging, because engagement is part of their business model and their growth strategy. They have profited greatly by using this model.

The committee suggested a new category of tech company should be created, which was not necessarily a platform or a publisher but something in between.

This should establish clear legal liability for the tech companies to act against harmful and illegal content on their platforms, the report said.

2. The rules on political campaigns should be made fit for the digital age

The committee said electoral law needed to be updated to reflect changes in campaigning techniques.

It suggested:

- creating a public register for political advertising, so that anybody can see what messages are being distributed
- online political advertisements should have a digital imprint stating who was responsible, as is required with printed leaflets and advertisements
- social media sites should be held responsible for interference in elections by malicious actors
- electoral fraud fines should be increased from a maximum of £20,000 to a percentage of organisations' annual turnover

3. Technology companies should be taxed to fund education and regulation

Increased regulation of social media sites would result in more work for organisations such as the Electoral Commission and Information Commissioner’s Office (ICO).

The committee suggested a levy on tech companies should fund the expanded responsibilities of the regulators.

The money should also be spent on educational programmes and a public information campaign, to help people identify disinformation and fake news.

4. Social networks should be audited

The committee warned that fake accounts on sites such as Facebook and Twitter not only damage the user experience, but potentially defraud advertisers.

It suggested an independent authority such as the Competition and Markets Authority should audit the social networks.

It also said security mechanisms and algorithms used by social networks should be available for audit by a government regulator, to ensure they are operating responsibly.

Read more uk_internet_censors.htm at MelonFarmers.co.uk

The Telegraph reveals:

The government is braced for criticism next week over an anticipated delay in its prospective curbs on under 18s’ access to hardcore porn sites.

The current timetable, culminating in the implementation of UK porn censorship by the end of the year, required that the final censorship guidelines be presented to MPs before they go on holiday on Thursday. They would then be ready to approve them when they return to work in the autumn. It sounds like the guidelines won't be ready for publishing by this Thursday.

The BBFC noted that it was due to send the results of the public consultation, along with the BBFC censorship rules, to the government by late May of this year, so presumably the government is still pondering what to do.

‘Best practice’ just like Facebook and Cambridge Analytica

Back in April, when the BBFC published its rather naive draft rules for public consultation, its prose tried to suggest that we can trust age verifiers with our most sensitive porn browsing data because they will voluntarily follow 'best practice'. But in light of a major industry player, in this case Facebook, allowing Cambridge Analytica to so dramatically abuse our personal data, the hope that these people will follow 'best practice' is surely forlorn.

GDPR

And then there was the implementation of GDPR. The BBFC seemed to think that this was all that was needed to keep our data safe. But when it comes down to it, all GDPR seems to have done is to train us, like Pavlov's dogs, to endlessly tick the consent box for all these companies to do what the hell they like with our data.

Ingenious kids

Then there was a nice little piece of research this week that revealed that network-level ISP filtering of porn has next to no impact on preventing young porn seekers from obtaining their kicks. The research seems to suggest that it is not enough to block porn for one lad, because he has 30 mates to whose houses he can go round to surf the web; or else it only takes a few lads being able to download porn for it to soon be circulated to the whole community on a memory stick or whatever.

Mass buy-in

I guess the government is finding it tough to find age verification ideas that are both convenient for adult users and robust about preventing access by under 18s. I think the government needs to find a solution that will achieve a mass buy-in by adult users. If adults don't want to play ball with the age verification process, then the first fallback position is for them to use a VPN. I know from my use of VPNs that they are very good, and once turned on, I find they get left on all day. I am sure millions of people using VPNs would not go down well with the security services on the trail of more serious crimes than underage porn viewing.

I think the most likely age verification method proposed to date that has a chance of a mass buy-in is the AVSecure system of anonymously buying a porn access card from a local shop and using a PIN, perhaps typed in once a day. Users would then be able to browse without further hassle on all participating websites. But I think it would require a certain pragmatism from government to accept this idea, as it would be so open to over 18s buying a card and then selling the PIN to under 18s, or perhaps sons nicking their dad's PIN when they see the card lying around (or even installing a keyboard logger to nick it).
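Mechanically, the card-and-PIN idea described above boils down to a once-a-day token check on each participating site, with no identity attached. A minimal sketch under those assumptions (this is not AVSecure's actual protocol; the PIN, names and one-day window are all invented for illustration):

```python
# Sketch of the anonymous card-plus-PIN idea: a site checks the PIN once,
# then trusts that session for the rest of the day.
# NOT AVSecure's real protocol; all values and names are invented.

import time

VALID_PINS = {"4821"}        # PINs sold anonymously over the counter (illustrative)
DAY_SECONDS = 24 * 60 * 60   # re-entry required roughly once a day

_sessions = {}  # session_id -> timestamp of last successful PIN entry

def verify_pin(session_id: str, pin: str, now: float = None) -> bool:
    """Record a successful PIN entry for this session."""
    if pin in VALID_PINS:
        _sessions[session_id] = now if now is not None else time.time()
        return True
    return False

def is_verified(session_id: str, now: float = None) -> bool:
    """True if this session entered a valid PIN within the last day."""
    now = now if now is not None else time.time()
    last = _sessions.get(session_id)
    return last is not None and (now - last) < DAY_SECONDS

verify_pin("sess1", "4821", now=0)
print(is_verified("sess1", now=3600))             # True: same day
print(is_verified("sess1", now=DAY_SECONDS + 1))  # False: PIN needed again
```

The weakness the paragraph above points to is visible right in the sketch: the PIN carries no identity, so anyone who learns it, son or stranger, passes the check.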

The government would probably like something more robust, where PINs have to be matched to people's proven ID. But I think porn users would be stupid to hand over their ID to anyone on the internet who can monitor porn use. The risks are enormous: reputational damage, blackmail, fraud etc, and in this nasty PC world, the penalty for the most trivial of moral transgressions is to lose your job or even career.

A path to failure

The government is also setting out on a path on which it can do nothing but fail. The Telegraph piece mentioned above is already lambasting the government for not applying the rules to social media websites such as Twitter, which host a fair bit of porn. The Telegraph comments:

Children will be free to watch explicit X-rated sex videos on social media sites because of a loophole in a new porn crackdown, Britain’s chief censor has admitted.

David Austin, chief executive of the BBFC, has been charged by ministers with enforcing new laws that require people to prove they are over 18 to access porn sites. However, writing for telegraph.co.uk, Mr Austin admitted it would not be a silver bullet, as online porn on sites such as Facebook and YouTube would escape the age restrictions. Social media companies will not be required to carry out age verification for pornographic content on their platforms. He said it was a matter for the government to review this position.

Read more ow.htm at MelonFarmers.co.uk

Sharon White, the CEO of Ofcom, has put her case to be the British internet news censor, disgracefully from behind the paywalled website of The Times. White says Ofcom has done research showing how little users trust what they read on social media. She said that only 39% consider social media to be a trustworthy news source, compared with 63% for newspapers and 70% for TV.

But then again many people don’t much trust the biased moralising from the politically correct mainstream media, including the likes of Ofcom.

White claims social media platforms need to be more accountable in how they curate and police content on their platforms, or face regulation.

In reality, Facebook's algorithm seems pretty straightforward: it just gives readers more of what they have liked in the past. But of course the powers that be don't like people choosing their own media sources; they would much prefer that the BBC, the Guardian, or Ofcom do the choosing.
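That "more of what you liked" behaviour is, at heart, just ranking candidate posts by similarity to a user's past likes. A toy sketch of the idea (the topic tags and posts below are invented; Facebook's real system is obviously far more elaborate and not public):

```python
# Toy "more of what you liked" feed ranking: score each candidate post
# by how often the user has liked its topics before. All data is invented.

from collections import Counter

def rank_feed(liked_history, candidates):
    """Sort candidates so posts matching previously liked topics come first."""
    liked_topics = Counter(t for post in liked_history for t in post["topics"])
    def score(post):
        return sum(liked_topics[t] for t in post["topics"])
    return sorted(candidates, key=score, reverse=True)

history = [
    {"id": 1, "topics": ["football", "tv"]},
    {"id": 2, "topics": ["football"]},
]
candidates = [
    {"id": 10, "topics": ["politics"]},
    {"id": 11, "topics": ["football", "tv"]},
]
print([p["id"] for p in rank_feed(history, candidates)])  # [11, 10]
```

The self-reinforcing loop critics complain about falls straight out of this: whatever you engage with is what you get shown more of.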

Sharon White, wrote in the Times:

The argument for independent regulatory oversight of [large online players] has never been stronger.

In practice, this would place much greater scrutiny on how effectively the online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met.

She continued, disgracefully revealing her complete contempt of the British people:

Many people admit they simply don’t have the time or inclination to think critically when engaging with news, which has important implications for our democracy.

White joins a growing number of the establishment elite arguing that social media needs censorship. The government has frequently suggested as much, with Matt Hancock, then digital, culture, media and sport secretary, telling Facebook in April:

Social media companies are not above the law and will not be allowed to shirk their responsibilities to our citizens.

Read more me_internet.htm at MelonFarmers.co.uk

One moment Facebook's algorithms are expected to be able to automatically distinguish terrorism support from news reporting or satire; the next moment, they demonstrate exactly how crap they are by failing to distinguish hate speech from a profound, and nation-establishing, statement of citizens' rights.

Facebook’s algorithms removed parts of the US Declaration of Independence from the social media site after determining they represented hate speech.

The issue came to light when a local paper in Texas began posting excerpts of the historic text on its Facebook page each day in the run up to the country’s Independence Day celebrations on July 4.

However when The Liberty County Vindicator attempted to post its tenth extract, which refers to merciless Indian savages, on its Facebook page the paper received a notice saying the post went against its standards on hate speech.

Facebook later ‘apologised’ as it has done countless times before and allowed the posting.

Read more inafrica.htm at MelonFarmers.co.uk

Uganda has just introduced a significant tax on social media usage. It is set at 200 shillings a day, which adds up to about 3% of the average annual income if used daily. Use of a long list of websites and apps, including Facebook, WhatsApp, Twitter and Tinder, triggers the daily tax, which is collected through billing by ISPs.
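The "about 3%" figure can be sanity-checked with rough arithmetic. The income figure below is an assumption for illustration only (roughly Uganda's GDP per capita at the time, not a number from the article):

```python
# Rough check of the "about 3% of average annual income" claim.
# The income figure is an ASSUMPTION for illustration, not from the article.

DAILY_TAX_UGX = 200                 # social media tax per day of use
AVG_ANNUAL_INCOME_UGX = 2_400_000   # assumed average annual income (illustrative)

annual_tax = DAILY_TAX_UGX * 365    # 73,000 UGX if used every day of the year
share_of_income = annual_tax / AVG_ANNUAL_INCOME_UGX

print(f"Annual tax: {annual_tax:,} UGX")          # Annual tax: 73,000 UGX
print(f"Share of income: {share_of_income:.1%}")  # Share of income: 3.0%
```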

And as you might expect, Ugandan internet users are turning to VPNs so that ISPs can't detect access to the taxed apps and websites.

In response, the government says it has ordered local ISPs to begin blocking VPNs. In a statement, Uganda Communications Commission Executive Director Godfrey Mutabazi said that internet service providers would be ordered to block VPNs to prevent citizens from avoiding the social media tax.

Mutabazi told Dispatch that ISPs are already taking action to prevent VPNs from being accessible but since there are so many, it won’t be possible to block them all. In the meantime, the government is trying to portray VPNs as more expensive to use than the tax. In a post on Facebook this morning, Mutabazi promoted the tax as the sensible economic option.

It appears that many Ugandans are outraged at the prospect of yet another tax and see VPN use as a protest, despite any additional cost. Opposition figures have already called for a boycott, with support coming in from all corners of society. The government appears unmoved, however. Frank Tumwebaze, Minister of Information Technology and Communications, said:

If we tax essentials like water, why not social media?

Read more inme.htm at MelonFarmers.co.uk

Twitter has outlined further censorship measures in a blog post:

In March, we introduced our new approach to improve the health of the public conversation on Twitter. One important issue we’ve been working to address is what some might refer to as “trolls.” Some troll-like behavior is fun, good and humorous. What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our policies, and, in those cases, we take action on them. Others don’t but are behaving in ways that distort the conversation.

To put this in context, less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what’s reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large — and negative — impact on people’s experience on Twitter. The challenge for us has been: how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?

A New Approach

Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we’re tackling issues of behaviors that distort and detract from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we’re able to improve the health of the conversation, and everyone’s experience on Twitter, without waiting for people who use Twitter to report potential issues to us.

There are many new signals we’re taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet and mention accounts that don’t follow them, or behavior that might indicate a coordinated attack. We’re also looking at how accounts are connected to those that violate our rules and how they interact with each other.

These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn’t violate our policies, it will remain on Twitter, and will be available if you click on “Show more replies” or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.

Results

In our early testing in markets around the world, we’ve already seen this new approach have a positive impact, resulting in a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations. That means fewer people are seeing Tweets that disrupt their experience on Twitter.

Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone’s Twitter experience better. This technology and our team will learn over time and will make mistakes. There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We’ll continue to be open and honest about the mistakes we make and the progress we are making. We’re encouraged by the results we’ve seen so far, but also recognize that this is just one step on a much longer journey to improve the overall health of our service and your experience on it.
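The behavioural-signal approach Twitter describes, score accounts on conduct rather than content, then down-rank rather than remove, could be illustrated like this. To be clear, every signal name and weight below is invented; Twitter has not published its actual model:

```python
# Toy illustration of combining behavioral signals into a "disruption"
# score used to down-rank, not remove, replies. Signal names and weights
# are invented; Twitter's real system is not public.

SIGNAL_WEIGHTS = {
    "email_unconfirmed": 1.0,     # account never confirmed its email address
    "bulk_signup": 2.0,           # same person registered many accounts at once
    "unsolicited_mentions": 1.5,  # repeatedly mentions accounts that don't follow back
    "linked_to_violators": 2.5,   # connected to accounts that violate the rules
}

def disruption_score(signals: dict) -> float:
    """Weighted sum of boolean signals; higher = more likely down-ranked."""
    return sum(SIGNAL_WEIGHTS[name] for name, fired in signals.items() if fired)

def rank_replies(replies):
    """Sort replies so low-scoring (healthier) accounts appear first.

    Nothing is removed: high scorers would simply sit behind
    'Show more replies', mirroring the behaviour described above.
    """
    return sorted(replies, key=lambda r: disruption_score(r["signals"]))

replies = [
    {"user": "a", "signals": {"email_unconfirmed": True, "bulk_signup": True}},
    {"user": "b", "signals": {"unsolicited_mentions": False}},
]
print([r["user"] for r in rank_replies(replies)])  # ['b', 'a']
```

Note the design choice the blog post emphasises: because the content itself doesn't violate policy, the score only affects ordering and visibility, never deletion.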