Read more inau.htm at MelonFarmers.co.uk

We Happy Few is a 2018 Canadian survival horror game from Compulsion Games.

We Happy Few is the tale of a plucky bunch of moderately terrible people trying to escape from a lifetime of cheerful denial in the city of Wellington Wells. In this alternative 1960s England, conformity is key. You’ll have to fight or blend in with the drug-addled inhabitants, most of whom don’t take kindly to people who won’t abide by their not-so-normal rules.

In May 2018, the Australian Censorship Board announced that We Happy Few has been banned in Australia.

The censors noted that the game’s depictions of drug use related to incentives and rewards, in this case the beneficial effects of using Joy pills, could not be accommodated within the R 18+ category.

The Soma-like drug Joy is used in the game to distract the citizens of Wellington Wells from the Orwellian reality they live in.

There’s no word yet on whether Compulsion Games will make cuts to the game to satisfy the Board, but that is often the outcome.

The game is set for release on PlayStation 4, Xbox One and PC this summer.

Read more inus.htm at MelonFarmers.co.uk

Democrats in the United States House of Representatives have gathered 90 of the 218 signatures they’ll need to force a vote on whether or not to roll back net neutrality rules. Federal Communications Commission Chair Ajit Pai has already predicted that the House effort will fail, and large telecommunications companies have publicly expressed their anger at last Wednesday’s Senate vote to keep the Obama-era open internet rules in place.

Led by Pai, a Donald Trump appointee, the FCC voted 3-2 along party lines in December to scrap the net neutrality regulations, effectively creating an internet landscape dominated by whichever companies can pay the most to get into the online fast lane.

Telecommunications companies could also choose to block some sites simply based on their content, a threat to which the online porn industry would be especially vulnerable at a time when five states have either passed or are considering legislation labeling porn a public health hazard.

While the House Republican leadership has taken the position that the net neutrality issue should not even come to a vote, on May 17 Pennsylvania Democrat Mike Doyle introduced a discharge petition that would force the issue to the House floor. A discharge petition needs the signatures of 218 House members to succeed in forcing the vote. As of Monday morning, May 21, Doyle’s petition had received 90 signatures. The effort would need all 193 House Democrats plus 25 Republicans to sign on in order to bring the net neutrality rollback to the House floor.

Read more me_internet.htm at MelonFarmers.co.uk

For its updated news application, Google claims it is using artificial intelligence as part of an effort to weed out disinformation and feed users viewpoints beyond their own filter bubble.

Google chief Sundar Pichai, who unveiled the updated Google News earlier this month, said the app now surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. It marks Google’s latest effort to be at the centre of online news and includes a new push to help publishers get paid subscribers through the tech giant’s platform.

In reality Google has just banned news from the likes of the Daily Mail, whilst all the ‘trusted sources’ are politically correct papers such as the Guardian and Independent.

According to product chief Trystan Upstill, the news app uses the best of artificial intelligence to find the best of human intelligence – the great reporting done by journalists around the globe. While the app will enable users to get personalised news, it will also include top stories for all readers, aiming to break the so-called filter bubble of information designed to reinforce people’s biases.

Nicholas Diakopoulos, a Northwestern University professor specialising in computational and data journalism, said the impact of Google’s changes remains to be seen. Algorithmic and personalised news can be positive for engagement, he said, but may only benefit a handful of news organisations: his research found that Google concentrates its attention on a relatively small number of publishers. Google’s effort to identify and prioritise trusted news sources may also be problematic, according to Diakopoulos. Maybe it’s good for the big guys, or the publishers who have figured out how to game the algorithm, he said. But what about the local news sites, what about the new news sites that don’t have a long track record?

I tried it out and no matter how many times I asked it not to provide stories about the royal wedding and the cup final, it just served up more of the same. And indeed as Diakopoulos said, all it wants to do is push news stories from the politically correct papers, most notably the Guardian. I can’t see it proving very popular. I’d rather have an app that feeds me what I actually like, not what I should like.

Read more uk_internet_censors.htm at MelonFarmers.co.uk

Culture Secretary Matt Hancock has issued the following press release from the Department for Digital, Culture, Media & Sport:

New laws to make social media safer

New laws will be created to make sure that the UK is the safest place in the world to be online, Digital Secretary Matt Hancock has announced.

The move is part of a series of measures included in the government’s response to the Internet Safety Strategy green paper, published today.

The Government has been clear that much more needs to be done to tackle the full range of online harm.

Our consultation revealed users feel powerless to address safety issues online and that technology companies operate without sufficient oversight or transparency. Six in ten people said they had witnessed inappropriate or harmful content online.

The Government is already working with social media companies to protect users and while several of the tech giants have taken important and positive steps, the performance of the industry overall has been mixed.

The UK Government will therefore take the lead, working collaboratively with tech companies, children’s charities and other stakeholders to develop the detail of the new legislation.

Matt Hancock, DCMS Secretary of State said:

Digital technology is overwhelmingly a force for good across the world and we must always champion innovation and change for the better. At the same time I have been clear that we have to address the Wild West elements of the Internet through legislation, in a way that supports innovation. We strongly support technology companies to start up and grow, and we want to work with them to keep our citizens safe.

People increasingly live their lives through online platforms so it’s more important than ever that people are safe and parents can have confidence they can keep their children from harm. The measures we’re taking forward today will help make sure children are protected online and balance the need for safety with the great freedoms the internet brings just as we have to strike this balance offline.

DCMS and Home Office will jointly work on a White Paper with other government departments, to be published later this year. This will set out legislation to be brought forward that tackles a range of both legal and illegal harms, from cyberbullying to online child sexual exploitation. The Government will continue to collaborate closely with industry on this work, to ensure it builds on progress already made.

Home Secretary Sajid Javid said:

Criminals are using the internet to further their exploitation and abuse of children, while terrorists are abusing these platforms to recruit people and incite atrocities. We need to protect our communities from these heinous crimes and vile propaganda and that is why this Government has been taking the lead on this issue.

But more needs to be done and this is why we will continue to work with the companies and the public to do everything we can to stop the misuse of these platforms. Only by working together can we defeat those who seek to do us harm.

The Government will be considering where legislation will have the strongest impact, for example whether transparency or a code of practice should be underwritten by legislation, but also a range of other options to address both legal and illegal harms.

We will work closely with industry to provide clarity on the roles and responsibilities of companies that operate online in the UK to keep users safe.

The Government will also work with regulators, platforms and advertising companies to ensure that the principles that govern advertising in traditional media — such as preventing companies targeting unsuitable advertisements at children — also apply and are enforced online.

Read more inus.htm at MelonFarmers.co.uk

The Entertainment Software Rating Board has confirmed it will cease offering free age and content ratings for online video games next month. The Short Form ratings process the ESRB currently offers for download-only and online games will be discontinued in June, though an exact date has not yet been set. The ESRB will continue to offer the higher cost Long Form ratings, primarily used for physical/boxed games.

Developers feared that they would be forced to pay for the higher cost rating, as otherwise they would not be allowed to release their titles on key platforms, like Xbox, that demand a content rating.

However the ESRB’s official Twitter feed responded that:

Developers of digital games and apps will still be able to obtain ESRB ratings at no cost through the IARC rating process. The Microsoft Store deployed IARC years ago and has committed to making IARC ratings accessible to all Xbox developers. So, developers should not be concerned.

The International Age Rating Coalition (IARC) is a newer system for obtaining age ratings for multiple territories and storefronts through a single process. While the ESRB singled out the Xbox Store, IARC is also accepted on Google Play, the Nintendo eShop, and the Oculus Store.

There is currently no word on when this will apply to the PlayStation Store, but an IARC press release in December 2017 said the platform would be added soon.

Read more me_asa.htm at MelonFarmers.co.uk

ASA’s code-writing arm, CAP, has launched a public consultation on a new rule to tackle harmful gender stereotypes in ads, as well as on guidance to advertisers on how the new rule is likely to be interpreted in practice. The purpose of today’s announcement is to make public the proposed rule and guidance, which includes examples of gender portrayals which are likely to fall foul of the new rule.

The consultation proposes the introduction of the following new rule to the ad codes which will cover broadcast and non-broadcast media:

Advertisements must not include gender stereotypes that are likely to cause harm, or serious or widespread offence.

The consultation comes after the ASA published a report last year, Depictions, Perceptions and Harm, which provided an evidence-based case for stronger regulation of ads that feature certain kinds of gender stereotypical roles and characteristics. These are ads that have the potential to cause harm by contributing to the restriction of people’s choices, aspirations and opportunities, which can affect the way people interact with each other and the way they view their own potential.

We already apply rules on offence and social responsibility to ban ads that include gender stereotypes on grounds of objectification, inappropriate sexualisation and depiction of unhealthily thin body images.

The evidence does not demonstrate that the use of gender stereotypes is always problematic or that the use of seriously offensive or potentially harmful stereotypes in advertising is endemic. The rule and guidance therefore seek to identify specific harms that should be prevented, rather than banning gender stereotypes outright.

The consultation on guidance to support the proposed new rule change provides examples of scenarios likely to be problematic in future ads. For example:

  • An ad that depicts a man with his feet up and family members creating mess around a home while a woman is solely responsible for cleaning up the mess.

  • An ad that depicts a man or a woman failing to achieve a task specifically because of their gender e.g. a man’s inability to change nappies; a woman’s inability to park a car.

  • Where an ad features a person with a physique that does not match an ideal stereotypically associated with their gender, the ad should not imply that their physique is a significant reason for them not being successful, for example in their romantic or social lives.

  • An ad that seeks to emphasise the contrast between a boy’s stereotypical personality (e.g. daring) with a girl’s stereotypical personality (e.g. caring) needs to be handled with care.

  • An ad aimed at new mums which suggests that looking attractive or keeping a home pristine is a priority over other factors such as their emotional wellbeing.

  • An ad that belittles a man for carrying out stereotypically “female” roles or tasks.

Ella Smillie, gender stereotyping project lead, Committees of Advertising Practice, said:

“Our review of the evidence strongly indicates that particular forms of gender stereotypes in ads can contribute to harm for adults and children by limiting how people see themselves and how others see them and the life decisions they take. The set of standards we’re proposing aims to tackle harmful gender stereotypes in ads while ensuring that creative freedom expressed within the rules continues to be protected.”

Director of the Committees of Advertising Practice, Shahriar Coupal said:

“Amid wide-ranging views about the portrayal of gender in ads is evidence that certain gender stereotypes have the potential to cause harm or serious offence. That’s why we’re proposing a new rule and guidance to restrict particular gender stereotypes in ads where we believe there’s an evidence-based case to do so. Our action is intended to help tackle the harms identified in the ASA’s recent report on the evidence around gender portrayal in ads.”

The consultation closes on 26 July 2018.

Read more inme.htm at MelonFarmers.co.uk

Twitter has outlined further censorship measures in a blog post:

In March, we introduced our new approach to improve the health of the public conversation on Twitter. One important issue we’ve been working to address is what some might refer to as “trolls.” Some troll-like behavior is fun, good and humorous. What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our policies, and, in those cases, we take action on them. Others don’t but are behaving in ways that distort the conversation.

To put this in context, less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what’s reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large — and negative — impact on people’s experience on Twitter. The challenge for us has been: how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?

A New Approach

Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we’re tackling issues of behaviors that distort and detract from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we’re able to improve the health of the conversation, and everyone’s experience on Twitter, without waiting for people who use Twitter to report potential issues to us.

There are many new signals we’re taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet and mention accounts that don’t follow them, or behavior that might indicate a coordinated attack. We’re also looking at how accounts are connected to those that violate our rules and how they interact with each other.

These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn’t violate our policies, it will remain on Twitter, and will be available if you click on “Show more replies” or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.
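Twitter does not publish its ranking implementation, but the general idea of folding behavioral signals into a visibility score that down-ranks (rather than removes) content can be sketched as follows. Every signal name, weight, and threshold here is a hypothetical illustration, not Twitter’s actual model:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical behavioral signals of the kind the blog post describes."""
    email_confirmed: bool        # has the account confirmed its email address?
    simultaneous_signups: int    # accounts created by the same person at once
    unsolicited_mentions: int    # mentions of accounts that don't follow back
    linked_to_violators: bool    # connected to accounts that violate the rules

def visibility_score(s: AccountSignals) -> float:
    """Combine signals into a score in [0, 1]; illustrative weights only.

    Low-scoring replies stay on the platform but are placed behind
    "Show more replies" rather than being removed.
    """
    score = 1.0
    if not s.email_confirmed:
        score -= 0.2
    score -= min(0.3, 0.1 * s.simultaneous_signups)
    score -= min(0.3, 0.05 * s.unsolicited_mentions)
    if s.linked_to_violators:
        score -= 0.2
    return max(0.0, score)

# Rank replies in a conversation: healthy contributors surface first.
replies = [("healthy", AccountSignals(True, 0, 0, False)),
           ("trollish", AccountSignals(False, 4, 10, True))]
ranked = sorted(replies, key=lambda r: visibility_score(r[1]), reverse=True)
print([name for name, _ in ranked])  # ['healthy', 'trollish']
```

The key design point the post describes is preserved in the sketch: nothing is deleted for a low score; the signals only reorder what is shown by default.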

Results

In our early testing in markets around the world, we’ve already seen this new approach have a positive impact, resulting in a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations. That means fewer people are seeing Tweets that disrupt their experience on Twitter.

Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone’s Twitter experience better. This technology and our team will learn over time and will make mistakes. There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We’ll continue to be open and honest about the mistakes we make and the progress we are making. We’re encouraged by the results we’ve seen so far, but also recognize that this is just one step on a much longer journey to improve the overall health of our service and your experience on it.