Ofcom goes full-on nightmare with age/ID verification for nearly all websites, coupled with a mountain of red tape and expense.

Read more uk_internet_censors.htm at MelonFarmers.co.uk

With a theatrical flourish pandering to the ‘won’t somebody think of the children’ mob, Ofcom has proposed a set of censorship rules that demand strict age/ID verification for practically every single website that allows users to post content. On top of that, they are proposing the most onerous mountain of expensive red tape seen in the western world.

There are a few clever sleights of hand that drag most of the internet into the realm of strict age/ID verification. Ofcom argues that nearly all websites will have child users, because 16 and 17 year old ‘children’ have more or less the same interests as adults, and so there is no content that is not of interest to ‘children’.

And so all websites will have to offer content that is appropriate for children of all ages, or else put in place strict age/ID verification to ensure that content is matched to the user’s age.

And at every stage of deciding website policy, Ofcom is demanding extensive justification of the decisions made and proof of the data used in making them. The quantity of risk assessments, documents, research and evidence required makes the ‘health and safety’ regime look like child’s play.

On occasion in the consultation documents Ofcom acknowledges that this will impose a massive administrative burden, but it swats away criticism by noting that this is the fault of the Online Safety Act itself, and not of Ofcom.

Comment: Online Safety proposals could cause new harms

See article from openrightsgroup.org

Ofcom’s consultation on safeguarding children online exposes significant problems regarding the proposed implementation of age-gating measures. While aimed at protecting children from digital harms, the proposed measures introduce risks to cybersecurity, privacy and freedom of expression.

Ofcom’s proposals outline the implementation of age assurance systems, including photo-ID matching, facial age estimation, and reusable digital identity services, to restrict access to popular platforms like Twitter, Reddit, YouTube, and Google that might contain content deemed harmful to children.

Open Rights Group warns that these measures could inadvertently curtail individuals’ freedom of expression while simultaneously exposing them to heightened cybersecurity risks.

Jim Killock, Executive Director of Open Rights Group, said:

Adults will be faced with a choice: either limit their freedom of expression by not accessing content, or expose themselves to increased security risks that will arise from data breaches and phishing sites.

Some overseas providers may block access to their platforms from the UK rather than comply with these stringent measures.

We are also concerned that educational and help material, especially where it relates to sexuality, gender identity, drugs and other sensitive topics may be denied to young people by moderation systems.

Risks to children will continue with these measures. Regulators need to shift their approach to one that empowers children to understand the risks they may face, especially where young people may look for content, whether it is meant to be available to them or not.

Open Rights Group underscores the necessity for privacy-friendly standards in the development and deployment of age-assurance systems mandated by the Online Safety Act. Killock notes: Current data protection laws lack the framework to pre-emptively address the specific and novel cybersecurity risks posed by these proposals.

Open Rights Group urges the government to prioritize comprehensive solutions that incorporate parental guidance and education rather than relying largely on technical measures.

Read more eu.htm at MelonFarmers.co.uk

A controversial and secretive push by European Union lawmakers to legally require messaging platforms to scan citizens’ private communications for child sexual abuse material (CSAM) could lead to millions of false positives per day, hundreds of security and privacy experts have warned in an open letter. Concern over the EU proposal has been building since the Commission proposed the CSAM-scanning plan two years ago, with independent experts, lawmakers across the European Parliament and even the bloc’s own Data Protection Supervisor among those sounding the alarm.

The EU proposal would not only require messaging platforms that receive a CSAM detection order to scan for known CSAM; they would also have to use unspecified scanning technologies to try to pick up unknown CSAM and to identify grooming activity as it takes place, leading to accusations that lawmakers are indulging in ‘magical thinking’ levels of techno-solutionism.

The open letter has been signed by 309 experts from 35 countries. The letter reads:

Dear Members of the European Parliament, Dear Member States of the Council of the European Union,

Joint statement of scientists and researchers on EU’s new proposal for the Child Sexual Abuse Regulation: 2nd May 2024

We are writing in response to the new proposal for the regulation introduced by the Presidency on 13 March 2024. The two main changes with respect to the previous proposal aim to generate more targeted detection orders, and to protect cybersecurity and encrypted data. We note with disappointment that these changes fail to address the main concerns raised in our open letter from July 2023 regarding the unavoidable flaws of detection techniques and the significant weakening of protection that is inherent in adding detection capabilities to end-to-end encrypted communications. The proposal’s impact on end-to-end encryption is in direct contradiction to the intent of the European Court of Human Rights’ decision in Podchasov v. Russia on 13 February 2024. We elaborate on these aspects below.

Child sexual abuse and exploitation are serious crimes that can cause lifelong harm to survivors; certainly it is essential that governments, service providers, and society at large take major responsibility in tackling these crimes. The fact that the new proposal encourages service providers to employ a swift and robust process for notifying potential victims is a useful step forward.

However, from a technical standpoint, to be effective, this new proposal will also completely undermine communications and systems security. The proposal notably still fails to take into account decades of effort by researchers, industry, and policy makers to protect communications. Instead of starting a dialogue with academic experts and making data available on detection technologies and their alleged effectiveness, the proposal creates unprecedented capabilities for surveillance and control of Internet users. This undermines a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond.

1. The proposed targeted detection measures will not reduce risks of massive surveillance

The problem is that flawed detection technology cannot be relied upon to determine cases of interest. We previously detailed security issues associated with the technologies that can be used to implement detection of known and new CSA material and of grooming, because they are easy to circumvent by those who want to bypass detection, and they are prone to errors in classification. The latter point is highly relevant for the new proposal, which aims to reduce impact by only reporting users of interest defined as those who are flagged repeatedly (as of the last draft: twice for known CSA material and three times for new CSA material and grooming). Yet, this measure is unlikely to address the problems we raised.
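To make the circumvention point concrete: matching of known material is typically done with perceptual hashes (PhotoDNA and PDQ are the widely cited examples; the proposal itself names no technology). The toy average-hash below is only an illustrative stand-in, but it exhibits the same structural trade-off the letter describes: small edits move an image’s hash, so evasion is trivial, while loosening the match threshold to compensate inflates false positives.

```python
# Toy perceptual hash (aHash) -- an illustrative stand-in, NOT the actual
# detection technology, which the proposal leaves unspecified.
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """64-bit hash: bit i is set when downscaled pixel i is brighter than the mean."""
    pixels = list(img.convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def matches_blocklist(img: Image.Image, blocklist: set[int], threshold: int = 10) -> bool:
    # The threshold is the whole game:
    #  - set it low, and trivial edits (crop, recompress, recolour) push the
    #    distance past it, so determined abusers evade detection;
    #  - set it high, and unrelated images start to match, producing the
    #    classification errors (false positives) the letter warns about.
    h = average_hash(img)
    return any(hamming(h, known) <= threshold for known in blocklist)
```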

First, there is the poor performance of automated detection technologies for new CSA material and for the detection of grooming. The number of false positives due to detection errors is highly unlikely to be significantly reduced unless the number of repetitions is so large that the detection stops being effective. Given the large number of messages sent on these platforms (in the order of billions), one can expect a very large number of false alarms (in the order of millions).
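The arithmetic behind those orders of magnitude is worth spelling out. The rates below are illustrative assumptions rather than figures from the proposal, but they show how even an optimistically small error rate collides with the scale of modern messaging platforms:

```python
# Back-of-envelope: how billions of messages turn a tiny error rate into
# millions of false alarms. Both figures are illustrative assumptions.
messages_per_day = 10_000_000_000  # order of billions across large platforms
false_positive_rate = 0.001        # 0.1%, optimistic for novel-CSAM/grooming classifiers

false_alarms_per_day = messages_per_day * false_positive_rate
print(f"{false_alarms_per_day:,.0f} false alarms per day")  # 10,000,000
```

Every one of those alarms is a private message or photo from an innocent user surfaced for review.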

Second, the belief that the number of false positives will be reduced significantly by requiring a small number of repetitions relies on the fallacy that for innocent users two positive detection events are independent and that the corresponding error probabilities can be multiplied. In practice, communications exist in a specific context (e.g., photos to doctors, legitimate sharing across family and friends). In such cases, it is likely that parents will send more than one photo to doctors, and families will share more than one photo of their vacations at the beach or pool, thus increasing the number of false positives for this person. It is therefore unclear that this measure makes any effective difference with respect to the previous proposal.
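The fallacy is easy to quantify. Under the (incorrect) independence assumption, requiring two hits squares the per-item error rate; for correlated traffic, such as a parent sending several near-identical photos to a doctor, the second flag is almost certain once the first has occurred. The numbers here are illustrative assumptions:

```python
p = 0.001  # illustrative per-photo false-positive rate

# Fallacy: treat two flags on the same user as independent events.
p_two_flags_independent = p * p  # 1e-06: looks reassuringly tiny

# Correlated reality: a classifier that misfires on one holiday photo will
# very likely misfire on the near-identical photos sent alongside it.
p_second_flag_given_first = 0.9  # illustrative, close to 1 for similar images
p_two_flags_correlated = p * p_second_flag_given_first  # ~1e-03

print(p_two_flags_correlated / p_two_flags_independent)  # ~900x the predicted rate
```

In other words, the repetition threshold barely reduces false positives for exactly the innocent users whose photos look alike.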

Furthermore, to realize this new measure, on-device detection with so-called client-side scanning will be needed. As we previously wrote, once such a capability is in place, there is little possibility of controlling what is being detected and which threshold is used on the device for such detections to be considered of interest. We elaborate below.

High-risk applications may still indiscriminately affect a massive number of people. A second change in the proposal is to only require detection on (parts of) services that are deemed to be high-risk in terms of carrying CSA material.

This change is unlikely to have a useful impact. As the exchange of CSA material or grooming only requires standard features that are widely supported by many service providers (such as exchanging chat messages and images), this will undoubtedly impact many services. Moreover, an increasing number of services deploy end-to-end encryption, greatly enhancing user privacy and security, which will increase the likelihood that these services will be categorised as high risk. This number may further increase with the interoperability requirements introduced by the Digital Markets Act that will result in messages flowing between low-risk and high-risk services. As a result, almost all services could be classified as high risk. This change is also unlikely to impact abusers. As soon as abusers become aware that a service provider has activated client side scanning, they will switch to another provider that will in turn become high risk; very quickly all services will be high risk, which defeats the purpose of identifying high risk services in the first place. And because open-source chat systems are currently easy to deploy, groups of offenders can easily set up their own service without any CSAM detection capabilities.

We note that decreasing the number of services is not even the crucial issue, as this change would not necessarily reduce the number of (innocent) users that would be subject to detection capabilities. This is because many of the main applications targeted by this regulation, such as email, messaging, and file sharing are used by hundreds of millions of users (or even billions in the case of WhatsApp).

Once a detection capability is deployed by the service, it is not technologically possible to limit its application to a subset of the users. Either it exists in all the deployed copies of the application, or it does not. Otherwise, potential abusers could easily find out if they have a version different from the majority population and therefore if they have been targeted. Therefore, upon implementation, the envisioned limitations associated with risk categorization do not necessarily result in better user discrimination or targeting, but in essence have the same effect for users as a blanket detection regulation.

2. Detection in end-to-end encrypted services by definition undermines encryption protection

The new proposal has as one of its goals to protect cyber security and encrypted data, while keeping services using end-to-end encryption within the scope of detection orders. As we have explained before, this is an oxymoron.

The protection given by end-to-end encryption implies that no one other than the intended recipient of a communication should be able to learn any information about the content of such communication. Enabling detection capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of confidentiality provided by end-to-end encryption. Moreover, the proposal also states that This Regulation shall not create any obligation that would require [a service provider] to decrypt or create access to end-to-end-encrypted data, or that would prevent the provision of end-to-end encrypted services. This can be misleading, as whether the obligation to decrypt exists or not, the proposal undermines the protection provided by end-to-end encryption.
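For readers who want that confidentiality property in concrete form, here is a minimal sketch using PyNaCl (an arbitrary library choice; the argument applies to any end-to-end scheme). The comment marks where a client-side scanning mandate would have to operate: on the plaintext, before encryption, which is precisely the access that the end-to-end guarantee is defined to exclude.

```python
# Minimal end-to-end encryption sketch with PyNaCl (pip install pynacl).
# The library choice is illustrative; the point holds for any E2EE scheme.
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()
bob = PrivateKey.generate()

def send_to_bob(plaintext: bytes) -> bytes:
    # A client-side scanning obligation would insert its classifier HERE,
    # inspecting the plaintext before it is sealed -- so a third party
    # learns about message content, violating the confidentiality property
    # described above.
    return Box(alice, bob.public_key).encrypt(plaintext)

ciphertext = send_to_bob(b"a private message")
# Only Bob (holding his private key) can open it:
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"a private message"
```

Any mandated detection capability has to break exactly that guarantee, whether or not the ciphertext itself is ever decrypted.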

This has catastrophic consequences. It sets a precedent for filtering the Internet, and prevents people from using some of the few tools available to protect their right to a private life in the digital space; it will have a chilling effect, in particular to teenagers who heavily rely on online services for their interactions. It will change how digital services are used around the world and is likely to negatively affect democracies across the globe. These consequences come from the very existence of detection capabilities, and thus cannot be addressed by either reducing the scope of detection in terms of applications or target users: once they exist, all users are in danger. Hence, the requirement of Art. 10 (aa) that a detection order should not introduce cybersecurity risks for which it is not possible to take any effective measures to mitigate such risk is not realistic, as the risk introduced by client side scanning cannot be mitigated effectively.

3. Introducing more immature technologies may increase the risk

The proposal states that age verification and age assessment measures will be taken, creating a need to prove age in services that did not previously require it. It then bases some of the arguments related to the protection of children on the assumption that such measures will be effective. We would like to point out that at this time there is no established, well-proven technological solution that can reliably perform these assessments. The proposal also states that such verification and assessment should preserve privacy. We note that this is a very hard problem. While there is research towards technologies that could assist in implementing privacy-preserving age verification, none of them are currently on the market. Integrating them into systems in a secure way is far from trivial. Any solutions to this problem need to be very carefully scrutinized to ensure that the new assessments do not result in privacy harms or discrimination causing more harm than the harm they were meant to prevent.

4. Lack of transparency

It is regrettable that the proposers failed to reach out to security and privacy experts to understand what is feasible before putting forth a new proposal that cannot work technologically. The proposal pays insufficient attention to the technical risks and imposes, while claiming to be technologically neutral, requirements that cannot be met by any state-of-the-art system (e.g., a low false-positive rate, secrecy of the parameters and algorithms when deployed on a large number of devices, and the existence of representative simulated CSA material).

We strongly recommend that not only should this proposal not move forward, but that before such a proposal is presented in future, the proposers engage in serious conversations about what can and cannot be done within the context of guaranteeing secure communications for society.

5. Secure paths forward for child protection

Protecting children from online abuse while preserving their right to secure communications is critical. It is important to remember that CSAM content is the output of child sexual abuse. Eradicating CSAM relies on eradicating abuse, not only abuse material. Proven approaches recommended by organisations such as the UN for eradicating abuse include education on consent, on norms and values, on digital literacy and online safety, and comprehensive sex education; trauma-sensitive reporting hotlines; and keyword-search based interventions. Educational efforts can take place in partnership with platforms, which can prioritise high quality educational results in search or collaborate with their content creators to develop engaging resources.

We recommend substantial increases in investment and effort to support existing proven approaches to eradicate abuse, and with it, abusive material. Such approaches stand in contrast to the current techno-solutionist proposal, which is focused on vacuuming up abusive material from the internet at the cost of communication security, with little potential for impact on abuse perpetrated against children.

UK signatories

Dr. Ruba Abu-Salma, King’s College London
Prof. Martin Albrecht, King’s College London
Dr. Andrea Basso, University of Bristol
Prof. Ioana Boureanu, University of Surrey
Prof. Lorenzo Cavallaro, University College London
Dr. Giovanni Cherubin, Microsoft
Dr. Benjamin Dowling, University of Sheffield
Dr. Francois Dupressoir, University of Bristol
Dr. Jide Edu, University of Strathclyde
Dr. Arthur Gervais, University College London
Prof. Hamed Haddadi, Imperial College London
Prof. Alice Hutchings, University of Cambridge
Dr. Dennis Jackson, Mozilla
Dr. Rikke Bjerg Jensen, Royal Holloway University of London
Prof. Keith Martin, Royal Holloway University of London
Dr. Maryam Mehrnezhad, Royal Holloway University of London
Prof. Sarah Meiklejohn, University College London
Dr. Ngoc Khanh Nguyen, King’s College London
Prof. Elisabeth Oswald, University of Birmingham
Dr. Daniel Page, University of Bristol
Dr. Eamonn Postlethwaite, King’s College London
Dr. Kopo Marvin Ramokapane, University of Bristol
Prof. Awais Rashid, University of Bristol
Dr. Daniel R. Thomas, University of Strathclyde
Dr. Yiannis Tselekounis, Royal Holloway University of London
Dr. Michael Veale, University College London
Prof. Dr. Luca Vigano, King’s College London
Dr. Petros Wallden, University of Edinburgh
Dr. Christian Weinert, Royal Holloway University of London

Read more eu.htm at MelonFarmers.co.uk

European police chiefs have called for Europeans to be deprived of basic internet security used to protect against Russian & Chinese spies, scammers, thieves and blackmailers. The police chiefs write:

Joint Declaration of the European Police Chiefs

We, the European Police Chiefs, recognise that law enforcement and the technology industry have a shared duty to keep the public safe, especially children. We have a proud partnership of complementary actions towards that end. That partnership is at risk.

Two key capabilities are crucial to supporting online safety.

First, the ability of technology companies to reactively provide to law enforcement investigations, on the basis of a lawful authority with strong safeguards and oversight, the data of suspected criminals on their service. This is known as lawful access.

Second, the ability of technology companies proactively to identify illegal and harmful activity on their platforms. This is especially true in regard to detecting users who have a sexual interest in children, exchange images of abuse and seek to commit contact sexual offences. The companies currently have the ability to alert the proper authorities, with the result that many thousands of children have been safeguarded, and perpetrators arrested and brought to justice.

These are quite different capabilities, but together they help us save many lives and protect the vulnerable in all our countries on a daily basis from the most heinous of crimes, including but not limited to terrorism, child sexual abuse, human trafficking, drugs smuggling, murder and economic crime. They also provide the evidence that leads to prosecutions and justice for victims of crime.

We are, therefore, deeply concerned that end to end encryption is being rolled out in a way that will undermine both of these capabilities. Companies will not be able to respond effectively to a lawful authority. Nor will they be able to identify or report illegal activity on their platforms. As a result, we will simply not be able to keep the public safe.

Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish. They should not now. We cannot let ourselves be blinded to crime. We know from the protections afforded by the darkweb how rapidly and extensively criminals exploit such anonymity.

We are committed to supporting the development of critical innovations, such as encryption, as a means of strengthening the cyber security and privacy of citizens. However, we do not accept that there need be a binary choice between cyber security or privacy on the one hand and public safety on the other. Absolutism on either side is not helpful. Our view is that technical solutions do exist; they simply require flexibility from industry as well as from governments. We recognise that the solutions will be different for each capability, and also differ between platforms.

We therefore call on the technology industry to build in security by design, to ensure they maintain the ability to both identify and report harmful and illegal activities, such as child sexual exploitation, and to lawfully and exceptionally act on a lawful authority.

We call on our democratic governments to put in place frameworks that give us the information we need to keep our publics safe.

Trends in crime are deeply concerning and show how offenders increasingly use technology to find and exploit victims and to communicate with each other within and across international boundaries. It must be our shared objective to ensure that those who seek to abuse these platforms are identified and caught, and that the platforms become more safe not less.

See article from reclaimthenet.org

Here we have Europol and the UK’s National Crime Agency (NCA) teaming up to attack Meta for the one thing the company is apparently trying to do right: implementing end-to-end encryption (E2EE) in its products, the necessary, irreplaceable software backbone of a safe and secure internet for everybody. Yet that is what many governments, here the EU via Europol and the UK, keep attempting to damage.

But mass surveillance is a hard sell, so the established pitch is to tie this global, internet-wide issue to the safety of children online, and justify it that way.

The Europol executive director, Catherine De Bolle, compared E2EE to sending your child into a room full of strangers and locking the door. And yet the technological reality is that undermining E2EE is akin to giving a copy of your front door key, and with it access to everybody inside, children included, to somebody you are supposed to trust (say, governments and organizations who would like you to take their trustworthiness for granted).

But once a copy of that key is out, it can be obtained and used by anybody to get into your house at any time, for any reason. That includes governments and organizations you don’t trust or like, straight-up criminals, and anything active on the web in between.

Read more bw.htm at MelonFarmers.co.uk

The BBFC has commented on its recent decision to uprate Mary Poppins from U to PG.

Historically rated U, the film was recently submitted for classification ahead of a 60th anniversary cinema re-release. When distributors choose to resubmit older films, the BBFC considers them under its current Classification Guidelines so that the modern rating is in line with public expectations. This means that older titles may sometimes require a higher or lower age rating than in previous years.

Viewed under the current guidelines, MARY POPPINS required a PG rating for two uses of the discriminatory term hottentot. In the film, Admiral Boom (who believes he is a naval commander in charge of a ship travelling the world) asks Michael, a child, if he is going on an adventure to defeat hottentots. Later, the character shouts We’re being attacked by hottentots! after seeing a gang of chimney sweeps (who have faces blackened with soot) dancing on a rooftop. He then proceeds to launch fireworks in their direction.

At U, the guidelines state that Discriminatory language or behaviour is unlikely to be acceptable unless clearly disapproved of. There is no disapproval of the language used in MARY POPPINS, and so a PG rating was required.

The decision received considerable attention from the press, both in the UK and around the world, with factual reporting as well as opinion pieces both supporting and criticising the decision. David Austin was interviewed about the reclassification live on the BBC.

The Classifiers suggested that it might be wise in future to publish a statement on our website about any reclassification decisions that generate similar levels of media interest, to ensure that accurate information is readily available for anyone who needs it.

Read more inee.htm at MelonFarmers.co.uk

Authorities in Russia’s republic of Chechnya have imposed limits on music tempos to abide by strict cultural norms in the Muslim-majority region. From now on, all musical, vocal and choreographic works should correspond to a tempo of 80 to 116 beats per minute, Chechnya’s Culture Ministry said in a statement earlier this week.

The new tempo standard, which is relatively slow in the context of popular music, was announced following Chechen Culture Minister Musa Dadayev’s meeting with local state and municipal artists.

Chechen leader Ramzan Kadyrov had instructed Dadayev to make Chechen music conform to the Chechen mentality, according to the statement.

Local artists were ordered to rewrite their music by June 1 to accommodate the changes; otherwise, the works would not be allowed for public performance, the Culture Ministry wrote on the messaging app Telegram.

Read more megames.htm at MelonFarmers.co.uk

The US games censor, the Entertainment Software Rating Board (ESRB), has been working on a facial recognition tool to verify gamers’ ages, and this method was submitted to the Federal Trade Commission (FTC) for approval.

In a blog post, the FTC has just announced that it is denying the application for the technology. The FTC stated that it denied the application in a vote of 4-0, noting that it had received over 350 comments on the issue before the vote. As the FTC notes, those who opposed the application cited privacy protections, accuracy, and deepfakes as concerns.

Had the application been approved, the FTC would have added the facial age detection tech to the list of acceptable methods of obtaining parental consent for collecting information from minor-aged users under the Children’s Online Privacy Protection Act (COPPA). This Act requires parental consent for the collection or use of personal data of users under the age of 13.

Last year, the ESRB partnered with the digital identity firm Yoti and with SuperAwesome to create this technology to verify users’ ages. The ESRB claimed it was not meant to identify individuals outright but rather to estimate the user’s age, and stated it would not store the data after the analysis concluded. However, companies offering facial age estimation typically also offer facial recognition, so users would have to somehow trust big tech companies (or national authorities) not to identify them. And let’s face it, such institutions haven’t proved themselves to be very trustworthy in the past.
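Mechanically, facial age estimation is a gate rather than an identification step, roughly along the lines of the sketch below, in which the model, threshold and error margin are hypothetical placeholders rather than details from the ESRB/Yoti filing. The trust problem raised above is visible in the code: nothing in the flow technically prevents the same face image from being routed to a recognition model or retained; the ‘we don’t store it’ promise is policy, not something a user can verify.

```python
# Hypothetical sketch of facial-age-estimation gating. Model, threshold and
# error margin are invented placeholders, not details of the ESRB/Yoti system.
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    age: float           # point estimate from some vision model
    error_margin: float  # estimation is approximate, so a buffer is needed

def estimate_age(face_image: bytes) -> AgeEstimate:
    # Stand-in for a vendor model: returns an age estimate, not an identity.
    return AgeEstimate(age=31.0, error_margin=4.0)

def parent_passes_gate(face_image: bytes, required_age: int = 25) -> bool:
    # A buffer above the legal threshold makes borderline estimates fail
    # closed, falling back to another consent method.
    est = estimate_age(face_image)
    passed = est.age - est.error_margin >= required_age
    # Per the stated design, face_image would be discarded here -- but the
    # caller has no technical means of verifying that it was.
    return passed

print(parent_passes_gate(b"<jpeg bytes>"))  # True for this placeholder estimate
```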

While the FTC rejected the proposal, it said that the ESRB could re-file the application in the future, presumably after improvements to the system.

Read more gcnews.htm at MelonFarmers.co.uk

Scotland’s disgraceful new hate crime law has come into force. It will undoubtedly restrict free speech and give power to those with scores to settle, regardless of the merits of their claims.

The Hate Crime and Public Order (Scotland) Act 2021 creates a new crime of stirring up hatred relating to age, disability, religion, sexual orientation, transgender identity or being intersex. The maximum penalty is a prison sentence of seven years. A person commits an offence if they communicate material, or behave in a manner, that a reasonable person would consider to be threatening or abusive, with the intention of stirring up hatred based on the protected characteristics.

The bar for the existing offence of stirring up racial hatred is lower than for the other protected characteristics, as it also covers insulting behaviour, and the prosecution need only prove that stirring up hatred was likely rather than intended.

As well as the offence of stirring up hatred, the Hate Crime Act also consolidates the existing law on crimes which are aggravated by prejudice. These are where an offender demonstrates malice or ill-will towards their victim based on a protected characteristic, which can be taken into account by a sheriff or judge through a longer sentence or a higher fine than would otherwise have been the case. This is the first time that age has been included in the list of protected characteristics for aggravated offences, a move welcomed by some campaign groups.

Adam Tomkins, professor of public law at Glasgow University and a former Conservative MSP, voted against the bill because it could see someone convicted of stirring up hatred for a comment made in private in their own home, not just in public: I just don’t think that’s where the criminal law belongs.

Susan Smith of For Women Scotland fears those who are investigated under the new law will have their lives upended. She told BBC News:

The tests are quite woolly and we don’t know how people are going to interpret this. We do anticipate that there will be a lot of malicious complaints, a lot of rather trivial complaints and potentially people who are investigated will see their lives upended. I imagine there will be many complaints, for example, made against JK Rowling.

Ch Supt Rob Hay of the Association of Scottish Police Superintendents (ASPS), which represents senior officers, said there was the potential for a huge uplift in complaints about social media posts. And, as is so often the case, the police have sided with the complainers: Police Scotland has pledged to investigate every hate crime complaint it receives.

BBC News understands that these will be assessed by a dedicated team within Police Scotland, including a number of hate crime advisers to assist officers in determining what, if any, action to take.

Categorised as a mountain of suffocating censorial red tape…

Read more uk_internet_censors.htm at MelonFarmers.co.uk

Ofcom writes:

Ofcom is seeking evidence to inform our codes of practice and guidance on the additional duties that will apply to some of the most widely used online sites and apps, designated as categorised services, under the Online Safety Act.

Under the new laws, all in-scope tech firms must put in place appropriate safety measures to protect users from online harms. In addition, some online services will have to comply with extra requirements if they fall into one of three categories, known as Category 1, 2A or 2B.

These extra duties include giving users more tools to control what content they see, ensuring protections for news publisher and journalistic content, preventing fraudulent advertising and producing transparency reports. Different duties apply, depending on which category a service falls into.

The Act requires us to produce codes of practice and guidance outlining the steps that companies can take to comply with these additional duties. We are inviting evidence from industry, expert groups and other organisations to help inform and shape our approach. A formal consultation on the draft codes and guidance will follow in 2025, taking account of responses to today’s call for evidence.

Advice to Government on categorisation thresholds

Alongside this, we have also today published our advice to Government on the thresholds which would determine whether or not a service falls into Category 1, 2A or 2B. We advise that:

Category 1 (most onerous): should apply to services which meet either of the following conditions:

  • Condition 1 – uses a content recommender system; and has more than 34 million UK users on the user-to-user part of its service, representing around 50% of the UK population;

  • Condition 2 – allows users to forward or reshare user-generated content; and uses a content recommender system; and has more than 7 million UK users on the user-to-user part of its service, representing circa 10% of the UK population.

Category 2A: should apply to services which meet both of the following criteria:

  • is a search service, but not a vertical search service

  • has more than 7 million UK users on the search engine part of its service, representing circa 10% of the UK population.

Category 2B: should apply to services which meet both of the following criteria:

  • allows users to send direct messages;

  • and has more than 3 million UK users on the user-to-user part of the service, representing circa 5% of the UK population.

Taking our advice into consideration, the Secretary of State must set the threshold conditions in secondary legislation. Once passed, we will then gather information, as needed, from regulated services and produce a published register of categorised services.
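Expressed as code, the advised thresholds reduce to a handful of boolean tests. Here is a minimal sketch of the decision logic as described above; the field names are invented for illustration, and the binding definitions will be whatever the Secretary of State sets in secondary legislation:

```python
# Minimal sketch of Ofcom's advised categorisation thresholds.
# Field names are invented; the legal definitions will be set in
# secondary legislation.
from dataclasses import dataclass

MILLION = 1_000_000

@dataclass
class Service:
    u2u_uk_users: int             # UK users on the user-to-user part
    search_uk_users: int          # UK users on the search engine part
    has_recommender: bool         # uses a content recommender system
    allows_resharing: bool        # users can forward/reshare user content
    allows_direct_messages: bool
    is_search: bool
    is_vertical_search: bool      # vertical search is excluded from 2A

def categories(s: Service) -> set[str]:
    cats = set()
    # Category 1: Condition 1 OR Condition 2
    condition1 = s.has_recommender and s.u2u_uk_users > 34 * MILLION
    condition2 = (s.allows_resharing and s.has_recommender
                  and s.u2u_uk_users > 7 * MILLION)
    if condition1 or condition2:
        cats.add("1")
    # Category 2A: search service (not vertical search) above 7m UK users
    if s.is_search and not s.is_vertical_search and s.search_uk_users > 7 * MILLION:
        cats.add("2A")
    # Category 2B: direct messaging above 3m UK users
    if s.allows_direct_messages and s.u2u_uk_users > 3 * MILLION:
        cats.add("2B")
    return cats

# e.g. a reshare-and-recommend platform with 10m UK users and DMs:
print(categories(Service(10 * MILLION, 0, True, True, True, False, False)))  # {'1', '2B'}
```

Whether a service can sit in more than one category at once is a matter for the final secondary legislation; the sketch simply reports every threshold a service crosses.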

Read more latest.htm at MelonFarmers.co.uk

The Demoniacs is a 1973 French/Belgian horror by Jean Rollin.
With Joëlle Coeur, Willy Braque and Paul Bisciglia.

Cut by the BBFC for 18-rated UK DVD releases. Later, the Extended Version was passed 18 uncut for Blu-ray. There are two cut variants in the US, one of which claims to be Director Approved.

Summary Notes

A gang of pirates rape the two sole survivors of a shipwreck. The violated girls are rescued by the strange inhabitants of a supposedly haunted island, where they are granted supernatural powers to exact revenge on the pirates.

UK: Uncut and BBFC 18 rated for sexual violence, strong sex, nudity:

  • 2024 Powerhouse Films Limited Edition (Theatrical + Extended Version + Curse of the Living Dead) R0 4K Blu-ray at UK Amazon #ad released on 22nd April 2024
  • 2024 Powerhouse Films Limited Edition (Theatrical + Extended Version + Curse of the Living Dead) R0 Blu-ray at UK Amazon #ad released on 22nd April 2024

Promotional Material

“The Demoniacs (Les Démoniaques) sees director Jean Rollin eschew his regular subject of vampirism for a brutal nautical tale of ritual horror and supernatural vengeance.

A gang of ruthless pirates, known as ‘the wreckers’, rape two survivors of a shipwreck. The women, now mute, are guided by a mysterious clown to a ruined castle, where they receive magical powers with which to exact their revenge on the gang.

Starring Joëlle Coeur (Schoolgirl Hitchhikers), John Rico (Blood Orgy of the She-Devils), Willy Braque (Lips of Blood), and Mireille Dargent (The Iron Rose), The Demoniacs is one of its director’s most shocking and violent tales of the fantastique.”

Extras “INDICATOR LIMITED EDITION 4K UHD SPECIAL FEATURES

  • Brand-new 4K HDR restoration from the original negative by Powerhouse Films
  • 4K (2160p) UHD presentation in Dolby Vision (HDR10 compatible)
  • Three presentations of the film: the original theatrical version; the longer, explicit export version; and Curse of the Living Dead, the alternative English-language cut
  • Original mono audio
  • Audio commentary with film expert Tim Lucas (2024)
  • Selected scenes commentary with Jean Rollin (2005)
  • Jean Rollin Introduces ‘The Demoniacs’ (1998)
  • One of the Demoniacs (2024): new presentation of an interview with regular Rollin collaborator Jean-Pierre Bouyxou
  • Newly edited archival interview with actor Willy Braque (2024)
  • Critical appreciation by author and musician Stephen Thrower (2024)
  • Outtake footage
  • Original theatrical trailer
  • Image gallery: promotional and publicity material, and behind the scenes
  • New and improved English translation subtitles
  • New and improved English subtitles for the deaf and hard of hearing for Curse of the Living Dead
  • Limited edition exclusive 80-page book with a new essay by Alexandra Heller-Nicholas, an archival piece on the making of the film by Jean Rollin, an archival interview with Rollin, an archival interview with actor Monica Swinn, and full film credits.
  • World premiere on 4K UHD

Limited edition of 10,000 individually numbered units (6,000 4K UHDs and 4,000 Blu-rays) for the UK and US

All extras subject to change”

Read more bw.htm at MelonFarmers.co.uk

The BBFC press release reads:

BBFC launches new Classification Guidelines reflecting shifts in public opinion towards violence, drug use, sex and use of language

Today, the BBFC publishes its new Classification Guidelines, informed by its largest-ever public consultation. Throughout 2023, the BBFC spoke to 12,000 people across the UK to explore what matters most to audiences when it comes to classification.

The research found people are now more concerned about depictions of violence on screen in content across all age ratings. The findings demonstrated broad support for how the BBFC currently classifies violence, but audiences expressed concerns about how distressing or disturbing some forms of violence can be. Going forward, a higher rating may be required for violence across all age-rating categories, especially when particularly intense or impactful scenes occur.

The last time the BBFC conducted this research was in 2019. Then, as now, sexual violence was the biggest area of concern for UK audiences. Since 2019, however, suicide and self-harm has risen to the second biggest area of concern – ahead of sex, violence and drugs. People expressed a clear desire to be warned about this type of content, and the BBFC will continue to highlight suicide and self-harm in its content advice.

Although people are largely in agreement with the BBFC’s current approach towards classifying drug content across all age categories, audiences have become more relaxed about cannabis use and solvent misuse than they were in 2019.

The research revealed that people are now slightly more accepting of cannabis misuse at 12A/12, so long as it is not detailed, glamorised or frequent. The BBFC will therefore take a less restrictive approach to such material, but maintain its current standards on other drugs. People also felt the BBFC’s current policy towards classifying solvent misuse was overly cautious, and this finding has been supported by expert advice. Such content will now be treated less restrictively.

People are calling for a more cautious approach to classifying sex scenes at the border of 12A/12 and 15. Participants were concerned by the level of sexual detail, nudity and the duration of the sex scenes rated 12A/12 under the 2019 guidelines. Similar content is now more likely to be rated 15. However, the research also indicates that audiences are happy for classification to be more lenient towards some sex references at the 15/18 borderline, especially in comic contexts.

The new research also offers valuable insights into the evolving perspectives and expectations of parents and caregivers, grandparents, educators, and young people.

Parents are concerned about the normalisation of bad language, especially terms with sexual or misogynistic connotations, for example son of a bitch, bitch and dick. The worry is that young viewers may hear and repeat such language. Language such as this may now require a higher age rating.

Audiences are comfortable with the relaxation of standards across most areas when it comes to trailers, especially in the junior categories (U, PG and 12A/12). For example, in some cases, this may include isolated use of strong language in trailers at 12A/12.

More than eight in ten (81%) people want a consistent age rating system across cinema, physical media and Video on Demand (VoD)/streaming services. Netflix, which carries BBFC age ratings across its entire UK platform, is now the third most popular source of recognition for BBFC age rating symbols (behind cinema and physical media) and the second most recognised source for content advice (behind cinema).

The findings show that the BBFC and its iconic age rating symbols are highly trusted, well recognised and valued by audiences across the UK.

  • 99% of respondents recognise at least one BBFC age rating symbol, with 78% recognising them all
  • 97% of people see a benefit to age ratings for some or all audiences
  • 90% of parents/caregivers (a 7% increase in the last five years), 91% of teachers, and 88% of teens aged 16-19 trust BBFC age ratings all or most of the time
  • 87% of parents/caregivers agree with BBFC age ratings all or most of the time
  • 85% of respondents trust BBFC age ratings to accurately reflect the nature of the content they consume

To help parents facilitate important conversations and help increase awareness, the BBFC has created a collection of BBFC Guides that focus on various classification areas such as discrimination, drugs, and sex. Each bite-sized guide provides an overview of how the BBFC classifies different issues, what they mean at each age rating, and a collection of film case studies.

President, Natasha Kaplinsky OBE said:

At the BBFC, we’re dedicated to ensuring what we do is responsive to the ever-evolving world around us. Since we last asked people across the country what they thought about our standards, society has changed, and opinions have followed; it’s fascinating how this vast body of new research reflects this.

This is the first Classification Guidelines update I have overseen as President. Not only am I proud and thrilled to launch these findings, but as someone who has always looked to the BBFC for guidance for myself and my family, seeing first-hand the level of dedication and insight that went into this process has been eye-opening and inspiring. Without a doubt, we are truly shaped by you.

Chief Executive, David Austin OBE said:

The effectiveness of what we do relies entirely on trust. To ensure we have that trust, and to get to the heart of what audiences think and feel, we go directly to them. Only by doing this can we ensure we are classifying content in line with the expectations of families across the UK. This is essential to maintaining the extraordinarily high levels of public trust that the BBFC is privileged to enjoy.