EU: Putting Fundamental Rights at the Forefront of Digital Regulation


(Brussels) – The European Union’s draft rules to regulate internet platforms have the potential to better protect human rights online, Human Rights Watch said today. But the European Parliament should be more ambitious in holding tech companies accountable for human rights abuses resulting from their practices and introduce stronger safeguards against government abuse.

“The draft regulation includes key steps to establish transparency standards and provide remedies for users, which the European Parliament should support,” said Deborah Brown, senior digital rights researcher and advocate at Human Rights Watch. “But the proposal should go further to end abusive surveillance and profiling of people by tech companies, and to fix vague provisions that invite government overreach.”

The European Parliament is expected to vote on the Digital Services Act (DSA) during the week of January 17, 2022.

The draft proposal preserves conditional liability for online platforms and the ban on general monitoring of users, whether imposed by law or otherwise, which are cornerstones of protecting freedom of expression online. Conditional liability removes incentives for platforms to excessively suppress legitimate online speech to avoid the risk of liability.

It also introduces important measures to increase platform transparency, requiring companies to explain to users how they moderate content, disclose whether and how automated tools are used and the number of content moderators for each official EU language, and provide researchers, including nongovernmental organizations, with access to data.

But the proposal falls short in several key respects and needs to be strengthened.

Potential for expanding government censorship online: The regulation is based on the principle that “what is illegal offline is illegal online.” It relies on existing European and national laws defining illegal content and essentially transposes those standards to online speech. However, some EU member states have laws that restrict expression protected under international human rights law. The regulation would effectively put the EU’s seal of approval on applying these abusive standards online.

It also provides for content to be removed by order not only of judicial authorities but also of “administrative authorities,” which are not bound by any requirement of independence, unlike under existing EU law. The compromise text includes references to EU human rights instruments, but it still risks creating new powers that facilitate the removal of internationally protected expression from the internet by requiring platforms to take down content for alleged violations of abusive national laws without judicial authorization.

Failure to ban surveillance-based targeted advertising: Although the draft regulation includes measures to increase the transparency of online advertising and would allow people to opt out of having content recommended to them based on profiling of their online behavior, it does not address the surveillance-based advertising business model that dominates today’s digital environment.

The premise of the online advertising ecosystem is that everything people say or do online can be captured and turned into data, which is profiled and leveraged to maximize attention and engagement on platforms while selling targeted advertisements. This business model relies on pervasive tracking and profiling of users that invades their privacy and fuels algorithms that promote and amplify divisive and sensationalist content. Studies show that such content generates more engagement and, in turn, more profit for companies. The pervasive surveillance on which this model is based is fundamentally incompatible with human rights.

Restricted mandate to assess rights risks: The draft regulation would oblige very large online platforms to carry out systemic risk assessments covering the dissemination of illegal content and actual or foreseeable risks of certain human rights violations stemming from the design, algorithms, intrinsic features, functioning, and use of their services in the EU, which they would then be required to mitigate.

Under the proposal, very large online platforms would be subject to third-party audits by “organizations which have been recognized and monitored by the Commission” to assess their compliance with the regulation. More independent oversight of companies is useful, but the proposal falls short of the kind of human rights due diligence that international standards require.

Under the United Nations Guiding Principles on Business and Human Rights, companies must exercise human rights due diligence, which includes identifying, preventing, ceasing, mitigating, remediating, and accounting for potential and actual adverse human rights impacts. But the proposed regulation risks taking a more limited approach by assuming that all risks can be mitigated and by not explicitly requiring companies to use the full range of remedial tools.

The narrowly predefined risks in the current proposal also exclude the broader range of economic and social rights that platforms’ practices affect – for example, the right of content moderators, whose work is necessary for companies to comply with the obligations set out in the regulation, to a healthy and safe workplace. Content moderators are typically low-paid workers who experience work-related psychological trauma and emotional harm from reviewing disturbing content, as well as poor working conditions and a lack of support and training.

These shortcomings make it all the more urgent for the European Union to develop strong, generally applicable rules on human rights and environmental due diligence obligations for companies, in line with what the European Parliament envisioned in 2021.

Potential for setting bad global precedents that are ripe for abuse: As the Digital Services Act Human Rights Alliance has highlighted, this regulation will have far-reaching consequences beyond the EU, both because of its potential to inspire legislation in other regions and because it can set standards that companies may apply around the world. The draft includes problematic provisions that are ripe for abuse.

For example, it obliges service providers established outside the EU to appoint a legal or natural person as their legal representative in one of the member states where they offer their services. This legal representative can be held liable for noncompliance. This is akin to the “hostage” provisions that a number of governments, including Turkey, Indonesia, Russia, and India, have put in place, which pressure companies into complying with overly broad orders and subject companies and their staff to legal threats and intimidation, making it harder for companies to resist abusive or inappropriate government demands. Any requirement for platforms to establish contact points or legal representatives in the EU should avoid creating a risk of personal liability for the representative.

The regulation would also require the European Commission to publish the names, addresses, and email addresses of all “trusted flaggers.” Transparency in this area is positive, but disclosing this information could endanger civil society groups that act as trusted flaggers, especially those that flag content posted by government actors. It could also set a dangerous precedent for forcing companies to disclose the identities of trusted flaggers in law enforcement contexts. Adding an exception for information that could endanger trusted flaggers would resolve this issue.

Proposals that would impose strict, short deadlines for removing content have so far been rejected. Faced with short review windows and the risk of steep fines, companies have little incentive to err on the side of freedom of expression. Such an approach also generally offers no judicial oversight or redress. Germany’s flawed social media law, the NetzDG, has already inspired similar laws regulating online content in more than 13 countries around the world, including in more repressive contexts. Including similar provisions in this regulation would further normalize and encourage such measures and should be avoided.

Lack of independence in monitoring and enforcement: The draft envisions shared responsibility for enforcement among digital services coordinators at the national level, the European Board for Digital Services, and the Commission. The coordinators would have a supervisory role at the national level, with both investigative and enforcement powers, including powers generally reserved for judicial authorities.

However, the draft regulation does not require these bodies to be fully independent, only that they operate independently. It also gives the Commission supervisory, investigative, enforcement, and monitoring responsibilities over the obligations of very large online platforms, without requiring independent judicial review. By taking on a supervisory role, the Commission, an executive body, blurs the separation of powers that is essential to ensuring checks and balances among EU institutions. EU lawmakers should strive for a solution that provides both structurally and functionally independent oversight under the regulation.

The European Parliament should consolidate the progress made in the DSA and support amendments that protect people from human rights violations online by both governments and companies, Human Rights Watch said.

“The DSA is an opportunity for the EU to act more decisively to protect people from violations and abuses linked to online content, as well as from online surveillance and censorship, through meaningful, rights-respecting regulation of internet platforms,” said Brown. “The EU should amend the regulation to ensure that it rises to this challenge and puts human rights ahead of tech companies’ profits.”
