Facebook, Twitter and social media in times of COVID-19 and #BlackLivesMatter

Global
Document Type: Op-ed
Date: 2020

An Opinion Editorial by Carlos Lopez, Senior Legal Adviser at the ICJ in Geneva.

Facebook's decision to leave up a post by US President Donald Trump that its peer platform Twitter labelled as incitement to violence has sparked controversy and pushed the debate about social media's role in moderating user content to the forefront of the public agenda.

Confronted with growing instances of disinformation, “fake news” and hate speech, most social media platforms are moving from an initially neutral position, refusing to “arbitrate” what is seen as the exercise of a right to free speech, to a more or less active stance that sometimes leads to decisions to delete contested content. What is the right balance for social media companies’ content moderation policies? What objective parameters should they use to define those policies? Is there a role for governments in this field? Should they regulate and, if so, in which direction? These questions are central to strategies to protect human rights in the context of social media platforms’ activities.

The incident that triggered the present backlash is the latest in a string of similar episodes involving high-ranking public officials in different parts of the world.

Donald Trump’s tweet “when the looting starts, the shooting starts”, posted in relation to social protests in the United States, was widely regarded as a threat and potential incitement to violence; it is a direct, and much-rebuked, quote of a phrase used by a Miami police chief during civil rights protests in 1967.

Another tweet that met with widespread disapproval was Trump’s threat to send the military to Minneapolis, the epicentre of widespread popular protests against the killing of George Floyd, an African American man, by a police officer who knelt on his neck for nearly ten minutes while Floyd lay on the ground, already overpowered and struggling to breathe.

But the refusal by Facebook and its CEO Mark Zuckerberg to follow Twitter’s example and attach a warning to Trump’s tweets about their dangerous nature has also received wide condemnation, including from both rank-and-file and prominent Facebook employees.

In response to that criticism, Zuckerberg stood by his decision, refusing to be the “arbiter of truth”. Where Twitter masked the potentially harmful message behind a public warning and used algorithms to limit users’ interaction with it, Facebook allowed the post to appear and be shared without hindrance.

This stance, unacceptable from an ethical point of view, is also problematic in light of international standards on businesses’ human rights responsibilities, including the UN Guiding Principles, which require companies to avoid contributing to harmful conduct by others.

It may also raise issues of legal liability for social media companies where serious crimes are committed at the instigation, or with the facilitation, of content they allowed to be published in the knowledge of its likely impact. Standing by while such content is widely shared, in full knowledge of its likely harmful impact, is unethical, incompatible with human rights standards and, in certain circumstances, can give rise to legal responsibility.

Refusing to act when one is in a position to do so, knowing that inaction is likely to instigate crimes, may also trigger legal responsibility. Companies should instead take reasonable due diligence measures to prevent clearly harmful content from being published and disseminated on their platforms.

Although a few days later Zuckerberg, in an apparent change of tack, announced that Facebook would internally consider options regarding its content moderation policies, he promised neither concrete changes nor a timeframe.

In tackling violence and harm to human rights, company policies and procedures matter. In recent years, social media platforms such as Twitter, Facebook, Instagram and YouTube have revamped their content moderation policies in response to growing public concern.

Facebook has recently established an Oversight Board, a purportedly independent body to hear appeals against disputed content moderation decisions, drafted its by-laws and appointed half of its membership. The ongoing COVID-19 pandemic is also serving as a catalyst for action, given the widespread circulation of harmful disinformation.

Facebook also reacted to incidents in 2017 in which the platform was used by Buddhist extremists and military officials in Myanmar to incite hatred and violence against the Rohingya, that country’s Muslim minority.

But most of these policies are still to be implemented and their effectiveness is yet to be proved. Policies also differ widely across the social media spectrum, as the dramatic events around the killing of George Floyd show.

But for companies to adopt and effectively implement sound policies and actions to respect human rights, internal leadership at the highest level is essential, especially in companies owned by one or a few individuals, as is the case with many of the major social media platforms. Here there has been a serious failure that must be corrected if meaningful changes to policies and procedures are to take place.

The spotlight on social media companies’ policies, actions and leadership should not obscure the equally crucial role that States have under international human rights law to take action to protect human rights.

With a few exceptions, States have also been failing in their duty to protect human rights in the context of the activities of social media and other tech companies, generally opting for abstention for fear of impinging on the exercise of human rights and fundamental freedoms.

President Trump’s threat to regulate social media by exposing platforms to a heightened risk of legal liability for the content they allow users to post is neither the best nor the most human rights-compatible form of State action.

Retaliation against a social media company, Twitter in this case, for acting responsibly is also unacceptable. Instead, regulation that follows international human rights standards, especially those on the freedoms of expression and opinion and the right to seek information, is both possible and needed.

Some States have adopted punitive approaches that restrict freedoms and give governments increased control over social media. The ICJ’s report on practices across Asian countries illustrates the harmful nature of such regulation and why it should be changed.

These legal frameworks typically contain vague and broadly defined provisions and severe and disproportionate penalties, lack independent oversight mechanisms, and fail to provide effective remedies or accountability in cases of abuse.

But regulation that delineates the responsibilities of all actors, and their possible legal liability for misconduct, based on guidance from international human rights law is possible, and it could also be effective in tackling disinformation and various forms of “fake news”.

Both the UN Committee on the Rights of the Child and the Committee on Economic, Social and Cultural Rights have said that their respective treaties require States to ensure that businesses under their jurisdiction adopt policies to respect human rights and put in place due diligence and remediation processes to avoid or mitigate the risks of human rights violations.

Clearly, neither social media companies nor their CEOs can be left to their own devices. States urgently need to take action in compliance with their international obligations in this respect.
