Title: A Proposed International Human Rights Law Framework for Facebook and Its Pending Oversight Board’s Content Regulations
In November 2018, Mark Zuckerberg proposed a new “Supreme Court” for Facebook, the “Facebook Oversight Board,” which will handle content moderation issues. The proposal is a response to widespread incitement of violence on Facebook and WhatsApp that resulted in real-life violence in countries such as Sri Lanka, India, New Zealand, and particularly Myanmar. Facebook policy teams and the Facebook Oversight Board should apply international human rights law (IHRL) to content moderation rules and decisions on the Facebook platform. I have proposed a structured IHRL framework to do so, adapting multilateral treaties written for nation states, in particular the International Covenant on Civil and Political Rights (ICCPR), to private sector social media companies.
The ICCPR is the primary multilateral treaty concerned with freedom of expression and currently has 173 states parties. Since Facebook is a worldwide platform and content on Facebook generally constitutes expression, the ICCPR is the international law treaty most applicable to the regulation of Facebook content. Articles 19 and 20 of the ICCPR are the most pertinent: Article 19 sets out the IHRL right to freedom of expression and permissible restrictions on that right; Article 20 outlines a narrow band of expression that states parties must prohibit by law.
The IHRL framework applies whenever Facebook regulates content, including drafting Community Standards (CS), amending CS, repealing CS, other content moderation outside of the CS (e.g. the current Facebook COVID-19 policy), and enforcement of these content regulations. The framework I propose first looks at the narrow category of speech required to be prohibited under Article 20, and then looks to Article 19 to see if the speech may be restricted.
The Article 20 test is not applicable in most cases. For a required prohibition, speech must consist of advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence. Such cases will likely comprise a narrow band.
The Article 19 test applies in most cases. Adapting the text of Article 19 to social media companies, a restriction on speech must satisfy three prongs: legality, legitimacy, and necessity and proportionality. For example, the legality prong applied to Facebook’s recent COVID-19 policy would ask whether the guideline is precise and public, and how much discretion Facebook retains in applying it. Today, Facebook makes up its content rules unilaterally, but the Oversight Board is supposed to add independent review. As another example, look to Facebook’s initial rules, or lack thereof, on ethnic armed organizations. At the time Facebook banned four ethnic armed groups in Myanmar, its rules for doing so lacked definitions and were not precise: Facebook categorized groups as ethnic armed organizations in secret, and Facebook had unilateral discretion over its decisions. Under my proposed legality test, such a rule would be struck down by the Oversight Board.
The second Article 19 prong requires Facebook to present a “legitimate interest” for the content restriction. There are five categories of interests: public health, public order, rights and reputations of others, morals, and national security. Public health is straightforward, covering instances like the current COVID-19 global pandemic. The scope of public order, meanwhile, depends in part on the user experience of the tech platform. For example, Tinder is a dating app, so it may restrict expression in order to maintain the central dating user experience. An app like Facebook, which covers much of human expression, has a different public order interest and may need to be more or less permissive of expression depending on the circumstances. The rights and reputations of others covers IP rights, the right to vote, and potentially public figure and defamation issues, though the last two especially will require much more analysis from commentators and the Board.
Notably, morals and national security are difficult interests to apply. In my view, the morals interest should be invoked only when there is a widely adopted multilateral treaty with enumerated morality rules, though it is up to the Oversight Board and commentators to define “widely adopted”. With regard to national security, private sector companies should not assert a national security interest: they are not nation states, and doing so could open the door for actual nation states to force companies to comply with state interests.
If Facebook has presented a legitimate interest, the test continues to the next prong. Otherwise, the rule should be struck down.
The last Article 19 prong is necessity and proportionality. The restriction must be the least restrictive means of achieving the legitimate interest presented. In practice, this prong is a balancing test. Facebook is not a nation state and thus cannot imprison users or levy fines on them. However, Facebook can limit sharing and virality, take down content, temporarily ban users, and permanently ban users. My proposed test balances these four punishments against the content at issue.
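To make the sequencing of the framework concrete, the following is a minimal sketch, in Python, of the order of analysis described above. The class, field, and function names, and the simple boolean checks, are my own illustrative assumptions rather than anything Facebook or the Oversight Board has adopted; each step would of course involve legal judgment rather than a mechanical check.

```python
from dataclasses import dataclass
from typing import Optional

# The five Article 19(3) interests, adapted to a private platform.
LEGITIMATE_INTERESTS = {
    "public health", "public order", "rights and reputations of others",
    "morals", "national security",
}

@dataclass
class ContentRule:
    text: str                      # the rule as written (e.g. a Community Standard)
    is_public: bool                # published, not applied in secret
    is_precise: bool               # defined terms, limited discretion
    has_independent_review: bool   # e.g. review by the Oversight Board
    asserted_interest: Optional[str] = None  # Article 19(3) interest Facebook invokes

def article_20_requires_prohibition(advocacy_of_hatred: bool,
                                    constitutes_incitement: bool,
                                    likely_harm: bool) -> bool:
    """Narrow Article 20 gate: advocacy of national, racial, or religious hatred
    that constitutes incitement to discrimination, hostility, or violence."""
    return advocacy_of_hatred and constitutes_incitement and likely_harm

def article_19_permits_restriction(rule: ContentRule) -> bool:
    """Three-prong Article 19 test: legality, legitimacy, then necessity
    and proportionality (handled by a separate balancing test)."""
    # Prong 1: legality -- the rule must be precise, public, and subject to review.
    if not (rule.is_public and rule.is_precise and rule.has_independent_review):
        return False  # the rule should be struck down
    # Prong 2: legitimacy -- Facebook must present one of the five interests,
    # and in this framework a private company should not rely on national security.
    if rule.asserted_interest not in LEGITIMATE_INTERESTS:
        return False
    if rule.asserted_interest == "national security":
        return False
    # Prong 3: necessity and proportionality -- deferred to the balancing
    # test (e.g. the adapted Rabat factors) discussed below.
    return True

# Hypothetical usage: a public-health rule that is precise, public, and reviewable.
covid_rule = ContentRule(
    text="Remove COVID-19 misinformation that contributes to imminent physical harm",
    is_public=True, is_precise=True, has_independent_review=True,
    asserted_interest="public health",
)
print(article_19_permits_restriction(covid_rule))  # True under these assumptions
```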
I propose adapting David Kaye’s proposal to apply the Rabat Plan of Action (“Rabat”). The Rabat Plan of Action emerged out of expert workshops convened by the Office of the United Nations High Commissioner for Human Rights on the prohibition of advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence. Rabat attempts to interpret vague terms in Article 20, such as “incitement,” “hatred,” and “advocacy.” Rabat also presents six factors which, if satisfied, would permit criminalization of certain expression in accordance with Article 20. I propose that the Rabat test, adapted to Article 19 and stripped of its criminalization threshold, could process the facts of content moderation cases to determine necessity and proportionality.
The six factors are the following: social and political context; status of speaker; intent; content and form of speech; magnitude and size of audience; and likelihood, including imminence of harm.
Since this is an adapted Rabat test, and not the criminalization severity test Rabat was originally written for, a content moderation case need not satisfy all six factors. A given case can meet some factors and not others, to varying degrees. The more strongly the Rabat factors are present, the more severe the acceptable punishment; the more weakly they are present, the milder the acceptable punishment. Essentially, the test determines whether the severity of the punishment is proportional to the content.
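As an illustration only, the balancing could be sketched as follows. The numeric scores, thresholds, and the mapping from factor strength to the four punishments are invented for the example; they are not part of the Rabat Plan of Action, my framework, or any Board procedure, and the scoring of each factor would itself be a matter of legal judgment.

```python
from dataclasses import dataclass

@dataclass
class RabatFactors:
    """Each factor scored 0.0 (absent) to 1.0 (strongly present).
    The scoring is a legal judgment, not a computation."""
    context: float           # social and political context
    speaker_status: float    # status of the speaker
    intent: float
    content_and_form: float  # content and form of the speech
    reach: float             # magnitude and size of audience
    likelihood: float        # likelihood, including imminence, of harm

# The four punishments available to Facebook, ordered from least to most severe.
PUNISHMENTS = [
    "limit sharing and virality",
    "take down content",
    "temporary user ban",
    "permanent user ban",
]

def proportionate_punishment(f: RabatFactors) -> str:
    """Map the overall strength of the factors to the most severe punishment
    that would still be proportionate (illustrative thresholds only)."""
    strength = (f.context + f.speaker_status + f.intent +
                f.content_and_form + f.reach + f.likelihood) / 6.0
    if strength < 0.25:
        return "no restriction"
    elif strength < 0.5:
        return PUNISHMENTS[0]
    elif strength < 0.75:
        return PUNISHMENTS[1]
    elif strength < 0.9:
        return PUNISHMENTS[2]
    return PUNISHMENTS[3]

# Hypothetical case: high-reach incitement by an influential speaker in a tense context.
print(proportionate_punishment(RabatFactors(0.9, 0.8, 0.9, 0.8, 0.9, 0.9)))
```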
The Oversight Board could conceive of balancing tests other than Rabat as new content moderation situations arise; technology and content moderation on tech platforms are evolving so rapidly that unforeseen content scenarios will likely come up in the future. However, Rabat could be a useful test as part of the proportionality prong of the Article 19 analysis as applied to social media companies.
I hope this framework gives the Oversight Board a functional test with which to make clear and predictable decisions, balancing the right to freedom of expression against social media companies’ interest in preventing harm on their platforms in a more concrete and transparent way than current Facebook regulations do. By drawing on IHRL, Facebook’s content regulations gain legitimacy: better to have the consent of the 173 states parties to the ICCPR than the status quo, in which Facebook makes up rules with the consent of zero countries. This framework is merely a start, and I expect that commentators, human rights and civil society organizations, Facebook, and the Oversight Board itself will continue to refine and hone this international human rights law framework so that content regulations have sufficient transparency, process, reasoning, and independent review.
. . .
Michael Lwin is the Managing Director and Co-Founder of Koe Koe Tech, an IT social enterprise in Yangon, Myanmar. Koe Koe Tech has been working on Facebook content detection algorithms for Myanmar languages that conform to international human rights law, and on training civil society organizations to flag content as potentially violative of international human rights law.