Following a terrorist attack on Christchurch mosques in March 2019, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron issued the “Christchurch Call.” This article summarizes steps New Zealand has taken and some recommendations of the Royal Commission of Inquiry that reported in November 2020. Policymakers everywhere face hard questions on balancing freedom of expression with protection from harm.
Social Media-Fueled Terrorism
On March 15, 2019, a white nationalist terrorist shot and killed fifty-one people and injured forty more at two mosques in Christchurch, New Zealand during Friday prayer. Shortly before the attacks, he uploaded a manifesto to seven file-sharing websites and shared the links on Facebook, then livestreamed the first seventeen minutes of the attack on Facebook Live. It was viewed around 4,000 times before Facebook took it down. Initially, social media algorithms may have “recommended” it as trending content.
During the next twenty-four hours, Facebook removed 1.5 million copies of the video, but it had gone viral and copies were re-posted with altered digital identifiers. At one point, YouTube was removing one copy per second. Between 15 March and 30 September 2019, Facebook took down 4.5 million pieces of content related to the Christchurch attack, reporting that “the Christchurch attack was unprecedented in both the use of live streaming technology and the rapid sharing of video, image, audio and text-based content depicting the attack.” The manifesto and livestream inspired subsequent actual or planned attacks in the United States, Germany, Norway, and Singapore, marking “a grim new age of social media-fueled terrorism.” These incidents raise pressing questions for policymakers about extremism, “hate speech,” and the regulation of social media.
The Christchurch Call
Two months after the terrorist attack in Christchurch, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron hosted a summit in Paris that brought together government and technology sector leaders to adopt the Christchurch Call “to eliminate terrorist and violent extremist content online.” This is an aspirational and well-intentioned, yet impossible, goal in a global game of Whack-a-Mole. Impossibility is not, however, an excuse to do nothing. One year later, Ardern and Macron issued a joint statement reporting progress in multi-stakeholder collaboration, the restructure of the Global Internet Forum to Combat Terrorism, and the adoption and testing of a Content Incident Protocol. This protocol ensured that content from a copycat terrorist attack in Halle, Germany, in October 2019 had significantly less online impact than the Christchurch attack.
Meanwhile in New Zealand
In addition to this international effort, New Zealand Justice Minister Andrew Little initiated a review of “hate speech” laws after the attack. (Currently, sections 61 and 131 of the Human Rights Act 1993 address inciting racial disharmony on the basis of colour, race, or ethnic or national origins, but not religion.) The Islamic Women’s Council, the Federation of Islamic Associations of New Zealand, and others have called for the specific recognition of “hate crimes” and “hate speech,” a safe system (with a single process) to report “hate speech” and “hate crimes,” and for that system to be linked to security agencies’ databases. Any proposals emerging from this review, on which the public was not consulted, failed to gain support among coalition parties before the October 2020 election.
The Government did, however, reform gun control laws and introduce amendments to censorship legislation to address regulatory gaps and authorize an expanded web filter to block access to violent extremist and other “objectionable” online content. In December 2020, the Government released the report of the Royal Commission of Inquiry with a companion report on “hate speech” and “hate crime”-related legislation. Four of forty-four recommendations concern “hate speech” and “hate crime.” They include creating a separate category of “hate crime” offences in the Summary Offences Act and the Crimes Act, with hate motivation to be recognized as an element of (existing) offences; and repealing s131 of the Human Rights Act and inserting a new provision in the Crimes Act for an offence of “inciting racial or religious disharmony.”
The latter recommendation, and the Commission’s proposed wording of a new provision in the Crimes Act, will, and should, provoke debate. In particular, the Commission recommends including religion as a “protected characteristic” without any qualification along the lines of s29J of the United Kingdom’s Public Order Act 1986, which protects the freedom to discuss, criticize, or express antipathy, dislike, ridicule, insult, or abuse of particular religions or the beliefs or practices of their adherents, or similar qualifications in s18D of Australia’s Racial Discrimination Act 1975 (inserted by the Racial Hatred Act 1995) and s319(3) of Canada’s Criminal Code. Ardern has stated that the Government will consult with community groups and parties across Parliament to test proposals before bringing forward legislative change and “take the time to get it right.” Indeed, striking a fair balance in regulating social media and reducing harmful online and offline communication while protecting freedom of expression raises hard questions not just for Ardern, but for policymakers in any free, open, and democratic society.
The Perfect as the Enemy of the Good?
The challenge for policymakers is to preserve a free, open, and secure internet and freedom of expression while protecting from harm, maintaining public order, promoting social cohesion, and ensuring that the law is enforceable. The global dominance of the so-called FAANG (Facebook, Apple, Amazon, Netflix, Google) or FAMGA (Facebook, Apple, Microsoft, Google, Amazon) digital media and communications platform companies compounds this challenge. The President of the European Commission has called on the United States to join the European Union’s attempts to “contain the immense power” of Big Tech and create a “digital economy rule book.” UN Secretary-General António Guterres has called for a global regulatory framework.
New Zealand’s Chief Censor has commented, “The core of the debate to come is: ‘Is the perfect the enemy of the good…?’ It’s quite clear that no regulatory response is going to be perfect. But does that mean we just give up and do nothing?” No, but any policy approach must address three key questions.
First, who calls the shots? Following the storming of the US Capitol on January 6, 2021, Twitter permanently banned then President Donald Trump from using its platform. Facebook and Instagram indefinitely blocked his accounts, and Snapchat, Pinterest, Reddit, YouTube, Twitch, and Shopify limited his access to their services. A private company is within its rights to terminate a contract for services that a user has voluntarily agreed to, but for all the awfulness of Trump’s communications, it was extraordinary and unprecedented for an incumbent, democratically elected head of state to be blocked from communicating to millions of his supporters.
Facebook has referred its decision to its new (2020) Independent Oversight Board for review. While this provides a measure of independent accountability, in effect it is a new form of transnational, corporate governance acting as a court to determine the boundaries of free speech. Decisions to restrict freedom of expression should be made within a framework of laws defined by democratically elected legislators and be open to review and appeal. In the global governance of digital intermediaries, what is the “right mix” of governmental and inter-governmental regulation, industry self-regulation, industry-wide standards, multi-lateral, multi-stakeholder agreements and initiatives, technology innovation, and market pressure by advertisers, consumers, and service users?
Second, should regulations be framed around hate or harm? International law distinguishes between public communication that constitutes a criminal offence, communication that may not be criminally punishable but may justify a civil suit, and “lawful hate speech” that is not subject to criminal or civil sanctions but still raises concerns about tolerance, civility, and respect for others. Regulation should focus not on the emotion of hate but on the effect of harm caused by public communication that incites discrimination, hostility, or violence against a social group with a common “protected characteristic” such as nationality, race, or religion.
A democratic state cannot justifiably restrict freedom of opinion and expression by criminalizing criticism, dislike, “hurtful” remarks, or even hatred. Government has no business prescribing, or proscribing, what citizens are to feel, think, believe, or value. Governments can, however, encourage and support counter-speech strategies as alternatives or complements to regulation. Options include investment in public education programs in civics, human rights, conflict resolution, and digital literacy; building stronger partnerships with communities, civil society groups, public sector institutions, and industry; reducing inequalities and marginalization on all fronts, with outreach, early intervention, and rehabilitation to prevent extremism from taking root; and investment in well-funded public broadcasting that provides access to authoritative information and diverse ideas. All of this, however, costs money. Governments can also withhold non-profit status and tax privileges from groups that fail to respect the values of freedom and equality that underpin democratic societies.
Third, can policymakers protect believers without protecting beliefs? Especially since the UN Human Rights Council’s adoption of Resolution 16/18 in 2011 and the Rabat Plan of Action in 2012, international law has distinguished between (justifiable) protection of religious believers and (unjustifiable) protection of religious beliefs. The Rabat Plan recommended that states repeal any blasphemy laws. Communities of faith can reasonably expect the state to use its coercive powers to protect them from harmful public communication that incites discrimination, hostility, or violence. They cannot reasonably expect the state to protect them, or their beliefs, values, and practices, from criticism, insult, “hurtful” remarks, satire, offence, or ridicule. And their being offended does not justify retaliatory acts of violence or incitement to violence, as in the beheading of French teacher Samuel Paty in October 2020 and subsequent Islamist terrorist attacks in Nice and Vienna.
In brief, empathy with the victims of “hate crimes” should not mislead policymakers in New Zealand or elsewhere into resuscitating blasphemy laws. We do not yet know the New Zealand Government’s policy proposals, but they will need to strike a fair balance between protecting communities of faith from harm and preserving freedom of opinion and expression.
Restore Civility and Stop the Shouting
While governments need to regulate harmful communication that incites discrimination, hostility, or violence, there are limits to what the state can or should do to enable citizens in pluralist societies to resolve conflict without recourse to domination, humiliation, cruelty, or violence. Recovery of civility is everyone’s responsibility. That includes pulling back from angry, “woke” virtue signaling and “cancel culture,” and “calling in” instead of “calling out” those with whom we disagree. As President Joe Biden said in his inauguration speech on January 20, 2021, it’s time to “stop the shouting, and lower the temperature.”
. . .
Dr. David Bromell worked in senior policy advisory roles in central and local government in New Zealand from 2003 to 2020. He is a Senior Associate of the Institute for Governance and Policy Studies (IGPS), School of Government, Victoria University of Wellington, and an Adjunct Senior Fellow in the Department of Political Science and International Relations at the University of Canterbury. Currently, he is a research fellow at the Center for Advanced Internet Studies (CAIS) in Bochum, NRW, Germany. During March–April 2021, his research on the Christchurch Call and related issues will be published in a series of seven working papers on the IGPS website.