Decoding Intent in Two (Seemingly Contradictory) Facebook Oversight Board Decisions on Hate Speech*

*Primary research and drafting by: Sophia Papaioannou, J.D. Candidate (Class of 2022), UC Irvine School of Law


Is speaker intent the pivotal element that explains the different outcomes of two hate speech cases decided by the Facebook Oversight Board? In one case, a user wrote the Russian word for “wash bowl,” “taziks”—which has embedded in it an ethnic slur (“azik”)—to describe Azerbaijanis in a post intended to raise awareness of the destruction of Armenian-built heritage in Baku, including its churches.[1] In the second case, a user in Myanmar posted a widely shared photo of a Syrian toddler who drowned attempting to reach Europe and captioned the post with the statement that “those male Muslims have something wrong in their mindset.”[2] The user also stated that their sympathy for the deceased child was diminished by recent violence in France, and the post implied that the child might have grown up to be an extremist.[3] Although both of these cases involved offensive speech aimed at marginalized groups, the Board reached different conclusions in each—finding that one user intended to insult and dehumanize while the other intended to raise awareness and engage in social commentary. The instant working paper reflects on this apparent contradiction and teases out the role of each user’s intent in the Board’s assessment of hate speech under applicable international human rights standards.

I. The Board’s Contrasting Decisions in Seemingly Analogous Cases

As demonstrated below, the pertinent facts of the two cases are substantially analogous. Yet the Board upheld Facebook’s content removal in the Azerbaijan case while overturning its removal in the Myanmar case. For each case, the Board provided two separate analyses: one predicated on Facebook’s Community Standard on Hate Speech and stated company values, the other rooted in applicable international human rights standards (the analysis addressed in this paper).[4] One distinguishing factor was the surmised intent of the users who posted the content at issue.

A.  The Board’s Observations Regarding Intent in the Azerbaijan Case

In the Azerbaijan case decision (Case Decision 2020-003-FB-UA), the Board analyzed a post stating in Russian that Armenians had built Baku, Azerbaijan, that this heritage had been destroyed, and that Azerbaijanis were nomads with no history compared to Armenians.[5] The post used the word “taziks”—meaning “wash bowl” in Russian—to describe Azerbaijanis.[6] According to independent linguistic analysis commissioned by the Board, “taziks” can also be understood as “wordplay on the Russian word ‘aziks,’ a derogatory term for Azerbaijanis,” which is included in Facebook’s internal list of prohibited slur terms.[7] The user who posted the content denied that it was hate speech and claimed the intent behind the post was to demonstrate the destruction of Baku’s cultural and religious heritage.[8]

Notwithstanding this claim, a majority of the Board concluded that the post was intended to insult and dehumanize Azerbaijanis and therefore warranted removal.[9] The Board found it significant that the content was disseminated during an armed conflict between Armenia and Azerbaijan, although it found no incitement.[10] The Board’s decision placed particular emphasis on the slur, finding that the word “aziks” clearly violated Facebook’s hate speech prohibition and noting that although words may be demeaning in one context while benign in another, connecting a national identity to an inanimate, unclean object “plainly” qualified as an “insulting label.”[11] Additionally, the Board observed that the user attempted to conceal the slur from Facebook’s automated detection tools by placing punctuation between the slur’s letters (spelling “t.a.z.i.k.s.”), indicating that the user had a “subjective understanding” that the word used was prohibited.[12] The majority “noted that the post, when read as a whole, made clear the user’s choice of slur was not incidental but central to the user’s argument that the target group was inferior.”[13]

B. The Board’s Observations Regarding Intent in the Myanmar Case

In the Myanmar decision (Case Decision 2020-002-FB-UA), the Board unanimously found that a post showing a drowned Syrian toddler of Kurdish ethnicity with accompanying Burmese text stating “those male Muslims have something wrong in their mindset” did not constitute hate speech and therefore did not warrant removal.[14] The post questioned the general lack of response by Muslims to the mistreatment of Uyghur Muslims in China compared to killings in response to cartoon depictions of the Prophet Muhammad in France.[15] The post also indicated that the user’s sympathies for the depicted child had been lessened by recent events in France and implied that the child might have grown up to be an extremist.[16] The user claimed that the post was sarcastic and intended to compare extremist religious responses in different countries.[17] The user added that they were opposed to all forms of religious extremism.[18]

In its reasoning, the Board assessed the intent behind the post. The Board found that the post, although pejorative and offensive towards Muslims, “did not advocate hatred or intentionally incite any form of imminent harm.”[19] Additionally, the Board considered the user’s claims of sarcastic intent and opposition to religious extremism.[20] Also, the Board noted that the content was posted within a group claiming to be for “intellectual and philosophical discussion.”[21] These factors, taken together, led the Board to conclude that the user intended his post as a “commentary on apparent inconsistencies between Muslim reactions to events in France and in China.”[22]

II. Applicable International Human Rights Standards

The United Nations Guiding Principles on Business and Human Rights (“UNGPs”) provide an authoritative framework for how companies should respect human rights through policy, due diligence, implementation and remedy.[23] UNGP 31 sets forth a list of effectiveness criteria for company grievance mechanisms, such as the Oversight Board. Pursuant to these criteria, grievance mechanisms must strive to be predictable and legitimate, such that decision-making is clear and consistent and stakeholders can trust in its processes.[24] These mechanisms must also be rights-compatible, meaning that they should “ensur[e] that outcomes and remedies accord with internationally recognized human rights.”[25]

Among these internationally recognized human rights is the right to freedom of expression, as enshrined in Article 19 of the International Covenant on Civil and Political Rights (“ICCPR”). This right protects the ability to seek, receive and impart information and ideas of all kinds—even those that are inaccurate or may cause offense.[26] Article 19(3) recognizes that expression may be subject to a limited set of narrow restrictions, which must satisfy the three-part test of legality, legitimacy, and necessity and proportionality.[27] Such restrictions satisfy the legitimacy prong if they aim to respect the rights of others.[28] The rights of others include the right to non-discrimination under ICCPR Articles 2 and 26.[29]

Meanwhile, Article 20 of the ICCPR requires States to prohibit “any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence.”[30] Advocacy to incite such acts must necessarily entail the speaker’s intent to do so.[31] Indeed, the UN Rabat Plan of Action, which elaborates on Article 20(2),[32] explains that “’[h]atred’ and ‘hostility’ refer to intense and irrational emotions of opprobrium, enmity and detestation towards the target group; the term ‘advocacy’ is to be understood as requiring an intention to promote hatred publicly towards the target group. . . ”[33] Intent to incite is one of six factors set forth by the Rabat Plan to determine the severity of incitement under Article 20.[34]

The requisite intent for prohibited hate speech is a high bar. Expression used to “provoke strong feelings” and without the aim of inciting violence, discrimination or hostility does not satisfy this threshold.[35] Such intent must be more than negligence or recklessness to cause harm and must be more than the intent to merely distribute offensive material.[36] As the UN Special Rapporteur on freedom of opinion and expression has made clear:

A person who is not advocating hatred that constitutes incitement to discrimination, hostility or violence, for example, a person advocating a minority or even offensive interpretation of a religious tenet or historical event, or a person sharing examples of hatred and incitement to report on or raise awareness of the issue, is not to be silenced under article 20 (or any other provision of human rights law).[37]

The Special Rapporteur has also affirmed that international human rights law “protects the rights to offend and mock.”[38] At base, “insults, ridicule or slander of persons or groups or justification of hatred, contempt or discrimination” may only be prohibited where it “clearly amounts to incitement to hatred or discrimination.”[39]

III. The Role of Speaker Intent in the Azerbaijan and Myanmar Case Decisions

The Oversight Board’s divergent decisions in the analogous Azerbaijan and Myanmar cases call into question its adherence to the UNGPs. Indeed, the Board’s inconsistent findings as to whether the content at issue in each case violated international human rights standards may undermine the Board’s predictability and legitimacy, pursuant to UNGP 31. These case decisions may also raise concerns with respect to UNGP 31’s rights-compatibility criterion.

In terms of substantive human rights, the decisions do not make clear exactly what factor(s) distinguished the cases. Both decisions found that the offensive posts were protected by the right to freedom of expression under ICCPR Article 19 and that there was no incitement and no violation of Article 20.[40] Accordingly, each post was analyzed with reference to Article 19(3) and not Article 20 or the Rabat Plan. Nonetheless, the speaker’s intent seems to be a likely candidate for a distinguishing factor.

In particular, the surmised intent of the user in the Myanmar case seemed to be highly persuasive to the Board and possibly dispositive of its Article 19(3) analysis. The Board found that “while some may consider the post offensive and insulting towards Muslims, the Board does not consider its removal to be necessary to protect the rights of others.”[41] In support of its conclusion, the Board cited the user’s purported sarcasm, the user’s statement that they are opposed to religious extremism, and the fact that the post was in a Facebook group for intellectual and philosophical discussion.[42] Meanwhile, in the Azerbaijan case, the Board seemed to cast doubt on the user’s claimed intent to highlight harms committed against Armenian culture, due to their use of a slur.[43]

Perhaps the Board’s approaches to these cases placed outsized emphasis on speaker intent, especially given that no incitement under Article 20 was found in either case. Arguably, emphasizing intent—at the expense of emphasizing the detrimental effects of speech—is antithetical to the right to non-discrimination under international human rights law. For example, the International Convention on the Elimination of All Forms of Racial Discrimination defines racial discrimination as “any distinction, exclusion, restriction or preference based on race, colour, descent, or national or ethnic origin which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of human rights . . . .”[44] Yet the Board did not seem to put substantial emphasis on the post’s effects when concluding there was no threat to the rights of others.

The effects of the content in the Myanmar case were potentially severe. Insinuating that a deceased child refugee does not deserve sympathy because they may have grown up to be an extremist could promote hostility against Muslims. The Board even recognized that “Facebook’s sensitivity to the possibility of anti-Muslim hate speech in Myanmar [wa]s understandable, given the history of violence and discrimination against Muslims in that country.”[45] The potentially devastating effects of such speech were laid bare in the 2018 report of the U.N. Human Rights Council’s independent fact-finding mission on Myanmar, which found that Facebook played a “significant” role in the genocide as a “useful instrument for those seeking to spread hate.”[46]

IV. Conclusion

A side-by-side comparison of the Myanmar and Azerbaijan cases indicates that speaker intent played a pivotal role as a distinguishing factor. In particular, the Board seemed to place substantial weight on the user’s claim of sarcastic intent in the Myanmar case. This underscores the importance of user statements in the Board’s decision-making process—and may highlight the need for the provision of advocates to help users craft compelling statements.


[1] Oversight Board, Case Decision 2020-003-FB-UA (Jan. 18, 2021) [hereinafter “Azerbaijan Case”].

[2] Oversight Board, Case Decision 2020-002-FB-UA (Jan. 18, 2021) [hereinafter “Myanmar Case”].

[3] Id. ¶ 2.

[4] Facebook’s operative Community Standard on Hate Speech defines hate speech as “a direct attack on people based on what we call protected characteristics—race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” Azerbaijan Case ¶ 4. Facebook’s company values include Safety (“Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.”) and Dignity (“We expect that people will respect the dignity of others and not harass or degrade others.”). Facebook, Community Standards.

[5] Azerbaijan Case ¶ 2.

[6] Id.

[7] Id.

[8] Id. ¶ 5.

[9] Meanwhile, a minority of the Board’s membership found that the post’s reference to an inanimate object was “offensive, but not dehumanizing.” Id. ¶ 8.3.

[10] Id.

[11] Id.

[12] Id.

[13] Id.  

[14] Myanmar Case ¶ 8.

[15] Id. ¶ 2.

[16] Id.

[17] Id. ¶ 5.

[18] Id. ¶ 8.3.

[19] Id.

[20] Id.

[21] Id.

[22] Id. ¶ 8.1.

[23] United Nations Guiding Principles on Business and Human Rights, U.N. Doc. A/HRC/17/31 (Mar. 21, 2011) [hereinafter “UNGPs”]. According to the UN Working Group on human rights and transnational corporations and other business enterprises, businesses should be required to do more than simply “respect human rights” when warning signals of a violation are apparent. Such warning signals include “events or measures that create an environment conducive to serious human rights abuses or which suggests a trajectory towards their perpetration”; this includes “increased inflammatory rhetoric or hate speech targeting specific groups or individuals.” Issue of Human Rights and Transnational Corporations and other Business Enterprises, UN General Assembly (July 21, 2020).

[24] UNGPs, Principle 31(a), (c).

[25] UNGPs, Principle 31(f).

[26] International Covenant on Civil and Political Rights, art. 19(3) [hereinafter “ICCPR”]; United Nations Human Rights Committee, General Comment No. 34 ¶ 11 (Sept. 12, 2011).

[27] ICCPR, art. 19(3).

[28] ICCPR, art. 19(3)(a).

[29] ICCPR Article 2(1) guarantees rights to all individuals “without distinction of any kind,” and Article 26 provides that “the law shall prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground.”

[30] ICCPR, art. 20(2).

[31] Article 19, Belarus: Right to Freedom of Expression and ‘Extremism’ Restrictions (Nov. 2020).

[32] Freedom of Expression vs Incitement to Hatred: OHCHR and the Rabat Plan of Action, OHCHR (Mar. 14, 2021).

[33] Report of the United Nations High Commissioner for Human Rights on the expert workshops on the prohibition of incitement to national, racial or religious hatred, U.N. Doc. A/HRC/22/17/Add.4, appendix, footnote 5.

[34] One-Pager on “Incitement to Hatred”, OHCHR (Mar. 15, 2021). These factors are: (1) the social and political context surrounding the speech, (2) the status of the speaker, (3) the intent to incite the audience against a target group, (4) the content and form of the speech, (5) the extent of its dissemination, and (6) the likelihood of harm. Id.

[35] Id.

[36] Id.; Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression on hate speech, U.N. Doc. A/74/486 (Oct. 9, 2019), ¶ 14 [hereinafter “UNSR Report”].

[37] UNSR Report ¶ 10.

[38] Id. ¶ 17.

[39] United Nations Committee on the Elimination of Racial Discrimination, General Recommendation No. 35 ¶ 13 (Sept. 26, 2013). The NGO Article 19 has suggested that intent can be assessed by looking at “questions such as how explicit was the language used or whether the language was direct without being explicit,” as well as the tone of the speech and its surrounding circumstances. Article 19, Towards an interpretation of article 20 of the ICCPR: Thresholds for the prohibition of incitement to hatred at 11 (2010).

[40] Myanmar Case ¶ 8.3; Azerbaijan Case ¶ 8.3.

[41] Myanmar Case ¶ 8.3.

[42] Id.

[43] Id.

[44] International Convention on the Elimination of All Forms of Racial Discrimination (“ICERD”), art. 1 (emphasis supplied).

[45] Myanmar Case ¶ 8.3.

[46] Report of the Independent International Fact-finding Mission on Myanmar, U.N. Doc. A/HRC/39/64 ¶ 74 (Sept. 12, 2018).
