UNESCO Guidelines for Regulating Digital Platforms: A Rough Critique

UNESCO’s Guidelines for regulating digital platforms (Draft 2.0) (“draft” or “draft guidelines”) aim “to support the development and implementation of regulatory processes that guarantee freedom of expression and access to information while dealing with content that is illegal and content that risks significant harm to democracy and the enjoyment of human rights.” (¶ 8) The draft makes several contributions to the global debate over the regulation of online platforms. Among other things, it centers human rights, pays attention to the health and safety of content moderators, emphasizes the value of the UN Guiding Principles on Business and Human Rights, opposes general monitoring obligations and, evidently, upload filters (¶ 27(f)), and reinforces multi-stakeholder approaches to internet governance. Throughout the twenty-eight-page document, the draft advances principles drawn directly from human rights law and the UN mechanisms that have interpreted it in the context of the challenges of the digital age.

Still, hovering over the guidelines is an existential question: why are they here? Where is the demand for specific guidance to regulatory systems, as opposed to high-level human rights principles that should guide states, which they, in turn, would implement according to their own traditions of law and regulation? Will these guidelines be helpful for Ofcom in the UK? For the European Commission as it implements the DSA, an already fully articulated regulatory framework? Would they resonate with Indian, American, Brazilian, Argentinian, South African, Korean (and so on) regulatory traditions? I don’t know that the draft’s admixture of specificity and principle, which comes together as a lengthy statement of “guidance”, achieves a concrete goal for democratic states. On the other hand, how might non-democratic states – or the authoritarian-minded within democratic states – use them to articulate a restrictive vision of internet freedom?

Even apart from the overarching question of why, the guidelines raise several questions big and small.

First, the draft introduces a scope whose boundaries are unclear. It purports to address “digital platforms that allow users to disseminate content to the wider public, including social media networks, messaging apps, search engines, app stores, and content-sharing platforms.” ¶ 10(a). What is it to “disseminate content to the wider public”? To be more precise, why does “wider” modify “public”? Some platforms, such as messaging apps, aim to disseminate content point-to-point or to groups of defined size. Others, like certain social media platforms, enable users to limit their content sharing to defined groups of account-holders. What is covered here? The guidelines immediately suggest that it would be up to national regulatory bodies to determine their scope, including by identifying platforms by “size, reach, and the services they provide,” among other things. Id. But this is a fundamental question that global guidelines, to be genuinely valuable, need to answer: how do scale and service type matter for how platforms are to be regulated? Small, niche platforms have a far different impact than the major behemoths of social media, search, and messaging, while even those behemoths have mechanisms to limit sharing. How should regulatory bodies address these differences in ways that are principled and protective of human rights? Put another way, small platforms – or groups on large platforms – advance not only freedom of expression but also freedom of association. How to address and protect that right in an emergent regulatory environment?

Second, the draft provides limited if any guidance as to the definition of the problem it is meant to address. From a legality perspective (“provided by law”), this is deeply concerning. Early on, the draft emphasizes “content that is illegal under international human rights law and content that risks significant harm to democracy and the enjoyment of human rights.” ¶ 11(a). I blanched when I saw that first category, since generally speaking (with two exceptions) international law does not make content illegal; it provides a framework of guaranteed individual rights (Article 19: to seek, receive and impart information and ideas of all kinds, regardless of frontiers) along with a set of narrow limitations as to when the state may restrict those rights. It is true that Article 20 of the International Covenant on Civil and Political Rights (ICCPR) obligates states to prohibit “propaganda for war” and “advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence”. But if that’s what the draft means to address, why not say so directly? This may seem like an editing issue, but the lack of clarity opens the door to state arguments that categories of content many governments want to limit (e.g., defamation of religion, lèse-majesté, false information, extremism, and so on) are not merely subject to restriction but illegal under international law. This could amount to a major win for governments not, shall we say, entirely enamored of Article 19 of the ICCPR.

Perhaps more serious, however, is the category of content that “risks significant harm to democracy”. Nowhere in the draft is this phrase defined. What is significant harm? For that matter, what is democracy (not a term of art in human rights law)? I don’t want to be misunderstood here: there is no doubt that platforms have enabled the sharing of content directly at odds with democratic governance, such as the kind of widely posted and shared incitement that facilitated the January 6, 2021, attack on the U.S. Capitol. Some have enabled the harassment of journalists and human rights defenders, pillars of democratic governance, worldwide. All that and more is true. But targeting such content requires naming it first, and “significant harm to democracy” does nothing of the sort. Imagine, for instance, an argument that mocking political candidates, or even reporting on their past bad behavior, risks harming democracy by misleading voters about the real character of candidates. That is hardly an unlikely hypothetical; it is the argument one finds in countless jurisdictions. I have to imagine that the drafters of the guidelines themselves do not intend to cover that kind of speciousness, but the draft opens the door wide to arguments entirely at odds with well-established understandings of the foundational purposes of freedom of expression.

Third, there is virtually nothing in the draft guidelines about digital security, privacy, and anonymity. The draft refers to “a safe and secure internet environment for users” (¶ 23) but says nothing about how that environment is to be protected by law. Yet user privacy and security are preconditions of online freedom of expression. Look no further than the recurrent efforts to undermine encryption to see that even democratic states are wobbly on the issue of user security. Guidelines developed to promote “safeguarding freedom of expression and access to information” (according to the sub-heading of the draft’s title) should begin with user privacy and security.

Fourth, the draft introduces contradictions and confusions about systems-level regulation and specific content. Its key provision on this point states, “Regulation should focus mainly on the systems and processes used by platforms, rather than expecting the regulatory system to judge the appropriateness or legality of single pieces of content.” ¶ 18. In principle, this is laudable. In practice, however, I have my doubts that the draft sustains the distinction. For instance, the draft continues, “The regulator will expect digital platforms to adhere to international human rights standards in the way they operate and to be able to demonstrate how they are implementing these standards and other policies contained in their terms of service.” ¶ 19 (emphasis added). How will a platform make such a demonstration without pressure to speak to specific cases? How will a regulator conduct this kind of analysis without examining specific content? Will it be an anonymized analysis of aggregate treatment of content? There appear to be no guardrails to prevent this kind of regulatory creep.

This confusion is acute in ¶ 45, which says that regulatory bodies “should have the power to assess applications or perform inspectorial, investigative, or other compliance functions . . . while moderating illegal content and content that risks significant harm to democracy and the enjoyment of human rights” consistent with Article 19 of the ICCPR. I struggle to understand this language. Later, the draft suggests regulators should have the power to “[s]ummon any digital platform deemed non-compliant with its own policies or failing to protect users” (¶ 46(c)). Perhaps the meaning simply eludes me, but how is this possible if the regulator is not to examine specific content? How can a regulator know whether a company’s system of content moderation is working without evaluating specific content? Or, if a regulatory body is to have “a complaints process that offers users redress” when they are treated unfairly (¶ 46(e)), how does it provide that redress without evaluating the specific content at issue?

Fifth, the draft speaks of “co-regulation” involving state law and regulatory bodies, on the one hand, and “self-governing bodies” that may make and apply rules “sometimes through joint structures or mechanisms”, on the other. ¶ 21. Civil society would evidently provide “public scrutiny” but is not explicitly included in the regulatory framework. This may be the appropriate approach for some countries. But is it appropriate for all? ARTICLE 19 has proposed a multi-stakeholder approach drawn from the press council model, a so-called social media council. Perhaps that model – chartered by domestic law, directly involving civil society as participants rather than outside observers – could prove more consistent with some democratic contexts. At any rate, the guidelines propose one model without grappling with the possibility of others.

Sixth, the guidelines are confused about the tools of technology. They clearly reflect a concern about algorithmic decision-making and recommendation systems, a worthy subject of regulatory consideration. But in a section on “automated” content moderation, the draft calls for platforms to commission “regular external audits of machine learning tools”. ¶ 62. Why focus only on machine learning, one particular class of AI systems? Later on, in the section on gendered disinformation and gender-based violence, the draft focuses on algorithmic amplification of such misogynistic harassment without acknowledging that much of that harassment is human and not algorithmic at all.

Seventh, the draft guidelines address children’s rights by starting from a perspective of harm. ¶ 85. Nowhere does the draft explain how children themselves enjoy the rights to freedom of opinion and expression, to freedom of association, to non-discrimination, and so on. That, however, is the appropriate starting point for any discussion about measures to address risks to children’s mental and physical well-being. Instead, the relevant section begins with reference to children’s “unique stage of development” and “the fact that negative experiences in childhood can result in lifelong or transgenerational consequences”. Id. Those are important considerations, to be sure, but taken out of the context of children’s right to access information, for instance, they add ingredients to the stew of contemporary efforts to censor what children can access (recognizing, of course, that “children” itself is a term that requires evaluating the age of the relevant audience).

I sympathize with UNESCO’s desire to impose some order on the tech reckoning that has been spreading around the world for years. In my view, that order requires UN mechanisms to consider how they can ensure that regulatory efforts promote and protect human rights online. Threats to online rights come from the state, from the private sector, and from individuals, and human rights principles should indeed guide the actions of all of these actors. But it is not for the UN to promote specific regulatory models, much less to develop templates of regulatory processes. Rather, the UN works best when it reasserts foundational principles and then provides mechanisms of review – whether through UN processes like the Universal Periodic Review in the Human Rights Council or the periodic reviews and individual complaints procedures of the human rights treaty bodies. By the same logic, UN principles can be relevant to oversight by regional human rights courts, national human rights institutions, and domestic courts.

Put another way, the better approach, in my view, would be to ask, first, what rights individuals enjoy online; second, what positive obligations states owe to ensure those rights may be exercised; and third, in addressing threats to those rights, how the principles of legality, legitimacy, and necessity and proportionality apply. Much of that work, supported by civil society, has been done by the Human Rights Council and its Special Procedures, the Office of the High Commissioner for Human Rights, and UNESCO itself. Revive that work. Deepen it. And reconsider whether the time is ripe for detailed regulatory guidance.