The Dangerous Inconsistencies of Digital Platform Policies

Why did platforms take action when they did? The timing behind new content moderation policies tells the rest of the world everything it needs to know about platform priorities.

Last year, more people started to ask the right questions of platforms. Beyond the questions of who should set the terms of online speech, and how, lay another important question: when should those terms be set? This deceptively simple issue of timing has laid bare the inconsistencies in platforms' policies and the harm that those inconsistencies can wreak. We have seen this play out in three dimensions over the past year: political violence, voting and vaccines.

As early as 2016, it was clear that Donald Trump would seek to undermine election results if he did not win. The problem was merely delayed by his victory in November 2016. In summer 2020, a high-level bipartisan group gamed out how Trump might try to undermine the peaceful transition of power if he lost in 2020; in many of the group's scenarios, social media played a crucial role in spreading lies. Despite growing evidence and predictions of such likely consequences, platforms took little to no action, not even to enforce some of their own policies against incitement to violence. (The police, too, disturbingly, seem to have overlooked or fundamentally misjudged online material.)

To continue reading this Centre for International Governance Innovation (CIGI) report, go to:
