A timeline of moderators reconstructed for /r/conspiracy using Internet Archive snapshots.

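A reconstruction like the one in the caption can be approximated with the Wayback Machine's public CDX API. The following is a minimal Python sketch, not this project's actual pipeline: it assumes the subreddit's moderator page (reddit.com/r/conspiracy/about/moderators) was captured regularly, and the username-extraction regex is a rough heuristic, since Reddit's page markup has changed across redesigns.

    import json
    import re
    import time
    from urllib.request import urlopen

    # Ask the CDX index for captures of the moderator page: successful
    # captures only, collapsed to roughly one per month (YYYYMM prefix).
    CDX = ("https://web.archive.org/cdx/search/cdx"
           "?url=reddit.com/r/conspiracy/about/moderators"
           "&output=json&fl=timestamp,original"
           "&filter=statuscode:200&collapse=timestamp:6")

    def list_snapshots():
        """Return (timestamp, original_url) pairs for archived captures."""
        rows = json.load(urlopen(CDX))
        return rows[1:]  # the first row is the header ["timestamp", "original"]

    def moderators_at(timestamp, original):
        """Fetch one capture and pull out usernames linked as /user/NAME.
        Reddit's markup has changed over the years, so this regex is a
        rough heuristic rather than a robust parser."""
        url = f"https://web.archive.org/web/{timestamp}/{original}"
        html = urlopen(url).read().decode("utf-8", "replace")
        return sorted(set(re.findall(r"/user/([\w-]+)", html)))

    timeline = []
    for ts, original in list_snapshots():
        timeline.append((ts, moderators_at(ts, original)))
        time.sleep(1)  # be polite to the Archive's servers

Diffing the moderator sets of consecutive entries in the timeline then gives approximate dates at which moderators joined or left, up to the granularity of the archived captures.
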
Overview

Online communities empower their users, letting people from diverse backgrounds communicate with one another, learn new things, and much more. In the past few years, the proliferation of internet access has driven very high penetration of social media platforms like Facebook, not only in developed nations but across the globe. This ever-increasing connectivity means that internet-based platforms have wide-reaching impacts on societies everywhere.

Unfortunately, the same broad reach and deep engagement that make internet communities so empowering also make them vulnerable to abuse and manipulation. Three harmful behaviors, in particular, prevent people from having their voices heard online:

  1. Internet abuse, including harassment and bullying, makes spaces feel unsafe for vulnerable community members and, more generally, contributes to a toxic environment.

  2. Manipulation is the use of online communities to spread mis- or dis-information, or to control topical narratives for malicious ends.

  3. Echo chambers are created when communities lack a diversity of ideas and instead amplify a single rhetoric to unhealthy levels.

All three of these behaviors limit openness in online communities and exclude users, and all are widespread problems.

Many online platforms, including Facebook and Reddit, turn to volunteer moderators to provide scalable governance, with the goal of reducing the prevalence of abuse, manipulation, and echo chambers while maintaining an open and respectful community. Moderators are responsible for setting rules, communicating them to the community, and enforcing punishments against rule breakers. They have enormous leeway in these actions, and in many cases human moderators are assisted by AI tools.

In this project, I seek to support moderators of online communities by building tools to assist with moderation, and to ground these tools in a data-driven understanding of how online moderation affects community outcomes, including abuse, manipulation, and echo chambers. Recently, I've focused on developing scalable measures to quantify these factors.
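Purely as an illustration of what such a measure can look like, and not this project's actual method, here is a sketch of one common proxy for echo-chamber-like behavior: the Shannon entropy of a user's activity across communities. The function name and input format below are hypothetical.

    import math
    from collections import Counter

    def community_entropy(communities):
        """Shannon entropy (bits) of one user's comment distribution over
        communities, given one community name per comment (a hypothetical
        input format). 0.0 means all activity sits in a single community."""
        counts = Counter(communities)
        total = sum(counts.values())
        return sum((n / total) * math.log2(total / n) for n in counts.values())

    # A user active in only one community vs. one spread evenly over three.
    print(community_entropy(["conspiracy"] * 12))                    # 0.0
    print(community_entropy(["conspiracy", "science", "news"] * 4))  # ~1.585, i.e. log2(3)

Measures of this flavor are cheap to compute over millions of users, which is what makes them candidates for moderation tooling at scale.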

This work is supervised by Allen School Professor Tim Althoff.