Harvard Law Review, Forthcoming
Abstract
Private online platforms play an increasingly essential role in free speech and participation in democratic culture. But while it might appear that any Internet user can publish freely and instantly online, many platforms actively curate the content posted by their users. How and why these platforms operate to moderate speech remains largely opaque.
This Article provides the first analysis of what these platforms are actually doing to moderate online speech under a regulatory and First Amendment framework. Drawing on original interviews, archived materials, and leaked documents, this Article not only describes how three major online platforms—Facebook, Twitter, and YouTube—moderate content, but also situates their moderation systems within a broader discussion of online governance and the evolution of free expression values in the private sphere. It reveals that private content moderation systems curate user content with an eye to First Amendment norms, corporate responsibility, and, at their core, the economic necessity of creating an environment that reflects the expectations of their users. To accomplish this, platforms have developed detailed systems that resemble the American legal system, with regularly revised rules, trained human decision-making, and reliance on a system of external influence.
This Article argues that to best understand online speech, we must abandon traditional doctrinal and regulatory analogies and understand these private content platforms as systems of governance operating outside the boundaries of the First Amendment. These platforms shape and allow participation in our new digital and democratic culture. They are the New Governors of online speech.