Intermediary Liability: How to Kill Content on the Internet.
Thanks to statutes like Section 230 of the Communications Decency Act and even
the Digital Millennium Copyright Act, intermediaries in the United States are
at least theoretically shielded from liability for the actions of users on
their systems. As a result, the internet has become a diverse and vibrant
platform for speech of all sorts. Some, however, seek routes around these
limitations on liability. One vector of attack is through the DMCA, but new
tactics are also being tried, including the pursuit of domain‐wide
injunctions, site blocking, and threats to payment providers, as well as
attempts to limit Section 230. Many of these attacks on intermediaries are
launched by entities focused on controlling intellectual property rights. In
this talk we’ll consider those, and also how these tactics are being used by
those with objections to other types of speech, such as adult content,
including “revenge porn” and online advertisements, or content governed by the “Right
to Be Forgotten.” And we’ll look at what can be done to better protect all
online speech from censorship by strengthening intermediaries’ legal and
political positions.
Cathy Gellis, Attorney (moderator): §230 and the DMCA for intermediaries. §230 is very strong for intermediaries; plaintiffs hate that. An intermediary can't be compelled to remove content. But
see Barnes v. Yahoo! (contract claims are allowed). Carveouts for federal crimes and IP. No state-law carveout, though plaintiffs try. The DMCA for ©: much more complicated and
conditional than §230.
Emma Llanso, Director, Free Expression Project, Center for
Democracy & Technology: §230 is incredibly strong legal protection for
intermediaries—really important. Its strength
has drawn the attention of people who’d really like to be able to sue an
intermediary for a harm caused by a third party. Various attempts to amend/work
around. Example: a victim of child trafficking has a judgment against the person who trafficked them, but that person is
judgment-proof. The victim seeks to go after the
website that posted ads related to the trafficking: a very sympathetic plaintiff; law
enforcement learns that §230 bars making the website liable.
53 state and territorial AGs sent a letter to Congress proposing
an amendment that would let state intermediary-liability laws override
§230. It was defeated after public outcry; since
then, would-be imposers of liability have gotten more clever. Big one, especially
this year: looking at the exemption for federal criminal law. Efforts to create new federal criminal laws
with some kind of liability for hosting third-party content, such as the SAVE
Act: Stop Advertising Victims of Exploitation Act. Additional liability beyond existing
trafficking liability—the crime of advertising a person for trafficking
purposes. It passed earlier this year without a formal hearing, and was ultimately added to the
Justice for Victims of Trafficking Act, which got tied up in the Lynch confirmation
fight, so rational debate ended. It's not clear what
"advertising" means; the term is not defined in the statute. The point of the law, according to its
advocates, was to be able to go after platforms that host advertising. But the
person who creates the ad is arguably the advertiser—still unsettled.
Rebecca Tushnet, Professor of Law, Georgetown University Law
Center: Compared to §230, §512 notice and takedown doesn’t make anybody happy. That doesn’t mean it’s good, but it does mean
it’s a compromise that may be extremely hard to change, despite the
unanticipated volume of notices—hundreds of millions overall.
Takedown problems: (1) Algorithmic overreach—recently dealt
with a Harry Potter takedown notice based solely on the fact that the file was
named Harry Potter and the Deathly Hallows; have had similar issues with fan
fiction stories that happened to share a title with a popular song. The bots make insufficient
use of metadata. There's also the problem of recognizing that some online instances are
in fact noninfringing because they're authorized by
the rightsholder—for example, NBC Universal recently took down its Canadian
affiliate's stream of Mr. Robot. (2) Claim overreach—attempts to use the DMCA for
privacy, for trademark/getting competitors' links removed, and for suppressing criticism (claims
made on behalf of the Argentinean gov't; Apple trying to hide its contract
terms)—see Google's Transparency Report and WordPress's hall of shame. Need for a better deterrent to claim arbitrage;
some hope that Lenz will improve
matters, at least for problem (1). Big
differences in behavior of the small number of senders who send millions of
notices—algorithms are the problem—versus the large number of senders who send
a few notices a year—for whom misuse or at best misunderstanding is the
problem. For fascinating empirical data, see Daniel Seng, The State of the Discordant Union: An Empirical Analysis of DMCA Takedown Notices, 18 Va. J.L. & Tech. 369 (2014).
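To make the filename-matching failure described above concrete, here is a minimal, hypothetical sketch—not any actual rightsholder's system—of why a notice bot that matches only on file names flags fan works and authorized streams, and how even the metadata visible to the notifier could rule them out. All names and fields here are invented for illustration.

# Hypothetical sketch of naive filename-only matching versus a check that
# uses the metadata visible to the notifier. All data here is invented.

CLAIMED_TITLES = {"harry potter and the deathly hallows", "mr robot"}

def filename_only_match(filename: str) -> bool:
    """Flag as infringing if a claimed title appears in the file name.
    This is the naive approach behind the overreach described above."""
    name = filename.lower()
    return any(title in name for title in CLAIMED_TITLES)

def metadata_aware_match(filename: str, metadata: dict) -> bool:
    """Same title match, but consult metadata before flagging:
    authorized uploads and different kinds of works are not flagged."""
    if not filename_only_match(filename):
        return False
    if metadata.get("authorized_by_rightsholder"):
        return False  # e.g., an affiliate's licensed stream
    if metadata.get("work_type") == "fan_fiction":
        return False  # shares a name with the book/song; different work
    return True

uploads = [
    ("Harry Potter and the Deathly Hallows.avi", {"work_type": "film"}),
    ("harry potter and the deathly hallows (fanfic).txt",
     {"work_type": "fan_fiction"}),
    ("Mr Robot S01E05 - affiliate stream.mp4",
     {"authorized_by_rightsholder": True}),
]

for fname, meta in uploads:
    print(f"{fname!r}: filename-only={filename_only_match(fname)}, "
          f"metadata-aware={metadata_aware_match(fname, meta)}")

Only the first upload is flagged by both checks; the fan work and the authorized stream are false positives for the filename-only bot.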
Other than judicial decisions interpreting §512, what’s
going on? Experience from PTO/NTIA
DMCA best practices group: weak tea.
Didn’t even go as far as Lenz in
its recommendations for submitters of DMCA notices: “it is a good practice to
take measures that are reasonable under the circumstances (e.g. taking into
account the information visible to the notifier and the apparent volume of
infringement at the location, etc.) to … appropriately consider whether use of
the material identified in the notice in the manner complained of is not
authorized by the copyright owner, its agent or the law.” Didn’t specify much
of anything for senders or recipients of notices b/c the DMCA already gives
plenty of detail about what you’re supposed to do.
Participants insisted on having best practices for
counternotification even though we all understood that counternotices don’t
come from people who read PTO/NTIA best practices. Compare the listed bad
practices for notice senders, which include "Falsely
asserting that the notice submitter has a good faith belief that use of the
material in the manner complained of is not authorized by the copyright owner,
its agent or the law.” Bad practices for counternotice senders include, “Failing to take reasonable efforts to
form a good faith belief that the material was removed or disabled as a result
of a mistake or misidentification of the identified material.” Again, compare Lenz: the courts are doing a better job.
Larger lesson: this was a meaningless exercise, and we still
couldn’t agree on much of anything. Stalemate that will leave current law in
place, making judicial interpretations the most important source of law
unless/until raw political power produces a different result for copyright
owners.
Speaking of which: testimony on §512 before Congress. Content owners have come up with a phrase
that is Frank Luntzian in its brilliance: “notice and takedown” is
insufficient, so we should have “notice and staydown.” Just a little change! Essentially indifferent to the reality that
this is filtering with a rhyme. Attitude
from representatives at the hearings earlier this year was essentially “get the
nerds on it” and “Google can do this with Content ID for video and sound on the
YouTube servers it controls, so it must be possible for everyone to do this
with everything for the entire internet." The risk: it is very difficult to get a
representative to understand something when her campaign donations depend on
not understanding it.
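To see why "notice and staydown" is filtering with a rhyme, here is a minimal sketch, assuming a toy host: once a work has been noticed, every subsequent upload must be screened against a stored fingerprint. The exact-hash fingerprint below is deliberately simplistic—real systems like Content ID use perceptual audio/video fingerprints—but the shape of the obligation is the same: a filter on all uploads.

# Minimal sketch, assuming a hypothetical host under a "staydown"
# obligation: remember every noticed work, screen every future upload.
import hashlib

class StaydownHost:
    def __init__(self):
        self.noticed_fingerprints = set()  # works already taken down

    def fingerprint(self, content: bytes) -> str:
        # Exact content hash: trivially evaded by changing one byte,
        # which is why real staydown implies far costlier fuzzy matching.
        return hashlib.sha256(content).hexdigest()

    def process_takedown(self, content: bytes) -> None:
        """One notice, remembered forever."""
        self.noticed_fingerprints.add(self.fingerprint(content))

    def allow_upload(self, content: bytes) -> bool:
        """Under staydown, EVERY upload is checked: that is a filter,
        not a small tweak to notice-and-takedown."""
        return self.fingerprint(content) not in self.noticed_fingerprints

host = StaydownHost()
work = b"bytes of a work that received one takedown notice"
host.process_takedown(work)
print(host.allow_upload(work))            # False: blocked on re-upload
print(host.allow_upload(work + b"\x00"))  # True: one-byte change evades

The one-byte evasion at the end is the point: making "staydown" actually stick requires fuzzy matching across everything every user uploads—that is, mandatory filtering infrastructure.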
Final random note: design patent—none of the exclusions we’re
used to, including fair use; people are getting design patents on logos and
graphic interfaces, potentially creating huge new sources of infringement
litigation. Newly discovered by plaintiffs’
lawyers and ordinary businesses, which have less interest in balancing free
speech interests than media plaintiffs do.
So, like the ITC, this is an issue for intermediaries to keep an eye on.
Gellis: People want content down from the internet. How will they use/abuse/change/route around
laws to get the content taken down?
What other content is being seen as bad and worthy of
takedown?
Llanso: We are increasingly hearing “Google can do it” from
regulators around the world—drawn from implementation of DMCA notice and
takedown. A lot of policymakers think of notice from private individuals to
intermediaries resulting in content takedowns, without court involvement, as
standard/comfortable. Proposals around “extremist”
content—but one big problem is that people don’t offer much of a definition of
extremist content. Concerning to look at
a category that runs the gamut from political/religious speech to incitement to
commit imminent violence. France: wants
public-private partnerships w/ intermediaries. The UK has an internet referral unit, an
official gov't unit that uses the content policies and flagging mechanisms of major platforms like
YouTube as a way for the gov't to get rid of
content that may or may not be inconsistent w/ the law. No court involved. No obvious avenue for users whose content has
been taken down: no opportunity for appeal or oversight from an independent third party.
(RT: This may be another reason to endorse the algorithmic approach
approved by the court in Lenz: it makes clearer that © notice and takedown is a
poor model for other types of content, which can't be evaluated
algorithmically.)
Gellis: right to be forgotten.
Llanso: Right. Even
if it’s private action, speakers feel censored.
The right to be forgotten has created a circumstance in which search engines
have to respond to assertions from private parties or risk violating
European data protection law—responsive to private parties, not
court orders.
Gellis: DMCA is about not using the courts—lowers transaction
costs but also creates them.
RT: if you get sued as an intermediary, hard to get out on a
motion to dismiss. Means that to get big
you need an agreement with the industry.
Gellis: out of the courts: validity of the claim is never
tested, which means there's vulnerability to abuse. On small intermediaries: not as bullish as RT—many
in the content industry regret allowing Google to grow big. Small intermediaries can't take the risk of
ignoring a takedown notice—it may be crap, but the safer course is just to
delete/censor.
Llanso: threats to credit card companies led them to refuse
to process transactions for Backpage—not just for adult content but for
everything. Backpage sued the sheriff for pressuring the credit card companies, as a
prior restraint on speech. Important case
to watch—a TRO was granted against the sheriff, but
the PI was denied, so the case is being appealed to the 7th Circuit. One of the most concerning elements of the
district court opinion was characterizing the letters as containing threats but also
containing the Sheriff's own 1A-protected advocacy—a stunning result. Adding advocacy shouldn't let an official avoid
liability for censorship—it's a roadmap for any official who wants to threaten.
Gellis: nothing on the internet happens w/o the help of
someone else. Intermediaries are
everywhere—some issues are restricted to a particular sector, but that’s why
people try to recharacterize issues. Protecting
intermediaries is important b/c it’s difficult to get the good stuff if we
pressure intermediaries to take down everything bad.
Q: There's a view of speech as an absolute and the internet as an exceptional
medium. Financial intermediaries are
also intermediaries, but we restrict them offline for a variety of
transactions, including restrictions on the arms trade and other blockades. When this happens electronically, and we can't get
jurisdiction over the people involved, how do we get at those kinds of transactions w/out
targeting the intermediary? What's the
minimal-harm way of targeting the intermediaries?
Llanso: there are laws proportionate to the harm. When we don’t have gov’t acting under a
clearly defined law, with balances worked out, we have a big problem. If the rule is “don’t make the sheriff angry”
and the content isn’t adjudicated unlawful, that’s a big problem.