Monday, February 24, 2020
DOJ 230 workshop part 4
DOJ Section 230 Roundtable, afternoon
Chatham House Rules
Session 1: Content Moderation, Free Speech, and Conduct
Beyond Speech
How do platforms deal w/defamation? Standard practice is to
review the complaint, the content; compare to TOS/code of conduct. Removal if warranted,
contact poster if warranted and sanction at some level if warranted. If no
violation, it can be difficult. Service provider can rarely determine falsity.
Hassell v. Bird is part of a trend of expanding 230 immunities
outside recognizable form. Involved a statement adjudged to be
false/defamatory. Court ordered platform to take it down; platform declined.
Platforms just say they don’t care.
Distributor liability would be consistent w/text of 230.
Zeran ignored 300 years of common law: if a platform is made aware, there is a responsibility to do something.
That’s not what distributor liability was. There is not 300
years of distributor liability; 300 years ago people were still getting
arrested for lese-majeste. There is no
common law of internet liability. 230 did cut off common law development of
secondary liability but it is possible that the First Amendment requires
something very much like 230.
There are three levels in the Restatement: publisher,
distributor, and provider of mechanisms of speech, such as photocopier
manufacturer and telephone company. That third level is not liable at all.
There are cases on the latter. The Q is whether internet companies are
publishers, distributors, or providers of mechanisms of distribution. Consider the city: if someone is libeling me
on the sidewalk, I can't sue the city. These tend to be entities that are by law prohibited from discriminating—broadcasters when they sell space to political
candidates. [Notably, not photocopier providers—at least I can’t imagine that
anyone thinks that their liability for defamation copied on their machines
turns on whether they only sell to politically correct copiers.] The real innovation of 230 was not to abandon the two-level structure but to say that service providers get protection even though they do have the power to decide what's allowed. Maybe we should abandon this category, or reserve it for
service providers that don’t discriminate, but traditional rules also had
absolute liability, not just notice and takedown. [When communication is not
private one-to-one but one-to-many, nobody wants platforms to not discriminate
against bad content, because that makes them unusable. So the common carrier
neutrality requirement might not be a good fit, though of course that doesn’t
necessarily mean that immunity would be the constitutional rule in the absence of
230.]
The libelous/not libelous problem is very difficult. A lot
of libel judgments in the record are provably fraudulently obtained, not
counting the ones that are outright forgeries. Default judgments, stipulated
judgments—no reason to think they’re trustworthy. Deeper problem: I run a blog,
someone comments negatively on Scientology, which complains. If you impose
notice and takedown, I have to take it down b/c I’m in no position to judge.
Judgment in hand=much smaller set w/its own problems; w/ notice regime, there
will be default takedowns. Maybe that’s fine, but that’s the downside.
No one is arguing against freedom of speech, but there’s a
reality that some platforms with recommendation engines/algorithms have more
power than a newspaper over what we will see, amplifying their content. So we
should figure out a category for a digital curator that classifies companies
that use behavioral data to curate and amplify content, and then the
responsibility is not just for allowing the content but for whether the algorithm amplified it. You'll have to decide the thresholds, but
there is a missed conversation in acting like all platforms are the same.
230 cut off the development of state law that could have produced rules fitted to something that is not analogous to a photocopier. These are
highly curated, controlled environments they are creating. 230 represents a
tradeoff, and they should give something back in public responsibility. That
was the deal in common carriage. In return, they got immunity from libelous
content.
230 clearly meant to reject notice & takedown b/c of
moderator’s dilemma. Most of these cases would fail; moderator can’t tell what
is true. Anti-SLAPP laws are also applicable. Defamation can’t be extrapolated to things
like CSAM, which is illegal under all circumstances.
If they’d fail on the merits, why have 230? It bolsters the
case b/c it shows the real risk of death by 10,000 duck bites. There may be
businesses w/o the wherewithal to deal with a number of frivolous lawsuits. 230
has been useful for getting companies out of litigation, not out of liability;
removing burdens from court system.
DMCA has notice and takedown. Not just sole discretion of
moderator, right?
It is often abused to take down obviously noninfringing
material. Even if the person responds, you can still have the content down for
2 weeks, and that’s very handy in a political system. People use the system for
non © purposes. 512(f) has been interpreted by the courts in ways that make it
extremely difficult to enforce.
Notice & takedown is good for © but overwhelming which
is why the content owners want staydown. © is always federal so there’s less of
a mismatch. DMCA isn’t about illegal content (CSAM), whereas © infringement is
illegal distribution, not illegal original content.
Where it gets tricky is often where the use involves fair
use b/c it can be difficult to build filters/automatic process to distinguish
lawful/unlawful, which matters for our discussion b/c much of the content isn’t
going to be easy to figure out.
Many, many studies and anecdotal accounts of bad takedown
notices. And the content companies are constantly complaining about the DMCA. The
best regime is the one you’re not operating under.
Notion that 230 didn’t contemplate curation is flatly wrong.
Libraries are curators; Prodigy, the defendant in Stratton Oakmont, was a curator. 230 was intended
to incentivize curation. Ultimately,
what is demoting vitriolic content online to make a community less toxic?
That’s curation.
There is a fourth Restatement model: § 581 on distributors; it was almost made up by the Restatement reporters. There's almost no case law
support for the distributor liability; Dobbs hornbook agrees that 1A would not
tolerate distributor liability. It is just not the case that there were a bunch
of distributor liability cases. But there is a property owner/chattel owner
provision of liability: if you own a bulletin board or something like that and you're given notice, you may be liable. That seems far closer than distributor liability, but the
legal authority for that is also extraordinarily weak. Even if as a matter of
principle there ought to be such liability, we don’t have 100 years of it. If
we did, it would be unlikely to survive NYT v. Sullivan. Cutting the other direction: to the degree there is amplification or republication at common law, even of purely third-party content, there is extraordinarily strong precedent for liability for republication regardless of whether you know the content is defamatory. No particular
reason to think 1A would cut into the republication rule. Defamation cases that
go over the top [in imposing liability] involve republication. That’s just a
mistake by the courts.
A lot of harm at issue is not defamation. Illicit drugs.
What if $ goes through the payment systems they host? If they know that animal torture rings, pedophilia groups, Satanic groups are hosting video—these are
not hypotheticals.
CDA was about pornography, not just defamation. Indecent
content is very difficult to regulate, b/c it is constitutionally protected for
adults to access. 230 means that many platforms block this constitutionally
protected speech b/c otherwise their platforms would be unusable. 230 allows
platforms to do what gov’t couldn’t.
Should platforms be encouraged to be politically neutral in
content moderation? Is it a danger we should be concerned about as more
political speech occurs in private forums?
Anecdotally, conservatives think that Silicon Valley is
biased against them. If you made it actionable, it would just hide better.
[Leftists say the same thing, BTW.]
Invites us to have a panel where 18 engineers talk about
what law professors should do better. We haven’t had any numbers here.
Discussions of how companies make decisions are completely detached from how
big companies make decisions. People care deeply, but all moderation is about
knobs. You can invest time & effort, but when you moderate more you make more false positives, and when you moderate less you make more false negatives. Never
sat in any meeting where people said “we’re not legally liable so who
cares?” Political bias: rules v.
enforcement. The rules are generally public. Example: Twitter has a rule you
can’t misgender somebody. There is nothing hidden there. Then there’s bias in
enforcement; companies are very aware of the issue; it’s much larger outside of
the US b/c companies have to hire people with enough English & education to
work at a US tech company, and that tends to be a nonrandom subset of the
population. So that tends to be from
groups that may be biased against other subgroups in that country. There are
some tech/review attempted solutions to this but anecdotes aren’t how any of
this works. Millions and millions of decisions are being made at scale. There’s
a valid transparency argument here.
It’s a false flag to say that conservatives feel this way so
it’s true. Did we take down more ads on one side than the other? We don’t know
which side violated policies more, so that counting won’t work. Need criteria
for what is an ad/what is abusive, and we lack such criteria. This political
conversation takes away from the debate we should be having. [Also: If you are sufficiently transparent to
show what’s going on in detail that might satisfy critics, then you get a lot
of complaints about how you’re sharing bad content, as Lumen has been accused
of doing, and you may also provide a roadmap for bad actors.]
Transparency principles: disclose the numbers; explain to
users what the rules are and which one they violated; provide an opportunity
for appeal. Many companies didn’t have appeal options. We’ve seen improvements
on that.
Everything is biased, but transparency can increase understanding and trust. Build up to a full-blown standards process where all
stakeholders can be in the room, big and small companies, different users. Not
all use cases are the same. Also, AI is not one tech but a variety of enabling
tech. Its complexity is one of the things that standard developers are
grappling with. Starting with bias, risk, predictability, governance.
It’s a fact that Google held a meeting w/sr executives after
the 2016 election saying it was bad, and conservative engineers were fired for
their views. They should have a transparency report about this.
Of course there’s bias. The policies are deliberately
complex. Not just Google. Executives admit they hire liberal staff. [There is
deep confusion here between the moderators and the executives.] Twitter is the
worst actor. Despite all that, companies should solve the problem themselves.
These hate speech policies are garbage. EU hate speech policy would be much
worse. We have a 1A here that Europe doesn’t believe in. You could ban the Bible
under FB’s code, and Bible-related posts have been removed. Tens of millions of
Americans are sure there’s a problem.
The problem is at scale: every single group in the world
thinks they’re the one that’s targeted. Gay people, Bernie bros, conservatives.
The problem is a massive amount of innumeracy and non-quantitative thinking in this debate. 1000s of examples of bad
decisions exist even if you’re at nine-nines accuracy. Not a single one of
Google’s employees who was marching about the travel ban makes a single content
moderation decision or oversees anyone who makes a content moderation decision.
It is obvious that everyone in the world will not agree what speech moderation
should be, and they will all think they’re the victims.
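[To make the scale point concrete, a quick back-of-the-envelope sketch in Python. The accuracy levels and decision volumes below are my own illustrative assumptions, not anyone's reported figures; the point is just that even extremely high accuracy leaves a large absolute number of mistakes at platform scale.]

# Illustrative arithmetic only: expected number of wrong moderation calls
# at an assumed accuracy level and daily decision volume.
for accuracy, label in [(0.999, "three nines"), (0.999999, "six nines"), (0.999999999, "nine nines")]:
    for decisions_per_day in (1_000_000, 1_000_000_000, 1_000_000_000_000):
        expected_errors = decisions_per_day * (1 - accuracy)
        print(f"{label}: {decisions_per_day:,} decisions/day -> ~{expected_errors:,.0f} errors/day")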
There are plenty o’ conservatives at Facebook.
Should there be any affirmative obligation on transparency
in 230? New problems appear that you didn’t anticipate: people with white
supremacist usernames. You could say
that adding a new rule is inconsistent/not transparent, but when you didn’t
have the problem previously you have to develop some response.
Common law development is the norm w/in platforms. As FB
scaled, they went from one page of instructions ("if it makes you feel bad, take it down") to rules that could be administered based on content. People didn't
want to see hate speech. This wasn’t conservative bias, but civil society
groups & sometimes informed by European rules. There was no reptilian brain behind it all.
To claim that things didn’t change after 2016 is a fantasy:
big tech responded b/c Donald Trump won an election based on big tech.
No, that’s when you started to care about it.
Transparency: what is it really? Google’s numbers don’t
really tell you how the system works, they just provide numbers of requests.
Need to talk about granularity as well as scale. GDPR is another consideration.
Europeans won’t allow disclosure of personally identifiable information. That
means some gov’t here will have to extract that data as part of transparency.
Speaking of bias, consider the possible bias of gov’t in
determining whether the platforms are biased. You can’t tell a bookstore which
books to stock, and you can’t go to the NYT or Fox and require them to disclose
their editorial policies in the name of appropriate transparency. Assumption
that this is a matter for gov’t regulation instead of letting the market decide
is a mistake, at least in the US where the 1A constrains.
Misgendering activist was kicked off Twitter for tweets she
made before the anti-misgendering policy, and that’s the core of her legal
claim. 230(c)(2)(A) doesn’t say “hate speech” [though it does say “otherwise
objectionable” and it also says “harassing”]. You can’t have it both ways in
not being responsible for 3d party speech and not being responsible for your
own moderation decisions.
Some courts have said that spam is covered; other courts have wanted something more like what's listed in (c)(2). This isn't a 230 issue
at all: the courts are recognizing that platforms themselves have 1A rights and
that they cannot violate 1A rights as they are not gov’t actors. Nothing to do
w/230.
As for mandatory transparency, many companies do have law
enforcement transparency reports and are expanding their efforts. Reporting
numbers may be a pretty dry read, but if you dig into the help pages of any
number of sites, you can get a better idea of what the rules actually mean.
Here is where small businesses would need a carveout; when you’ve built computer
systems to do one thing, it can be very hard to convert them to produce something else (the data you'd need for a transparency report). There's been a transition period
for companies to revamp their systems in a way that’s useful for transparency
reporting.
Are courts overreading 230 and treating it as an anti-SLAPP statute at the MTD stage? It is an affirmative defense, and the Q is whether the elements are present on the face of the pleading. Usually there isn't much question of whether a platform is an ISP,
whether the content originated w/a third party, etc. Discovery, where it
occurred, has focused on whether there was third-party content, and that has correctly limited discovery.
1A right to discriminate in policy and enforcement?
Platforms, when acting in recommendation capacity like Google search/FB
stories, get to decide what to include and what not to include. Doesn’t
completely answer what happens solely in platform capacity: YT in what it
chooses to host. One way of thinking about it: first scenario is clearly Miami
Herald v. Tornillo; for the second, there’s a plausible argument that content
or viewpoint neutrality rules could be imposed under Pruneyard/Turner v. FCC on
what they host. The traditional model did say essentially that common carriers
got total immunity, while distributors with power to choose got notice and
takedown. There’s room for argument that 1A immunity requires even-handedness.
Not positive it’s constitutional, but not positive it’s not either.
Evenhandedness is impossible to define. What violates the
policy is the key. Let’s talk about real
victims who were connected to abuse via platforms. Conversation about political
bias is a sideshow that undermines search for help for victims.
Session 2: Addressing Illicit Activity and Incentivizing
Good Samaritans Online
Hypo: user posts a pornographic photo of a young woman.
Individual claiming it’s her asserts it was posted w/out her consent. Platform
doesn’t respond for four weeks. Alleged subject sues for damages she suffered
as a result of the photo. Suppose: Anonymous user posts it; alleged subject
claims it was posted when she was 13.
Argument that 230 still doesn’t cover it: there’s an
exception for crimes, including CSAM. If you look at the provisions that are
covered, they include 2255 & 2252(a), both of which have civil liability.
Argument that 230 does cover it: Doe v. Bates: this case has
already been litigated. The statute is
very clear about being about “criminal” law, not about civil penalties that
might be part of it.
Should 230 immunize this content against civil
claims? The platforms are horrified by the material, didn’t know it was there, and
took action when they knew. If you have a rule that you’ll be liable in these
circumstances, you’ll have platforms stick their heads in the sand. Given
potential criminal exposure, this is not a real life hypothetical.
What’s the current incentive to address this? Criminal
responsibility; notification obligation. And being human beings/adults in the
rooms. Criminal incentive is very strong.
Even FOSTA/SESTA wasn't about creating federal law; they took down Backpage w/o it. It was about creating state AG authority/allowing survivors to sue.
What would FB do in this situation? FB unencrypted: every photo is scanned against PhotoDNA. Assume it's not a known image. All public photos are run through ML that looks for nudity. If classified as such, it looks for CSAM. The photo would
be queued for special content review; trained reviewer would classify it by
what’s happening and what age the person is.
Depending on the classification—if it was a new, high-level classification—they would look for more content from the same user and directly call FBI/law enforcement.
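[A minimal sketch of what a review flow like the one described might look like, under my own assumptions; every name, threshold, and classifier here is a hypothetical stand-in, not FB's actual system or API.]

# Hypothetical stand-ins throughout; thresholds and classifiers are assumed, not real.
from dataclasses import dataclass

KNOWN_HASHES = {"known-bad-hash"}   # stand-in for a PhotoDNA-style hash database
NUDITY_THRESHOLD = 0.8
CSAM_THRESHOLD = 0.5

@dataclass
class Photo:
    perceptual_hash: str
    is_public: bool

def nudity_score(photo: Photo) -> float:
    return 0.9    # stand-in for the ML nudity classifier

def csam_score(photo: Photo) -> float:
    return 0.6    # stand-in for the second-stage CSAM classifier

def handle_upload(photo: Photo, user: str) -> str:
    if photo.perceptual_hash in KNOWN_HASHES:
        return "remove and report known match"          # known image: report and remove
    if photo.is_public and nudity_score(photo) > NUDITY_THRESHOLD:
        if csam_score(photo) > CSAM_THRESHOLD:
            # A trained reviewer classifies what's happening and apparent age; a new,
            # high-severity classification triggers a user sweep and a call to law enforcement.
            return "queue for special content review"
        return "remove (public nudity not allowed)"
    return "no action"

print(handle_upload(Photo("some-other-hash", is_public=True), user="u123"))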
14 year olds sending their own nude selfies violate child
porn laws.
Sextortion victims are a big part of the CSAM population. These cases are thought of as less egregious b/c it's not hands-on abuse, but
suicide risk is almost doubled. In terms of 14 year olds breaking the law, feds
wouldn’t look at charging them for that.
But: State law enforcement has charged 14 year olds,
which is relevant to whether we ought to have more state lawsuits against
people that the states blame for bad conduct.
FB doesn’t allow public nude photos. If not marked as child,
would just be deleted. If reported to FB as nonconsensual, it's deleted, and FB would keep the hash to make sure it doesn't reappear, using a better matching algorithm than PhotoDNA. If the victim knew the ex had
the photo, she can submit it to FB and that goes to a content moderator that
can prevent it from being uploaded. That’s a controversial project: “FB wants
your nudes.”
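[And a rough sketch of the hash-and-block idea; PhotoDNA itself is proprietary, so this assumes a generic 64-bit perceptual hash and a small Hamming-distance threshold to catch re-encoded near-duplicates. All numbers are illustrative.]

# Assumed: images are reduced to 64-bit perceptual hashes by some upstream step.
BLOCKED_HASHES: set[int] = set()
MAX_HAMMING_DISTANCE = 5    # illustrative tolerance for re-encodes/minor edits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def block(image_hash: int) -> None:
    # Remember the hash of a removed image so near-duplicates can be refused later.
    BLOCKED_HASHES.add(image_hash)

def is_blocked(image_hash: int) -> bool:
    return any(hamming(image_hash, h) <= MAX_HAMMING_DISTANCE for h in BLOCKED_HASHES)

block(0b1011011011000011)
print(is_blocked(0b1011011011000111))   # near-duplicate: True
print(is_blocked(0b0100100100111100))   # unrelated hash: False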
Regardless of company size, we care greatly about CSAM and
NCII (nonconsensual intimate images). Everyone views responding to complaints
as the baseline. Companies take idea of being in violation of criminal law very
seriously. Penalties for failing to report went up significantly in 2008; criminal
piece of this is so important: any state law consistent w/this section (CSAM)
could be enforced.
FB is the company that does the best at finding the worst,
but that’s very unusual. A child couldn’t anticipate that with every platform.
No prosecutions on failure to report. The fine is $150,000 which isn’t
significant for a tech company.
Not every tech company is funded like WeWork was. In fact, almost no tech companies are, and the ones that are funded that way are definitely taking action. People who aren't deterred by
$150,000 and criminal liability are rare, and where you could deter them more
is by enforcement not by increasing the penalty.
Suppose someone sues the platform for damages as a result of
availability: should provisions be sensitive for different kinds of harm? If
someone is threatened and then raped or murdered, that’s different than having
personal information exposed. We might want to focus liability on the type of
harm proved to have flowed from this.
Identity of the user Q: if there were a criminal
prosecution, then the company would have to turn over the information, and also
if there were a civil prosecution you can get discovery. Dendrite/similar
standards can be used to override anonymity & get the info.
Platforms send responses to preservation letters telling sender
they have no obligation to preserve evidence for anybody outside of law enforcement.
They fight the subpoenas even though offenders are usually judgment proof. Costs
$250,000 to fight subpoenas. FB automatically deletes stuff after a few months.
Platforms ought to perform an evaluation of the
validity of the subpoena.
Euro data law has made platforms turn over data on 30/60-day windows; that's global.
Deleting content is very serious; many platforms immediately
delete the material reported to them. Helps criminals cover their own tracks.
There needs to be some type of regulation that when toxic content is reported
there’s some curative time you have to save it but not leave it up.
Reporting to NCMEC is a precedent for that. Should be used for war crimes, not just
domestic.
The process of giving us hypotheticals in isolation is a
problem. Each example ignores the problem of scale: you get a bunch of these
each day. And there are problems of error and abuse. E.g., French authorities notified the
Internet Archive that they were hosting terrorist content and had 2 hours to
take it down.
Hypo: terrorist uses online platform to recruit. Algorithm
recommends the group to new individuals, who join in planning acts of terror.
Platform gets paid for ads. Survivors of individuals killed in terror act sue
under Anti-Terrorism Act.
Should look at how banks are regulated for terror and crime
content. Money laundering. Financial services industry argued that they were
just P2P platforms that couldn’t be regulated for storing money for illicit
actors, but the US government imposed regulations and now they have to monitor
their own platforms for money laundering. Terror/organized crime aren’t
supposed to use banking services. You agree that your activity will be
monitored, and if bank suspects you’re engaged in illicit activity, a suspicious
activity report will be filed. That’s not working particularly efficiently. How
can we look at systems like NCMEC or other reporting mechanisms to improve upon
them? [This seems like a basic problem that money is not obviously and always illicit,
like CSAM. We’ve just been hearing about NCMEC’s challenges so it seems weird
to look at it for a model—the system that’s best is always the system you’re
not using!] Many companies that produce
chemicals and electronics have to control their supply chains to avoid diversion to drug cartels or into IEDs. Why does the tech industry get freedom
from liability for the harms their products cause? There are 86 designated
terror groups, and we find their activity on major social media platforms. Not fan pages: Hezbollah has an official website and official FB and Twitter feeds. They
do fundraising and recruit.
Interagency colleagues are thoughtful about this—NSC and
other alphabet agencies. They have ideas about financial services, money
laundering, and that would be a productive conversation. But at the end of the
day, there is still a First Amendment, and that’s your challenge. EC is de
facto setting global standards for large platforms. The large platforms would
like something like GDPR because those costs are sunk. EC doesn’t have a 1A
hanging over them; EC is already looking at this at the member level. W/o the
Brits, it could happen within a few years.
On GDPR as a potential model to get big change out of
platforms: It is in fact impossible to comply w/GDPR. The reason it kind of works is that European
regulators sometimes let you fudge if they think you’re acting in good faith,
though that is not without its own political bias. The kind of strict
compliance regularly required by both US regulators and civil litigation is not
compatible with the kind of rules that you can borrow from GDPR type
regimes. Civil liability and especially
class actions are not a significant part of the European model. Having one national regulator to answer to is
very different than having to fend off lawsuits any time anyone thinks you
didn’t act fast enough.
Financial services know more about their customers than internet services do: physical branches, physical IDs, etc. Scale: JPMorgan Chase has 62 million users and makes $80/user; FB has billions and makes $8/user. If you want people
to be charged $80/year, you can apply money laundering rules to FB and FB will
have branches.
As if we know what is a terrorist organization: the platform
knows there is a page. But there are anti-abortion groups, anti-police groups, pro-Palestinian groups, environmental groups w/a radical fringe. Somebody calls them a
terrorist group. The initial hypo says that they use the service to recruit,
radicalize, and promote: the 1A protects a vast range of promoting violence.
Holder v. Humanitarian Law Project: Restricting interaction w/designated groups
is lawful only b/c people can set up independent promotion. Liability for this would
require platforms to remove content that anyone links to violence.
What if they knew it was a terrorist organization? Knowingly
facilitates, solicits, profits from.
How does Google “know”?
This scenario happens all the time. FB takes down a bunch,
misses a bunch. Consider how many women on the planet are named Isis. Terrorism
is one of the hardest things; FB needs its own list of terrorist organizations,
b/c some countries use their lists to suppress racial minorities. A lot of speech is currently overcensored: lots of the victims are Muslims connected to political Islam who are not terrorists, and not enough people who matter care.
What if they knew someone was convicted of stalking on a
dating app? Assume the matching service
knew that but failed to warn. Would 230 immunize that? [Again we have just
decided to assume away the hardest part of the hypo: knowledge, as opposed to
accusation.]
There are a number of cases that speak to when an
interactive service is facilitating/participating in development, like
Roommates and Accusearch. You can’t state that b/c the service is being used,
it’s facilitating. It has to elicit the content in some specific way. If the
site is promoting content likely to be of interest to the user, you can’t jump
to that. Having seen OECD process on terrorism: when companies are saying we
need to be reporting on terrorist content, whose list should we use? There is
no consensus on who should be on the list. We can’t just call something a
terrorist organization w/o speaking to definitions and authority.
Agencies can issue guidance if laws are unclear. Can be
illustrative; could be multistakeholder process.
We’re hearing GDPR a lot with things like ICANN. We have to
make a decision about whether we will kowtow to the EU. If Russians/Chinese had
done GDPR, we'd be raising holy hell. Cost per customer is misleading because the customers are actually the advertisers [though the advertisers are not the only
people providing content/in need of screening, which is the source of the
problem!]. Google and FB are bigger than banks and making a lot of money. Conduct has to be where we start, not
content/bias. Knowing facilitation/profiting is easier, as w/FOSTA/SESTA.
Didn’t have the ability to pierce the veil w/Backpage b/c of 230. The reason Backpage
went down was a Senate investigation and then the DOJ, but state AGs couldn’t
do it and survivors couldn’t—stopped at the courthouse steps.
Gov’t should have a higher burden of proof for identifying
terrorists, but the blanket immunity is a problem. Tech platforms know that if
all else fails they can fall back on immunity. Hard cases are always extremely
complicated. Blanket immunity can’t be sustainable.
Shady company, MyEx, charged people to take down pictures
that had been posted of them, and sometimes they wouldn’t even take them down.
FTC shut down the site. There are ways to deal w/some of these issues that
don’t involve rebalancing 230. Law enforcement is super important here, and
resources for that are really important.
Are we concerned w/230 invoked against FTC?
State of Nevada was on MyEx case as well. This was §5
deception, not about third party content. We can make those cases. If the
companies are doing something, we can go after that. 230 doesn’t help them if
they’re actually doing the stuff. MyEx
didn’t raise 230, but it wouldn’t have helped.
See also: Accusearch.
Civil suits are by nature anecdotal and not scalable. Every
individual should have the right to bring these cases. [But if you don’t have
strict liability, then most of them should lose.] Extreme cases w/nothing to do
with speech are getting thrown out of court—P is suing for an offender’s
conduct, like creating fake profiles and sending people to someone else’s home.
Grindr is an important case b/c the product itself facilitated the harm. It
wasn’t encouraging violence. It was the actual mode of the violence. The words
the offender used in his DMs weren’t important to the cause of action. Cts have
interpreted 230 so extravagantly. Companies don’t have to build safer products.
One victim was murdered by a first date from Tinder; the same man had raped another person earlier that week, and yet he still got to use that dating app. How do we
stay safe against these platforms?
Proposed changes to 230 that add words to limit it more to
speech. E.g., add “in any action arising out of the publication of content
provided by that information content provider.”
230 has been interpreted to short-circuit all this analysis. It shouldn't
apply to commercial transactions: a dog leash that blinds someone. If these
actions happened in physical space, they wouldn’t be treated as speech. [In cases
on false advertising, who is responsible for the false advertising is very much
an issue; the retailer is not, according to a wave of recent cases. I actually
think the case for Amazon’s product liability is fairly strong, but 230 hasn’t
precluded it.]
The proposed change doesn’t do useful work. Courts have to
have a sense of what information doesn't constitute speech. There is no well-defined principle; Sorrell v. IMS makes clear that data is speech. It's not going to do what you want it to.
Tech always wins; we need some unpredictability.
Unpredictability is not an asset in these situations; find a
rule that predictably yields the result you want. Immunity for Amazon: the
reason isn’t that leashes are treated as info but not as speech; the reason is
that Amazon is a place for people to publish certain information and Amazon
can’t be held liable; the ad for the leash is info and speech. There are things
you can do, but these suggestions aren’t those things. If you require speech to be an element of the
tort for 230 to apply, then people will just assert IIED instead of defamation
[though Hustler says you can’t do that for 1A purposes]. Would leave disclosure
of private facts immune (including revenge porn) but not IIED. If you have in mind some particular kinds of
harm to regulate against, like revenge porn, you can do that, but if you’re
trying to do that based on elements of the offense, the proposal won’t work
very well.
Reasonability could be done a lot of different ways: safe
harbors, best practices.
There have only been 2 federal criminal cases: Google and
Backpage. It’s not enough resources to go after the wide range of criminal
activity. Frustrated by discussion that hasn’t looked at major criminal
organizations that have infested major platforms. They should have responsibility
to take reasonable steps to keep the content off their platforms. Particularly
critical for fentanyl sales, but there are other issues ranging from wildlife
crime to gun sales to antiquities and artifacts in conflict zones. Illicit
networks are weaponizing platforms: hosting chatrooms where organized crime
takes place.
Lack of data about use of 230 is important. There are a
number of 230 cases involving discovery. There are a number of cases rigorously
applying the exceptions. We’re aware of a decent number of cases where the
conduct of the platform was more than sufficient for the court to find
liability. Reviewed 500 most cited 230 cases, and before we rush to change 230
it would be helpful to have a full understanding of how it’s applied.
Is blanket immunity still justified now that sites are using
E2E encryption and thus can’t comply with process to hand stuff over?
EARN IT: Every word other than “that” will be subject to
litigation. Even if you assigned it to a commission, that will be litigated.
Full employment for lawyers but not great for clients. As you think of
proposals, don’t think about FB. Think about the Internet Archive, or
Wikimedia, or newspaper comments section. The individual user is who 230
protects. Encryption: vital to many good things; there’s no way you can build
E2E encryption that works only for certain, good people. It works for everyone
or it doesn’t work at all. People would like that not to be true.
EARN IT: substantive Q of tying immunity to encryption;
procedural Q of how it’s done. Procedural answer is straightforward: EARN IT
Act is terrible way to do it. (1) Can’t give rulemaking authority to DOJ—there
are many equities. (2) This is a high level policy tradeoff that should
properly be done by the legislature. Like nuclear power: big risks, big
benefits. (3) Bad politics. 230 is already so complicated. Adding the only
thing that is more complicated than 230 is bad. (4) Kind of trolling/encouraging
pointless disputes: creates a committee, then gives authority to AG.
Possible for a site to ignore the AG as long as it’s not
reckless; do have to take some liability if people start distributing child
porn on their network if they’re reckless about it. [But how do you decide
whether enabling encryption is reckless?] Just stopping looking for child porn
is the same thing as encrypting all the traffic. [That seems like the wrong causation to me,
which I suppose makes me a supporter of the doctrine of double effect.]
Companies are capturing all the benefit of encryption and offloading costs onto
victims of child porn. If E2E is such a great idea, the benefits o/weigh the costs, and we should put the costs on the same actors, the guys selling the encryption. [Wait, that's not recklessness. That's a description of strict
liability/liability for ultrahazardous activities, unless you have
predetermined that using encryption is reckless. If the standard is truly recklessness, then
if the benefits outweighed the costs, it shouldn’t be found to be reckless.]
4th Amendment impact: platforms know they should never take
direction from law enforcement. If we act as agents of gov’t, we violate 4th A
rights. If you legally have to look for child porn and have to report it, hard
to argue you're not an agent of the state doing a warrantless search on every piece of content from the service. Zuckerberg would love this law b/c it would solve his TikTok and Snapchat threats. E2E encryption is the best way to turn a massive data breach that could destroy the company—or the nation—into a bad weekend. One of the only advantages we have over China is trust, and protecting private info is how we get trust. Destroy encryption=long-term harm to
competition and national security. FB has stopped terrorist attacks; handed info
to law enforcement. If you want platforms to do it, has to be voluntary and not
involuntary.
4th amendment is a gray area: have to lean into the tough
area to protect children. 2 issues on encryption: DOJ’s issue is law
enforcement access. NCMEC’s issue is detecting the abuse and being able to make
the report. Encryption works to block both. If you don’t see the child, law
enforcement action is irrelevant. Has heard from technologists that E2E is
wonderful; got to find a way to blend the power of that tool w/child protection
measures. 12 million reports is too many reports to lose.
People are working on options other than law enforcement back
doors.
DOJ 230 workshop part 3
Panel 3: Imagining the Alternative
The implications for competition, investment, and speech of
Section 230 and proposed changes.
Moderator: Ryan Shores, Associate Deputy Attorney General
Professor Eric Goldman, Santa Clara University: (c)(1) means
no liability for 3d party content. Difference between 1st/3d party content isn’t
always clear. (2) protects good faith filtering and (2)(b) also helps providers
of filters. Exclusions: IP, federal criminal law, federal privacy, FOSTA sex
trafficking. No prerequisites for immunity as w/DMCA, no scienter required for
(1). Not claim-specific unless excepted. Common-law exceptions: (1) Roommates:
when sites encourage/require provision of illegal content. (2) Failure to warn?
(3) Promissory estoppel. (4) Anticompetitive animus.
Neil Chilson, Senior Research Fellow, Charles Koch Institute:
Taxonomy of possible regimes: what type of bad thing are we concerned about? Is
it illegal already or should it be? Who should be held liable? Person doing,
person providing tools? In what situations: strict, participation in creation,
knowledge, unreasonability? Can you get immunity back by taking action, e.g. by
takedown after notice? Concerns about incentives created. How do we protect
speech/public participation? Other countries don’t have 1A. Over-removal: ideal
outcome is sorting legal/illegal, but it’s hard to align incentives to do that.
Who makes the decision about legit speech remaining? Can companies decide for
themselves to remove legal speech? Does our approach disadvantage specific
business models? What effects on legal certainty are there?
Possible legislative alternatives: (1) exemptions approach,
like PLAN Act focusing on homesharing sites, (2) bargaining chip proposals:
keep 230 if you do X; Hawley’s proposal for politically neutral content
moderation/EARN IT for commission to define X.
David Chavern, President, News Media Alliance: 230 was
designed to nurture new industry, became distortion: punishes folks who are
willing to take responsibility for their content. News publishers’
responsibility for content wasn’t hindrance to our growth; we were pretty good
at it [but see: Alabama in the civil rights era]. 230 means our content is subject to extreme
editorial control by major platform cos. Google News: someone has decided to
surface different content for you than for me. Their business value is
algorithmic judgments; they should be responsible for their judgments. They also
make decisions about reach. A small slander w/no impact could reach 10 people or 10 million; they should be responsible for that. Anonymity: a design factor that
prevents going after a speaker. If you’re a journalist, part of your job is
being abused online w/no redress, esp. if you’re a female journalist. Need incentives for quality, investment in
quality content. Zuckerberg says FB is b/t a newspaper and a telecom pipe—but they
can’t be neither. Not impressed by the billions of pieces of content:
they built it, that’s their problem.
Julie Samuels, Executive Director, Tech:NYC: As we think
about landscape, think through lens of smaller cos. Need to incentivize competition;
230 is crucial for that. The printing press allowed one-to-many, and we're in another fundamental shift moment to many-to-many. Worried that we think we can put
genie back in bottle. It’s hard if certain industries don’t work like they used
to but that can be ok.
Goldman: Elevate existing benefits, even if there are also
costs. It is balancing; easy to overlook benefits. Millennials don’t know what
they have: don’t take for granted what the internet provides. Benefits haven’t
changed; we didn't know what tech could do when 230 was enacted, and we still don't know what it can do now. 230 preserves the freedom to see where we can go.
Solves moderator’s dilemma, that if you try and fail you’ll be liable for
having tried. 230 still lowers barriers to entry. Baseline is not “can we
eliminate all online harms.” Internet as mirror: people are awful to each other
all the time. Might be able to find ways to make us kinder: Nextdoor is trying
algorithms to suggest kindness.
Chilson: Conservative principle of individual responsibility,
not tool responsibility: the normal way we do things in the US. Tort law
generally favors punishing actors over intermediaries—authors, not bookstores—social
media users, not platforms. Unusual to hold one person responsible for acts of
others; need good reason to do that. 230 doesn’t immunize produced content, as
newspapers are liable for their own content. Google is liable for its own
content; they just do different things. Services connect people on
unprecedented scale. Participation in group for people parenting a child with
clubfoot: b/c FB didn’t have to vet any post, that group exists and is greatly
beneficial to participants. Can’t build a business model around that alone, but
can build FB.
Pam Dixon, Executive Director, World Privacy Forum: Promoting
voluntary consensus standards. Just finished a multiyear study on FERPA, has
lessons learned. Striking that this area suffers from (1) lack of systems
thinking and (2) lack of research on fact patterns. Systems thinking: people
called in w/privacy harms in about 3-4 categories including (1) victims of
domestic violence/rape, fleeing/trying to stay alive; (2) people with genetic-based illness. It is rare to find a situation with one platform/issue; need
system analysis: public records, health records, educational records, other
platforms. Lack of fact patterning is a problem. OECD principles on AI: we all
learned that we were correct in our own way. Disagreement is ok but can we find
consensus? Individuals and organizations can lose trust in systems, platforms
can lose trust in gov't. In our interest to solve trust problems. Voluntary consensus standards as a solution: not self-regulation. What if a more formal
process allowed all stakeholders, not just the big guys, to find consensus on a
discrete, observable, solvable problem?
Ability exists under OMB rules. FDA has recognized it for medical
devices.
Q: some proposals have carveouts for small & medium
entities. OK?
Samuels: size carveouts are worrisome. Small isn’t
automatically good. Swiss cheese approach. Small startups have big legal costs for
handling all kinds of issues; 230 is good at the pleading stage by making MTDs relatively cheap; otherwise survival becomes difficult. Compare to the patent
troll problem: cottage industry of suing SMEs.
Chavern: we’re the only business mentioned in the 1A.
Incremental approach is justified. A few platforms matter more to society. Not a
lot of search, or social media, startups. Great scale = great responsibility.
Not irrational to start there.
Chilson: threshold concern: current antitrust investigation
is about search/social media killzone. If you have a threshold at which content
moderation becomes required, then the only safe way to cross that threshold
will be to get acquired. That’s not good. Big players are younger than many in
this room; they can come and go if competitive environment doesn’t cement their
market power into place.
Dixon: carveouts have unintended consequences. Right now no
unitary privacy test done by carveouts: should do that. Voluntary standards can
ID all stakeholders & discuss better solutions. Standard would be there if
you want to adopt it, not if you don’t.
Goldman: there are small companies in top 15 services, like
Craigslist, Wikipedia, Reddit. Some large cos have small UGC presence. Easy to
wrongly trip threshold. Concept great, translating hard.
Q: F/x on speech?
Chavern: Many complaints about speech we don’t like, not all
of it illegal. Freedom of speech isn’t freedom of reach. No inherent problem
with asking companies to be accountable about the act of deciding what to
disseminate. They’re deciding what you get to see, should be accountable for
that like a publisher. Weird that they get immunity for commercial decisions
that help their product. Unsustainable.
Samuels: That looks like a fundamentally different internet
experience. [Consider if you, an individual, had to have your posts go through
FB’s libel review before they’d post.] Social networks would be total chaos
without moderation etc. Real impact on users. Social movements and connections
happen now in incredible ways. Need to talk about end user experience.
Goldman: 230 can be the solution of how we interact as
humans; enables development of better tools, services taking action on Gab.
Users on Gab, however, did have chilled conversations as a result. This is not
free. 230 enables diversity of editorial practices, not all like traditional
media. Finding communities that understand one another.
Dixon: Points to need for additional research and fact
patterning. Predictive speech is a coming issue.
DOJ 230 workshop part 2
Panel 2: Addressing Illicit Activity Online
Whether Section 230 encourages or discourages platforms to
address online harms, such as child exploitation, revenge porn, and terrorism,
and its impact on law enforcement.
Moderator: The Honorable Beth A. Williams, Assistant
Attorney General Office of Legal Policy
Yiota Souras, Senior Vice President and General Counsel,
National Center for Missing and Exploited Children: One main program is the CyberTipline, a reporting mechanism for the public/ISPs to report suspected child sexual exploitation. We analyze reports and make them available to law enforcement. Receive reports
including CSE, trafficking, enticement, molestation. Largest category: CSAM.
Tremendous growth in reports: 2019, just under 17 million, w/over 69 million
files including video and images. Continues to grow. Many preverbal children as
well as younger teens.
Professor Mary Anne Franks, University of Miami: Cyber Civil
Rights Initiative: aimed at protecting vulnerable populations, online
exploitation, harm to women/sexual minorities/racial minorities. Civil rights
relate to tech. On nonconsensual pornography, active in (1) legislation where
needed, (2) working with tech cos on policies, (3) general social awareness.
For all tech’s good, have to be attentive to social media amplifying abuse and
civil rights violations. Bad actors, bystanders, accomplices, those who profit
from bads and hide under shield. Model statute issue: faced pushback from tech
& civil liberties groups. Many states take this issue seriously, thanks to brave
victims; rapid development in state law. Now up to 46 states & DC with
restrictions. Not solving problem: in many states the law is too narrow. Many
states require personal intent to harm victim, which is not how the internet
works. Average revenge porn site owner doesn’t intend to harm any given person,
just doesn't care/is interested in profits/voyeurism. 79% of cases don't involve personal intent to harm.
230 is the other big problem: trumps state criminal law.
Only way to maneuver around is federal criminal law on nonconsensual porn. We’ve
introduced a bill, not voted on yet.
Q. re reposting as harm to victims.
Franks: That’s one of the most severe aspects of attack: infinite
replicability. Much is initially obtained nonconsensually, via assault or secret
recording, or distribution without consent. It’s a harm each time. What happens
when a search on one's name reveals all the porn. 230 isn't fulfilling its goals
for good samaritans. 230 doesn’t distinguish between helpers, bystanders, and
thieves. Intermediaries solicit, encourage, amplify violations. Also domestic
terrorism, misogyny, disinformation: harm to democracy/erosion of shared
responsibility for terrible actions.
Q: FOSTA/SESTA tried to address this for CSE. Impact?
Franks: we don’t see impact b/c we deal with adult victims,
not trafficking but privacy. Piecemeal tinkering on one bad form isn't the best way to reform; it makes the statute unwieldy and sets up a hierarchy of harms. Sex trafficking isn't
the only bad.
Souras: we’ve seen Backpage go down, overlapped w/enactment
of FOSTA/SESTA. Immense disruption in market for child sex trafficking, which
continues. Feds did move against Backpage; no single co has risen up to fill
that lucrative gap. We’d love to see more federal action but there is
deterrence.
The Honorable Doug Peterson, Attorney General of Nebraska:
Trafficking online: federal prosecutors were very active in Nebraska; developed
a state law. No revenge porn prosecutions yet but can see issues with drug
sales and fraud, limited by 230. Nat’l Ass’n of AGs proposal: allow states and
territories to prosecute, just like feds: simple solution. Acceleration of
online crimes is significant, especially for young people targeted by apps.
Feds require a certain threshold; need to get aiders/abettors.
Q: challenges to law enforcement?
Peterson: some platforms: good cooperation. Murder case on
Tinder, which was v. helpful. Google & others have informed us and allowed
prosecution, esp. child porn. Enabled more thorough investigation.
Matt Schruers, President, Computer & Communications
Industry Association: What platforms are doing: over 100,000 people focused on
trust and safety. Large services have elaborate & sophisticated tech tools,
frequently made available to others. Industry participates with NCMEC and other private-sector initiatives. 10s of millions of reports to law enforcement. More investment
can and should be done—not industry alone. Many cases industry refers to law
enforcement don’t result in action: fewer than 1500 cases.
Q: why do companies report?
Schruers: no one wants service to be used for illegal
activity, regardless of law. There are bad actors, but a number of cases
illustrate that services that solicit/participate in unlawful content lack 230 protection.
Q: what about bad samaritans who don’t report their
knowledge: should industry set standards?
Schruers: There’s a role for best practices, much of which
is going on now. Don’t generalize a few bad actors.
Q: does 230 mean companies aren’t obligated to remove
harmful content?
Souras: most companies have a separate reporting obligation
for CSAM. But co can choose to moderate or not; protected if they moderate or
they can moderate sporadically. Incentive promise has become aspirational.
There are cos that are partners, do tremendous work, but others turn the other
way recklessly.
Q: when did industry recognize existing problem and what did
it do?
Professor Kate Klonick, St. John’s University: doesn’t represent
any co. Her work doesn’t focus predominantly on illegal content but on harmful/violation
of community standards. There’s a huge difference in top 3 cos and many sites
discussed today. Different incentives to keep up/take down. FB etc. seek to
make platforms what people want to see over breakfast. Many incentives to
remove bad content—economic harms from bad media, users, advertisers who don’t
want ads to run against CSAM or revenge porn. Techlash in which it’s easy to
gang up on platforms. Since 2008 FB has been very robust on systems &
processes to avoid these. Not all tech/platforms are the same.
Peterson: AG of Pa had Tree of Life mass shooting; D was
using Gab before he struck. Looked at Gab's engagement, but PayPal and GoDaddy
reacted quickly and industry response was so quick there was nothing to go
after.
Schruers: 230 protects those decisions by service providers.
Undermine that=no incentive to cut off.
Franks: distinguish b/t (c)(1) and (c)(2) [of course if you
only had (2) then any failure is held against you if you were correct once
before]. No incentive to act like good samaritans. They only grew a conscience
after public pressure in response to victims. Could have been avoided in first
place if design had been less negligent. Why should any of us be at the mercy
of corporations to see whether firearms are sold to a mass shooter? (c)(1)
doesn't do anything to encourage cos to do better. Google is not a clean, well-lit place, nor is Twitter, if you've been attacked. Some people have always had
privacy, free speech, and ability to make money. But civil rights is about who’s
been left out. Descriptively not true that internet is by and large a good
place.
Q: Klonick says economic incentives align with moderation
for some. What to do about other companies where there’s a market for revenge porn
and CSAM?
Klonick: agree w/Shield Act: there are things to be done
with regulation and companies. This is a norm-setting period: what to make of
what’s happening. Tech moves forward and our expectations change again. Concern
over acting quickly; hard to know ramifications.
Q: does 230 address safety?
Schruers: these trust and safety programs are not new. More
can & should be done. Prepared to engage w/ law enforcement; predate recent
bad press, part of doing business. There are a few bad actors, not entitled to
230, which creates exactly the right incentives by allowing policing w/o fear of
liability. (c)(1) does create issues when content is not taken down, but if it
were gone, there’d be nothing but takedowns, suppressing marginal voices and
unpopular views. We see this in other jurisdictions; no protection for lawful
but unpopular viewpoints. Requires balancing; there will be missed calls.
Q: what does more can & should be done mean?
Schruers: Asymmetry between reports & prosecutions; new
tools to be shared. Engaging w/IGOs around the world, OECD cooperation to
measure and respond to problems.
Q: CSAM reports grew a lot last year. How is there still so
much?
Souras: there is tremendous work being done by largest
companies, typically the best screeners & reporters. Once we drop off top
4-6 companies, there are 1000s of platforms around the world—chat, filesharing.
One problem: there is no level set. Moderation is helpful but completely
voluntary. Many choose not to screen. Larger companies also inconsistent over
time/across platforms/lack transparency.
When we talk about 100,000 duck bites, there’s a harmed person behind
every one of those cases even if also a business cost.
Q: Is automation/AI the answer? Small business burdens?
Souras: We have supported tests of AI/ML. We are far away
from that eliminating the proliferation.
Q: why so far away? Zuckerberg says 5-10 years.
Franks: there will always be promises around the corner.
Human judgment is required. Have to stop illusion of control from tech tools.
Problems are structural/design problems. Whether cos recognized the problem 10
years ago or now, this is the world 230 built. Do we think we’re living in best
possible world? Only people who aren’t sent death/rape threats can speak freely
because laws don’t stop threats and abuse from happening. Imagine any other
industry killing people w/toxic products getting away w/it and promising to fix
it later. FB Live was used to livestream murders and rapes. Zuckerberg didn't
think it would be misused. That’s unacceptable as an answer. Industry has been
treated like gun industry—immune from all harm caused. How long will we allow
this? Don’t look to tech for how serious the problem is. Industry keeps
promising tools but law is about changing human behavior for good. We’ve seen
that status quo has failed.
Klonick: The internet is everything that makes you mad about
humanity. Zuckerberg didn’t murder or rape anyone. He created transparency so
now we see how terrible we all are and now you want tech cos to clean it up for
you. Tech cos don't make murder a product; they surface action that has already
taken place.
Schruers: Role of tech: sometimes held out as perfectible, but not a cure-all for humans. Journey, not a destination; ML/AI is being
deployed as we speak. They have false positives and false negatives. This
requires both tech and people.
Peterson: talk is cheap. Deeds are precious. Mississippi AG’s
concerns about prescription drugs, for which he sent Google CIDs, were rejected
and Google went immediately to 230. Message to AGs: you won't see behind our
walls. Tired of good intentions; would prefer cooperation.
Q: carveouts for federal prosecution?
Peterson: we work w/DOJ a lot; complement each other. We can
deal with smaller operations where DOJ may not have bandwidth. [Smaller
operations … like Google?] Request to
add states/territories to exclusion is important b/c a lot of these are small
operators. [There’s a lot of slippage here: is there a website that is just one
guy trafficking that isn’t also a content provider?]
Franks: No one is saying Zuckerberg is responsible for murder,
but there is accomplice/collective liability. [So FB is responsible for
murder?] Intermediaries aren’t directly causing, but promoting, facilitating,
and profiting from it. Collective responsibility: it takes a village to harass,
cause a mass shooting, use revenge porn. No need for complete difference from
real world rules.
Q: Encryption and CSAM: even if services don’t want it, they
can’t see it.
Schruers: Volume of reports shows that’s not the case. These
aren't the only threats: beyond problematic content, fraud, crime, and foreign adversaries mean that other tech tools are required, one of which is encryption. Safe communications protect user info: the 82d Airborne in Iran is widely using the E2E app Signal for secure communications because overseas communication networks could be penetrated and transmissions b/t gov't devices aren't secure.
Encryption has a variety of lawful purposes: protestors, jurisdictions
w/problems w/rule of law. Balancing needs to be done but encryption is a
critical tool.
Q: FB Messenger could hide millions of reports.
Souras: E2E is necessary for some things but there has to be
a balance. 17 million reports: if we were in E2E environment for Messenger we’d
lose 12 million reports—children raped, abused, enticed undetected. There has
to be a compromise w/encryption rollout, or we lose 12 million children. [Each
report apparently reflects a different child. It is clearly correct to say that
encryption can be used for bad things as well as good. But the whole day I
never heard anyone explain what the balance would be if we have to balance: do
we only allow people we trust to use encryption? How does that work, especially
given what we know about how trust can be abused? Do we only allow financial
services to use encryption? How does that work? I don’t know whether encryption
does more harm than good or how you’d even weigh the bads against the goods.
But “there must be a balance” is not a plan.]
Klonick: PhotoDNA worked for a while; deplatforming means
that groups move and get smaller and narrower. Encryption does allow that. Autocrats
have learned to use platforms for surveillance and harm, and E2E helps with
that too. We need to think about full ramifications.
Q: should 230 be amended?
Klonick: works as intended. Was not just for startups: was
explicitly for telecoms, libraries, public schools. Encryption was also contemplated: 1996 was mid-Crypto Wars I. Lots of sources exist outside of
encryption. These are critical tools for other equally serious threats. Mistake
to amend.
Peterson: Our proposal is simple: give us ability to support
criminal laws.
Schruers: 230 doesn't prevent law enforcement action by states against wrongdoers; it prevents action against ISPs. If they're direct actors, states can go after
them too. Fundamentally interstate commerce protection: services should be
dealt w/at federal level. If answer is resources, provide more federal
resources.
Peterson: let us go after bad actors aiding/abetting criminal
acts to clean up industry instead of waiting for industry to clean up itself.
DOJ 230 workshop
Section 230 – Nurturing Innovation or Fostering
Unaccountability? DOJ Workshop
These are copied from my handwritten notes, so will likely
be terser than usual.
Introduction of the Attorney General
The Honorable Christopher Wray, Director, Federal Bureau of
Investigation
Tech is critical for law enforcement. Tech facilitates speech
& enriches lives & poses serious dangers. As our use increases, so does
criminals’. Extremists, drugs, child solicitation. Like much infrastructure,
the internet is largely in private hands, leaving vital public safety to private actors. Will they guard elections against foreign influence? [I'm
not optimistic.] Will they identify child victims? Can have entrepreneurial
internet and safety.
Welcome
The Honorable William P. Barr, Attorney General
DOJ’s interest came out of review of market leading online
platforms. Antitrust is critical, but not all concerns raised fall within
antitrust. Need for enforcement to keep up with changing tech. Internet changed
since 1996, when immunity was seen as nurturing nascent tech. Not underdog
upstarts any more; 230’s immunity may no longer be necessary in current form.
Platform size has left consumers with fewer options: relevant for safety and
for those whose speech has been banned [because the platforms deem it unsafe].
Big platforms often monetize through targeted ads, creating financial incentives
for distribution rather than for what’s best for users. 230 immunity is also
implicated by concentration. Substance has also changed. Platforms have
sophisticated algorithms, moderation. Blurs line between hosting and promoting.
No one—including drafters—could have imagined, and courts have stretched 230 beyond
its intent and purpose, beyond defamation to product sales to terrorism to
child exploitation, even when sites solicited illegal content, shared in its
proceeds, or helped perpetrators hide.
Also matters that the rest of the CDA was struck down: unbalanced regime
of immunity without corresponding protection for minors on the internet. Not here to advocate a position, just
concerned and looking to discuss. [Said w/a straight face, if you’re
wondering.]
(1) Civil tort law can be important to law enforcement,
which is necessarily limited. Civil liability produces industry wide pressure
and incentives. Congress, in Antiterrorism Act, provided for civil redress on
top of criminal. Judicial construction diminished the reach of this tool. (2) Broad
immunity is a challenge for FBI in civil enforcement that doesn't raise the same concerns as mass tort liability. Questionable whether 230 should apply to the federal gov't [civilly]. (3) Lawless spaces online: concerned that services can block access to law enforcement and prevent victims from civil recovery, with no legal recourse. Purposely blind themselves and law enforcement to illegal conduct=no incentives for safety for children. Goal of firms is profit, goal of gov't is to protect society. Free market is good for prices, but gov't must act for good of society at large. We must shape incentives for companies to create a safer environment. Question whether incentives need to be recalibrated, though we must recognize 230's benefits too.
Panel 1: Litigating Section 230
The history, evolution, and current application of Section
230 in private litigation.
Moderator: Claire McCusker Murray, Principal Deputy
Associate Attorney General
Q: History?
Professor Jeff Kosseff, United States Naval Academy: Disclaimer:
views are his own. Misinformation in debate over lack of factual record.
Development out of bookstore cases prosecuted for distributing obscene
material. SCt said that ordinance can’t be strict liability, but didn’t clearly
establish what the scienter standard could be. Reason to know standard existed
in lower courts. Worked for 30 years or so until early online services.
Compuserve found not liable because did little monitoring; Prodigy was found
liable because it moderated other content. Perverse incentive not to moderate;
concern that children would access porn.
Early on it wasn’t clear whether distributor liability would still be
available after 230 or whether distributor liability was a special flavor of
publisher liability.
Patrick Carome, Partner, WilmerHale: Zeran was a garden variety
230 case, but it was the first. Zeran was the subject of a cruel hoax. Zeran's
theory: negligence/his communications put AOL on notice. Ruling: distributor
liability is a subset of publisher liability. Absent 230, 1A would be the main
defense. Platforms would still probably win most cases. Smith v. California:
free of liability absent specific knowledge of content, which would create
strong incentive to avoid becoming aware of problems. W/o 230 platforms would
be discouraged from self-moderation and they’d respond to heckler’s veto; would
not have successful, vibrant internet. Would discourage new entrants; need it
for new companies to get off ground.
Professor Benjamin Zipursky, Fordham University School of Law:
Zeran itself ok, subsequent decisions too far. American system: normally dealing
with state tort law, not just defamation, before we go to 230/1A. Common law of
torts, not just negligence, distinguishes bringing about harm from not stopping others from harming. Misfeasance/nonfeasance distinction. But-for causation is
not enough. For defamation, publication is normally an act. NYT prints copies.
Failing to force person to leave party before he commits slander is not
slander. Failing to throw out copies of the NYT is not defamation.
But there are exceptions: schools, landlords, mall owners
have been held liable for nonfeasance. Far less clear that common law of libel
has those exceptions the way general negligence does, and, if it does, not clear that they survived NYT v. Sullivan. There
are a few cases/it’s a teeny part of the law. Owner of wall (bathroom stall) on
which defamatory message is placed may have duty to remove it. No court willing
to say that a wire carrier like AT&T can be treated as publisher, even with
notice. Not inconsistent with Kosseff’s account, but different.
In 90s, scholars began to speculate re: internet. Tort
scholars/cts were skeptical of the inaction/action distinction and interested
in extending liability to deep pockets. Unsurprising to see expansion in
liability; even dicta in Compuserve said online libraries might be liable with
notice. Prodigy drew on these theories of negligence to find duty to act; one
who’s undertaken to protect has such a duty because it is then not just nonfeasance.
Internet industry sensibly went to DC for help so they could continue to
screen.
Punchline: state legislatures across the country faced an
analogous problem with negligence for decades. Misfeasance/nonfeasance
distinction tells people that strangers have no duty to rescue. But if you
undertake to stop and then things go badly, law imposes liability. Every state
legislature has rejected those incentives by creating Good Samaritan laws. CDA 230 is also a Good Samaritan law.
[Zipursky’s account helped me see something that was
previously not as evident to me: The Good Samaritan-relevant behavior of a platform
is meaningfully different from the targets of those laws about physical injury
liability, because it is general rather than specific. Based on the Yahoo case,
we know that making a specific promise to a user is still enforceable despite
230; the argument for negligence/design liability was not “you stopped to help
me and then hurt me,” but “you stopped to help others and not me, proving that you
also should have stopped to help me”/ “you were capable of ordering your activities
so that you could have stopped to help me but you didn’t.” Good Samaritan
protection wasn’t necessary to protect helpful passersby from the latter
scenarios because passersby didn’t encounter so many of those situations as to
form a pattern, and victims just wouldn’t have had access to that information
about prior behavior/policies around rescue, even if it existed. In this
context, Good Samaritan and product design considerations are not
distinguishable.]
(c)(2) isn’t actually bothering most people [just you wait];
(c)(1) does. Problem is that there was no baseline for liability for platforms,
no clear rule about what happens if you own the virtual wall. Implications: (1) Zeran is correctly decided.
(2) This isn’t really an immunity. (3) If a platform actually says it likes a
comment, that’s an affirmative act to project something and there should be a
distinction. The rejection of active/passive was a mistake. [Which means that having something in search
results at all should lead to liability?]
(4) This was mostly about defamation, not clear how rest of common law should
be applied/what state tort law could do: 230 cut off development of common law.
Carrie Goldberg, Owner, C. A. Goldberg, PLLC: Current scope limitless. Zeran interpreted 230 extravagantly—it has eaten tort law. Case she brought
against Grindr, man victimized by ex’s impersonation—thousands of men sent to
his home/job because of Grindr. Flagged the account for Grindr 50 times.
Services just aren’t moderating—they see 230 as a pass to take no action. Also
goes past publication. We sued for injunction/product liability; if they couldn’t
stop an abusive user from using the app for meetings that use geolocation, then
it’s a dangerous product. Foreseeable that product would be used by predators. Grindr
said it didn’t have tech to exclude users. Big issue: judge plays computer
scientist on MTD.
Annie McAdams, Founder, Annie McAdams PC: Lead counsel in
cases in multiple states on product liability claims. Our cases have horrible
facts. Got involved in sex trafficking investigation. Tech plays a role: meet
trafficker on social media, was sold on website, sometimes even on social
media. “Good Samaritan” sites process their credit cards, help them reach out.
Sued FB, IG; still pending in Harris County. Sued FB in another state court.
Still fighting about Zeran. FB doesn’t
want to talk about FOSTA/SESTA. Law has been pulled away from defamation using
language from a few cases to support theories about "publisher." Knowingly facilitating/refusing to take down
harassing content. Waiting on Ct of Appeals in Texas; Tex SCt ruled in their
favor about staying the case. Courts are embracing our interpretation of Zeran.
Salesforce case in Texas was consolidated in California; in process of appealing
in California.
If Congress wanted immunity, could have said torts generally,
not publisher, which is from defamation law not from Good Samaritan law.
Jane Doe v. Mailchimp: pending in Atlanta federal court. We
were excited to see DOJ seize Backpage but another US company assisted a
Backpage clone in Amsterdam.
Carome: Complaint on expansion beyond defamation is mistaken:
Congress intended breadth. Didn’t say defamation; wrote specific exceptions
about IP etc that wouldn’t have been necessary if it had been a defamation law.
Needs to be broad to avoid heckler’s veto/deterrent to responsible self-regulation.
Problem here is extraordinary volume of content. Kozinski talked about saving platform
from 10,000 duck bites; almost all these cases would fail under normal law.
Terrorism Act cases for example: no causation, actually decided on that ground
and not on 230. Victims of terrorism are
victims, but not victims of platforms.
Not speaking for clients, but sees immense efforts to deal with
problematic content. Google has over 10,000 employees. FB is moderating even more but will always be imperfect b/c the volume is far more than a firehose. Need
incentives and policies that leave space for responsible moderation and not
destruction by duck bites. 230 does make it easy to win cases that would
ultimately be won, but only more expensively.
230 puts focus on wrongdoers in Goldberg’s case: the ex is the person
who needs to be jailed.
Kosseff: based on research with members, staffers, industry,
civil liberties groups: they knew it was going to be broad. No evidence it was limited
to defamation. 2d case argued was over a child porn video marketed on AOL. Some
of this discussion: “platforms” is often shorthand for YT, FB, Twitter, but
many other platforms are smaller and differently moderated. Changes are easier
for big companies to comply with; they can influence legislation so that (only)
they can comply.
Zipursky: Even though publisher or speaker suggests basic
concern with libel, agrees with K that it’s not realistic to understand 230 as
purely about defamation. Compromise? Our tort law generally doesn’t want to
impose huge liability on those who could do more to protect but don’t, even on
big companies. But not willing to throw up hands at outliers—something to protect
against physical injury. [But who, and how? Hindsight is always 20-20 but most
of the people who sound bad online are false positives. It’s easy to say “stop this one guy from creating
an account” but you can’t do that without filtering all accounts.]
Q: what changes do you see in tech and how does that change
statutory terms?
McAdams: broad statements about impossibility of moderation,
10,000 duck bites—there’s no data supporting this not paid for by big tech. Who
should be responsible for public health crisis? Traffickers and johns can be
sent to jail, but what about companies that knowingly benefit from this
behavior? May not need much legislative change given her cases. [Big lawyer
energy! Clearly a very effective trial lawyer; I mean that completely sincerely
while disagreeing vigorously with her factual claims about the ease of moderation/costs
of litigation for small platforms and her substantive arguments.]
Goldberg: Criminal justice system is a monopoly. It’s tort
that empowers individuals to get justice for harm caused. When platform
facilitates 1200 men to come & harass and platform does nothing, that’s an
access to justice issue. Not about speech, but about conduct. It’s gone too
far. Need injunctive relief for emergencies. Limit 230 to publication torts
like obscenity and defamation. Needs to
be affirmative defense. Plaintiffs need to be able to sue when companies violate
their own TOS. Grindr's TOS said it could exclude users, but it didn't have the tech. Exception for federal crimes is a
misnomer: these companies don’t get criminally prosecuted.
Carome: 230 isn’t just for big tech. 1000s of websites
couldn’t exist. If you want to lock in incumbents, strip 230 away. What’s allowed
on street corners is everything 1A allows: a lot of awful stuff. Platforms
screen a lot of that out. 230 provides freedom to do that.
Zipursky: caution required. Don’t go too crazy about
liability. Don’t abandon possibility of better middle path.
Kosseff: 230 was for user empowerment, market based
decisions about moderation. Is that working? If not, what is the alternative?
Too much, too little moderation: how do we get consensus? Is there a better
system?
Wednesday, February 19, 2020
They chose unwisely: court blows another hole in Rogers by refusing to say that explicit means explicit
Chooseco LLC v. Netflix, Inc., No. 2:19-cv-08 (D. Vt. Feb.
11, 2020)
Explicit doesn’t mean explicit in yet another sign of the
pressure the Rogers test is under.
Chooseco sued Netflix for infringement (etc.) of its rights in Choose
Your Own Adventure in the dialogue (!!) of its film Black Mirror: Bandersnatch.
Chooseco’s registration covers various types of media
including books and movies. Netflix’s Bandersnatch “is an interactive film that
employs a branching narrative technique allowing its viewers to make choices
that affect the ‘plot and ending of the film.’” You know there’s a problem when
the opinion says “[t]he pivotal scene at issue in this litigation occurs near
the beginning of the film.” The main character
is trying to develop his own videogame based on a book also called Bandersnatch.
His father remarks that Jerome F. Davies, the author of the fictitious book in
the film, must not be a very good writer because the main character, Butler, keeps "flicking
backwards and forwards.” The character responds: “No, it’s a ‘Choose Your Own
Adventure’ book. You decide what your character does.” “Of note, the subtitles
for the film couch the phrase in quotation marks and capitalize the first
letter of each word"; the subtitles were allegedly provided by Netflix.
The complaint alleged that Netflix promoted Bandersnatch with
a similar trade dress as that used by CHOOSE YOUR OWN ADVENTURE books in
multiple marketing campaigns. Chooseco is claiming the “rounded double frame”
as a trade dress. (Its exemplar seems to have a problem in that most of those
look like foreign, not US versions, on which you couldn’t base a US trademark
claim, but good news for Chooseco: the court doesn’t care.)
Thus, the complaint alleges, Netflix created a website for
Tuckersoft, the fictional videogame company where the main character developed
his videogame, displaying multiple fictional videogame covers that have a
“double rounded border element,” a few of which also appear in the film itself.
Netflix also allegedly used images of the videogame covers
while promoting Bandersnatch in the United Kingdom, and used the cover for the
Bandersnatch videogame as one of a few thumbnails for the film on its website.
Chooseco argued that Bandersnatch wasn’t a purely artistic
work, but was also a data collecting device for Netflix, and that “Netflix may
have sold product placement opportunities as a form of advertisement, which
would also suggest the film is not purely artistic.” This argument, at least,
fails. You get to sell art for money and it’s still art. Furthermore, the use had artistic relevance. “Choose
Your Own Adventure” had artistic relevance “because it connects the narrative
techniques used by the book, the videogame, and the film itself.” It was also
relevant because the viewer’s control over the protagonist “parallel[ed] the
ways technology controls modern day life,” so the reference “anchors the
fractalized interactive narrative structure that comprises the film’s
overarching theme.” And further, “the mental imagery associated with the book
series promotes the retro, 1980s aesthetic Bandersnatch seeks to elicit.” Chooseco
suggested alternative phrases that Netflix could have used, but that’s not the
right analysis.
So, was the use explicitly misleading? The court proceeds to
reinterpret "explicitly" to mean not explicitly, quoting subsequent cases that don't apply Rogers and that say that the relevant question is whether the
use “‘induces members of the public to believe [the work] was prepared or
otherwise authorized’ by the plaintiff.” Louis Vuitton, 868 F. Supp. 2d at 179
(quoting Twin Peaks Prods., Inc. v. Publ'ns Int'l Ltd., 996 F.2d 1366, 1379 (2d
Cir. 1993)) (a title v. title and thus a non-Rogers case, because in the
Second Circuit Rogers doesn’t apply to title v. title claims; the court
also quotes Cliffs Notes, Inc. v. Bantam Doubleday Dell Publ'g Group, Inc.,
886 F.2d 490, 495 (2d Cir. 1989), another non-Rogers title v. title case).
Then the court says that likely confusion must be “particularly compelling” to outweigh the First Amendment interests at stake, and that “the deception or confusion must be relatively obvious and express, not subtle or implied” (quoting McCarthy, and then the odious Gordon v. Drape Creative, Inc., 909 F.3d 257 (9th Cir. 2018)). The court acknowledges that, “[n]ot surprisingly, in most cases in which a disputed mark was used in the content rather than the title of an expressive work . . . the results favored the alleged infringer, on the basis that the use was not explicitly misleading.” Michael A. Rosenhouse, Annotation, Protection of Artistic Expression from Lanham Act Claims Under Rogers v.
Grimaldi, 875 F.2d 994 (2d Cir. 1989), 22 A.L.R. Fed. 3d
Art. 4 (2017).
Nonetheless, Netflix doesn’t win its motion to dismiss,
because Chooseco “sufficiently alleged that consumers associate its mark with
interactive books and that the mark covers other forms of interactive media,
including films.” The protagonist in Bandersnatch “explicitly” stated that the
fictitious book at the center of the film’s plot was “a Choose Your Own
Adventure” book. [That’s not the same thing
as explicitly, extradiegetically stating there’s a connection with the film—the
court considers the Fortres Grand case to be almost on all fours, but
there Catwoman “explicitly” says that the program she’s after is called “Clean
Slate.”] Also, the book, the videogame,
and the film itself “all employ the same type of interactivity as Chooseco’s
products.” The similarity between the parties’ products increases the
likelihood of consumer confusion. [Citing Gordon v. Drape, so you can
see the kind of damage it’s doing.] And
Bandersnatch “was set in an era when Chooseco’s books were popular—potentially
amplifying the association between the film and Chooseco in the minds of
consumers.” And Netflix allegedly used a
similar trade dress for the film and its promotion; though the court didn’t
think this was “particularly strong,” it “adds to a context which may create
confusion.” How any of this is “explicit” is left as an exercise for the
reader. Implied or contextual confusion is not explicit falsehood.
The court decided to allow discovery. Question: Discovery about what? What evidence is relevant to whether the film
is “explicitly” misleading about its connection with Chooseco?
Unsurprisingly, Netflix’s descriptive fair use defense was also
not amenable to a motion to dismiss. Here, the character in Bandersnatch held
up a book and stated, “it’s a ‘Choose Your Own Adventure Book.’” “The physical characteristics and context of
the use demonstrate that it is at least plausible Netflix used the term to
attract public attention by associating the film with Chooseco’s book series.”
There were allegations that Netflix knew of the mark and used the mark to
market for a different program until Chooseco sent a cease and desist letter. That
could support “a reasonable inference that Netflix intended to trade on the good
will of Chooseco’s brand,” as could intentional copying of “aspects”
[protectable aspects?] of Chooseco’s trade dress. And Netflix could have used numerous other
phrases to describe the fictitious book’s interactive narrative technique,
making bad faith plausible.
That holding makes sense, given the doctrine. But worse is to come. Netflix argued, quite correctly, that
dilution requires (1) that the defendant use the term as a mark for its own
goods or services, and (2) commercial speech, which the film is not. The court
rejects both arguments.
The court quoted the federal definition of dilution by
tarnishment as an “association arising from the similarity between a mark or
trade name and a famous mark that harms the reputation of the famous mark,”
but didn’t explain why Netflix was plausibly using the term as a mark, as opposed
to using it to label the book in the film. Netflix correctly pointed out that “[t]he
Second Circuit does not recognize an action for dilution where the defendant
uses the plaintiff’s mark not to denote the defendant’s good or services, but
rather to identify goods or services as those of the plaintiff,” but the court thought
that didn’t apply here because Netflix used the mark to refer to a fictitious
book. But the important part here is
the first half: it wasn’t using Choose Your Own Adventure to brand its own
goods or services; it was using it as part of a fictional work. The implication—and it is not a good one—is that
if, in my work of fiction, my character disparages a Choose Your Own Adventure
book that doesn’t actually exist, I may have tarnished the CYOA mark. This is defamation
without any of the limits on defamation that the First Amendment has imposed.
Nonetheless, the court found that “Netflix’s use of Chooseco’s mark implicates
the core purposes of the anti-dilution provision” (citing Hormel, which
is not a federal dilution case and which has been treated as superseded by
federal dilution law, see Tiffany v. eBay).
Netflix then, correctly, pointed out that “the Lanham Act
expressly exempts dilution claims based on a ‘noncommercial use of a mark’ of
the type at issue here.” Despite the fact that in discussing Rogers the
court correctly noted that profit-motivated speech is often noncommercial and Bandersnatch
is noncommercial speech, the court still stated that “Netflix’s use of
Chooseco’s mark may qualify as commercial speech” (emphasis added), which
is not the test. And it so reasoned because Chooseco’s complaint alleged that “Netflix’s
motivations in including its mark in the film were purely economic,” that
Chooseco’s product is popular, and that Netflix used “elements” [protectable
elements?] of Chooseco’s trade dress in promotion and marketing. More discovery!