DOJ Section 230 Roundtable, afternoon
Chatham House Rules
Session 1: Content Moderation, Free Speech, and Conduct
Beyond Speech
How do platforms deal w/defamation? Standard practice is to
review the complaint and the content; compare them to the TOS/code of conduct. Removal if warranted;
contact the poster if warranted; sanction at some level if warranted. If there is no
violation, it can be difficult. The service provider can rarely determine falsity.
Hassell v. Bird is part of a trend of expanding 230 immunities
outside recognizable form. Involved a statement adjudged to be
false/defamatory. Court ordered platform to take it down; platform declined.
Platforms just say they don’t care.
Distributor liability would be consistent w/text of 230.
Zeran ignored 300 years of common law. If a
platform is made aware, there is a responsibility to do something.
That’s not what distributor liability was. There is not 300
years of distributor liability; 300 years ago people were still getting
arrested for lese-majeste. There is no
common law of internet liability. 230 did cut off common law development of
secondary liability but it is possible that the First Amendment requires
something very much like 230.
There are three levels in the Restatement: publisher,
distributor, and provider of mechanisms of speech, such as photocopier
manufacturer and telephone company. That third level is not liable at all.
There are cases on the latter. The Q is whether internet companies are
publishers, distributors, or providers of mechanisms of distribution. Consider the city: if someone is libeling me
on the sidewalk, I can't sue the city. These tend to be entities that are by law
prohibited from discriminating—broadcasters when they sell space to political
candidates. [Notably, not photocopier providers—at least I can't imagine that
anyone thinks that their liability for defamation copied on their machines
turns on whether they only sell to politically correct copiers.] The real
innovation of 230 was not to abandon the two-level structure; rather, it said that
service providers get protection even though they do have the power to decide
what's allowed. Maybe we should abandon this category, or reserve it for
service providers that don’t discriminate, but traditional rules also had
absolute liability, not just notice and takedown. [When communication is not
private one-to-one but one-to-many, nobody wants platforms to not discriminate
against bad content, because that makes them unusable. So the common carrier
neutrality requirement might not be a good fit, though of course that doesn’t
necessarily mean that immunity would be the constitutional rule in the absence of
230.]
The libelous/not libelous problem is very difficult. A lot
of libel judgments in the record are provably fraudulently obtained, not
counting the ones that are outright forgeries. Default judgments, stipulated
judgments—no reason to think they’re trustworthy. Deeper problem: I run a blog,
someone comments negatively on Scientology, which complains. If you impose
notice and takedown, I have to take it down b/c I’m in no position to judge.
Judgment in hand=much smaller set w/its own problems; w/ notice regime, there
will be default takedowns. Maybe that’s fine, but that’s the downside.
No one is arguing against freedom of speech, but there's a
reality that some platforms with recommendation engines/algorithms have more
power than a newspaper over what we will see, amplifying content. So we
should figure out a category for a digital curator that covers companies
that use behavioral data to curate and amplify content, and then the
responsibility is not just in allowing the content but in whether the algorithm
amplified it. You'll have to decide the thresholds, but
there is a missed conversation in acting like all platforms are the same.
230 cut off the development of state law before we could see how rules might
develop to fit a situation that is not analogous to a photocopier. These are
highly curated, controlled environments they are creating. 230 represents a
tradeoff, and they should give something back in public responsibility. That
was the deal in common carriage. In return, they got immunity from libelous
content.
230 clearly meant to reject notice & takedown b/c of
moderator’s dilemma. Most of these cases would fail; moderator can’t tell what
is true. Anti-SLAPP laws are also applicable. Defamation can’t be extrapolated to things
like CSAM, which is illegal under all circumstances.
If they’d fail on the merits, why have 230? It bolsters the
case b/c it shows the real risk of death by 10,000 duck bites. There may be
businesses w/o the wherewithal to deal with a number of frivolous lawsuits. 230
has been useful for getting companies out of litigation, not out of liability;
removing burdens from court system.
DMCA has notice and takedown. Not just sole discretion of
moderator, right?
It is often abused to take down obviously noninfringing
material. Even if the person responds, you can still have the content down for
2 weeks, and that’s very handy in a political system. People use the system for
non © purposes. 512(f) has been interpreted by the courts in ways that make it
extremely difficult to enforce.
Notice & takedown is good for © but overwhelming, which
is why the content owners want staydown. © is always federal, so there's less of
a mismatch. The DMCA isn't about illegal content (CSAM); © infringement is
illegal distribution, not illegal original content.
Where it gets tricky is often where the use involves fair
use b/c it can be difficult to build filters/automatic process to distinguish
lawful/unlawful, which matters for our discussion b/c much of the content isn’t
going to be easy to figure out.
Many, many studies and anecdotal accounts of bad takedown
notices. And the content companies are constantly complaining about the DMCA. The
best regime is the one you’re not operating under.
Notion that 230 didn’t contemplate curation is flatly wrong.
Libraries are curators; the defendant in Stratton Oakmont was a curator. 230 was intended
to incentivize curation. Ultimately,
what is demoting vitriolic content online to make a community less toxic?
That’s curation.
There is a fourth Restatement model: §581 on distributors, which
was almost made up by the Restatement reporters. There's almost no case law
support for distributor liability; the Dobbs hornbook agrees that the 1A would not
tolerate distributor liability. It is just not the case that there were a bunch
of distributor liability cases. But there is a property owner/chattel owner
provision of liability: if you own a bulletin board or something like that,
you can be liable if you're given notice. That seems far closer than distributor liability, but the
legal authority for that is also extraordinarily weak. Even if as a matter of
principle there ought to be such liability, we don't have 100 years of it, and even if
we did, it's unlikely to survive NYT v. Sullivan. Cutting the other direction: to the degree
there is amplification or republication at common law, even of purely third
party content, there is extraordinarily strong precedent for liability for republication
regardless of whether you know the content is defamatory. No particular
reason to think the 1A would cut into the republication rule. Defamation cases that
go over the top [in imposing liability] involve republication. That's just a
mistake by the courts.
A lot of harm at issue is not defamation. Illicit drugs.
What if $ goes through the payment systems they host? If they know that animal
torture rings, pedophilia groups, Satanic groups are hosting video—these are
not hypotheticals.
CDA was about pornography, not just defamation. Indecent
content is very difficult to regulate, b/c it is constitutionally protected for
adults to access. 230 means that many platforms block this constitutionally
protected speech b/c otherwise their platforms would be unusable. 230 allows
platforms to do what gov’t couldn’t.
Should platforms be encouraged to be politically neutral in
content moderation? Is it a danger we should be concerned about as more
political speech occurs in private forums?
Anecdotally, conservatives think that Silicon Valley is
biased against them. If you made it actionable, it would just hide better.
[Leftists say the same thing, BTW.]
Invites us to have a panel where 18 engineers talk about
what law professors should do better. We haven’t had any numbers here.
Discussions of how companies make decisions are completely detached from how
big companies make decisions. People care deeply, but all moderation is about
knobs. You can invest time & effort, but when you moderate more you make
more false positives, and when you moderate less you make more false negatives. Never
sat in any meeting where people said “we’re not legally liable so who
cares?” Political bias: rules v.
enforcement. The rules are generally public. Example: Twitter has a rule you
can’t misgender somebody. There is nothing hidden there. Then there’s bias in
enforcement; companies are very aware of the issue; it’s much larger outside of
the US b/c companies have to hire people with enough English & education to
work at a US tech company, and that tends to be a nonrandom subset of the
population. So moderators tend to come from
groups that may be biased against other subgroups in that country. There are
some tech/review attempted solutions to this but anecdotes aren’t how any of
this works. Millions and millions of decisions are being made at scale. There’s
a valid transparency argument here.
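A minimal sketch of the "knobs" point, in Python (the scores, threshold values, and data below are hypothetical illustrations, not anything described at the roundtable): a single moderation threshold trades false positives against false negatives, so turning the knob in either direction makes one error type worse.

# Hypothetical illustration: one moderation "knob" (a score threshold).
# Lowering it (moderating more) raises false positives; raising it
# (moderating less) raises false negatives.

def count_errors(items, threshold):
    false_positives = sum(1 for score, violates in items
                          if score >= threshold and not violates)  # good content removed
    false_negatives = sum(1 for score, violates in items
                          if score < threshold and violates)       # bad content left up
    return false_positives, false_negatives

# (classifier score, whether the item actually violates policy) -- made-up data
items = [(0.95, True), (0.80, True), (0.75, False), (0.60, True),
         (0.55, False), (0.40, False), (0.30, True), (0.10, False)]

for threshold in (0.3, 0.5, 0.7, 0.9):
    fp, fn = count_errors(items, threshold)
    print(f"threshold={threshold:.1f}  false positives={fp}  false negatives={fn}")

Running it shows the tradeoff directly: the lowest threshold removes the most lawful content, and the highest threshold leaves up the most violating content.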
It’s a false flag to say that conservatives feel this way so
it’s true. Did we take down more ads on one side than the other? We don’t know
which side violated policies more, so that counting won’t work. Need criteria
for what is an ad/what is abusive, and we lack such criteria. This political
conversation takes away from the debate we should be having. [Also: If you are sufficiently transparent to
show what’s going on in detail that might satisfy critics, then you get a lot
of complaints about how you’re sharing bad content, as Lumen has been accused
of doing, and you may also provide a roadmap for bad actors.]
Transparency principles: disclose the numbers; explain to
users what the rules are and which one they violated; provide an opportunity
for appeal. Many companies didn’t have appeal options. We’ve seen improvements
on that.
Everything is biased, but transparency can increase
understanding and trust. Build up to a full-blown standards process where all
stakeholders can be in the room, big and small companies, different users. Not
all use cases are the same. Also, AI is not one tech but a variety of enabling
tech. Its complexity is one of the things that standard developers are
grappling with. Starting with bias, risk, predictability, governance.
It’s a fact that Google held a meeting w/sr executives after
the 2016 election saying it was bad, and conservative engineers were fired for
their views. They should have a transparency report about this.
Of course there’s bias. The policies are deliberately
complex. Not just Google. Executives admit they hire liberal staff. [There is
deep confusion here between the moderators and the executives.] Twitter is the
worst actor. Despite all that, companies should solve the problem themselves.
These hate speech policies are garbage. EU hate speech policy would be much
worse. We have a 1A here that Europe doesn’t believe in. You could ban the Bible
under FB’s code, and Bible-related posts have been removed. Tens of millions of
Americans are sure there’s a problem.
The problem is at scale: every single group in the world
thinks they’re the one that’s targeted. Gay people, Bernie bros, conservatives.
The problem is a massive amount of innumeracy and non-quantitative thinking in
this debate. 1000s of examples of bad
decisions exist even if you’re at nine-nines accuracy. Not a single one of
Google’s employees who was marching about the travel ban makes a single content
moderation decision or oversees anyone who makes a content moderation decision.
It is obvious that everyone in the world will not agree what speech moderation
should be, and they will all think they’re the victims.
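To make the scale arithmetic concrete, a back-of-the-envelope calculation (the decision volume here is an assumption for illustration, not a figure given at the roundtable): expected mistakes are simply volume times error rate, so even extraordinary accuracy leaves a visible absolute number of bad calls.

# Illustrative arithmetic only; the daily volume is an assumption.
daily_decisions = 3_000_000_000  # assumed moderation-relevant decisions per day
for accuracy in (0.999, 0.99999, 0.999999999):  # three nines, five nines, nine nines
    errors_per_year = daily_decisions * 365 * (1 - accuracy)
    print(f"accuracy={accuracy}: ~{errors_per_year:,.0f} mistakes/year")

At the assumed volume, even nine-nines accuracy leaves on the order of a thousand mistakes a year; at more realistic accuracy levels the count runs into the millions.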
There are plenty o’ conservatives at Facebook.
Should there be any affirmative obligation on transparency
in 230? New problems appear that you didn’t anticipate: people with white
supremacist usernames. You could say
that adding a new rule is inconsistent/not transparent, but when you didn’t
have the problem previously you have to develop some response.
Common law development is the norm w/in platforms. As FB
scaled, they went from one page of instructions ("if it makes you feel bad, take
it down") to rules that could be administered based on content. People didn't
want to see hate speech. This wasn't conservative bias; it was informed by civil society
groups & sometimes by European rules. There was no reptilian brain behind it all.
To claim that things didn’t change after 2016 is a fantasy:
big tech responded b/c Donald Trump won an election based on big tech.
No, that’s when you started to care about it.
Transparency: what is it really? Google's numbers don't
really tell you how the system works; they just provide numbers of requests.
Need to talk about granularity as well as scale. GDPR is another consideration.
Europeans won’t allow disclosure of personally identifiable information. That
means some gov’t here will have to extract that data as part of transparency.
Speaking of bias, consider the possible bias of gov’t in
determining whether the platforms are biased. You can’t tell a bookstore which
books to stock, and you can’t go to the NYT or Fox and require them to disclose
their editorial policies in the name of appropriate transparency. Assumption
that this is a matter for gov’t regulation instead of letting the market decide
is a mistake, at least in the US where the 1A constrains.
Misgendering activist was kicked off Twitter for tweets she
made before the anti-misgendering policy, and that’s the core of her legal
claim. 230(c)(2)(A) doesn’t say “hate speech” [though it does say “otherwise
objectionable” and it also says “harassing”]. You can’t have it both ways in
not being responsible for 3d party speech and not being responsible for your
own moderation decisions.
Some courts have said that spam is covered; other courts
have wanted something more like what's listed in (c)(2). This isn't a 230 issue
at all: the courts are recognizing that platforms themselves have 1A rights and
that they cannot violate 1A rights as they are not gov’t actors. Nothing to do
w/230.
As for mandatory transparency, many companies do have law
enforcement transparency reports and are expanding their efforts. Reporting
numbers may be a pretty dry read, but if you dig into the help pages of any
number of sites, you can get a better idea of what the rules actually mean.
Here is where small businesses would need a carveout; when you’ve built computer
systems to do one thing, it can be very hard to convert it into another (the
data you’d need for a transparency report). There’s been a transition period
for companies to revamp their systems in a way that’s useful for transparency
reporting.
Is the court overreading 230 and treating it as an anti-SLAPP
statute at the MTD stage? It is an affirmative
defense, and the Q is whether the elements are present on the face of the
pleading. Usually there isn't much question of whether a platform is an ISP,
whether the content originated w/a third party, etc. Discovery, where it
occurred, has focused on whether there was third party content, and that’s
correctly limited discovery.
1A right to discriminate in policy and enforcement?
Platforms, when acting in recommendation capacity like Google search/FB
stories, get to decide what to include and what not to include. Doesn’t
completely answer what happens solely in platform capacity: YT in what it
chooses to host. One way of thinking about it: first scenario is clearly Miami
Herald v. Tornillo; for the second, there’s a plausible argument that content
or viewpoint neutrality rules could be imposed under Pruneyard/Turner v. FCC on
what they host. The traditional model did say essentially that common carriers
got total immunity, while distributors with power to choose got notice and
takedown. There’s room for argument that 1A immunity requires even-handedness.
Not positive it’s constitutional, but not positive it’s not either.
Evenhandedness is impossible to define. What violates the
policy is the key. Let’s talk about real
victims who were connected to abuse via platforms. Conversation about political
bias is a sideshow that undermines search for help for victims.
Session 2: Addressing Illicit Activity and Incentivizing
Good Samaritans Online
Hypo: user posts a pornographic photo of a young woman.
Individual claiming it’s her asserts it was posted w/out her consent. Platform
doesn’t respond for four weeks. Alleged subject sues for damages she suffered
as a result of the photo. Suppose: Anonymous user posts it; alleged subject
claims it was posted when she was 13.
Argument that 230 still doesn’t cover it: there’s an
exception for crimes, including CSAM. If you look at the provisions that are
covered, they include 2255 & 2252(a), both of which have civil liability.
Argument that 230 does cover it: Doe v. Bates: this case has
already been litigated. The statute is
very clear about being about “criminal” law, not about civil penalties that
might be part of it.
Should 230 immunize this content against civil
claims? The platforms are horrified by the material, didn’t know it was there, and
took action when they knew. If you have a rule that you’ll be liable in these
circumstances, you’ll have platforms stick their heads in the sand. Given
potential criminal exposure, this is not a real life hypothetical.
What’s the current incentive to address this? Criminal
responsibility; notification obligation. And being human beings/adults in the
rooms. The criminal incentive is very strong.
Even FOSTA/SESTA wasn't about creating federal law; they took down
Backpage w/o it. It was about creating state AG authority/allowing survivors to
sue.
What would FB do in this situation? FB unencrypted: every
photo is scanned against PhotoDNA. Assume it's not a known image. All public photos are
run through ML that looks for nudity; if a photo is classified as such, the system looks for CSAM. It would
be queued for special content review; a trained reviewer would classify it by
what's happening and what age the person is.
Depending on the classification, if there was a new high-level
classification, they would look for more content from the same user and directly
call the FBI/law enforcement.
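A rough sketch of the kind of scanning flow described above, in Python. This is not Facebook's actual code; the hash function, classifiers, and thresholds are toy stand-ins (assumptions) so the control flow can run end to end.

# Hypothetical sketch of the scanning flow described above; the hash and
# classifier functions are toy stand-ins, not real production systems.
import hashlib
from dataclasses import dataclass

KNOWN_HASHES: set[str] = set()  # hashes of previously identified material

@dataclass
class Photo:
    data: bytes
    is_public: bool

def photo_hash(photo: Photo) -> str:
    # Stand-in for PhotoDNA; a real system uses a robust perceptual hash.
    return hashlib.sha256(photo.data).hexdigest()

def nudity_score(photo: Photo) -> float:
    return 0.0  # stand-in for the ML nudity classifier

def looks_like_csam(photo: Photo) -> bool:
    return False  # stand-in for a second, more specific model

def handle_upload(photo: Photo, user: str) -> str:
    if photo_hash(photo) in KNOWN_HASHES:
        return "remove + report (matched a known hash)"
    if photo.is_public and nudity_score(photo) > 0.9:  # threshold assumed
        if looks_like_csam(photo):
            # A trained reviewer then classifies what's happening and the apparent
            # age; a new high-severity finding triggers a sweep of the same user's
            # other content and a direct call to the FBI/law enforcement.
            return "queue for trained content review"
        return "remove (nudity policy)"
    return "allow"

print(handle_upload(Photo(b"example bytes", is_public=True), "user123"))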
14 year olds sending their own nude selfies violate child
porn laws.
Sextortion victims are a big part of the population in CSAM material.
They're thought of as less egregious b/c it's not hands-on abuse, but
suicide risk is almost doubled. In terms of 14 year olds breaking the law, the feds
wouldn't look at charging them for that.
But: State law enforcement has charged 14 year olds,
which is relevant to whether we ought to have more state lawsuits against
people that the states blame for bad conduct.
FB doesn't allow public nude photos. If not marked as involving a child,
it would just be deleted. If reported to FB as nonconsensual, it is deleted and FB
keeps the hash, using a better algorithm than PhotoDNA, to make sure it doesn't
reappear. If the victim knows an ex has
the photo, she can submit it to FB, and it goes to a content moderator who
can prevent it from being uploaded. That's a controversial project: "FB wants
your nudes."
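A minimal sketch of the hash-based re-upload blocking described above (illustrative only: a real system uses a perceptual hash so that resized or re-encoded copies still match; the exact cryptographic hash here just shows the flow).

# Illustrative only. Real matching uses perceptual hashing so altered copies
# still match; SHA-256 is used here just to show the reporting/blocking flow.
import hashlib

blocked_hashes: set[str] = set()

def report_nonconsensual(image_bytes: bytes) -> None:
    """Store the hash of a reported image so future uploads can be blocked."""
    blocked_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose hash matches previously reported material."""
    return hashlib.sha256(image_bytes).hexdigest() not in blocked_hashes

report_nonconsensual(b"reported image bytes")
print(allow_upload(b"reported image bytes"))  # False: blocked
print(allow_upload(b"a different image"))     # True: allowed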
Regardless of company size, we care greatly about CSAM and
NCII (nonconsensual intimate images). Everyone views responding to complaints
as the baseline. Companies take the idea of being in violation of criminal law very
seriously. Penalties for failing to report went up significantly in 2008; the criminal
piece of this is so important: any state law consistent w/this section (CSAM)
could be enforced.
FB is the company that does the best at finding the worst,
but that's very unusual. A child couldn't anticipate that with every platform.
There have been no prosecutions for failure to report. The fine is $150,000, which isn't
significant for a tech company.
Not every tech company is funded like WeWork was. In fact, almost no tech companies are, and the
ones that are are definitely taking action. People who aren't deterred by
$150,000 and criminal liability are rare, and where you could deter them more
is by enforcement not by increasing the penalty.
Suppose someone sues the platform for damages as a result of
availability: should provisions be sensitive to different kinds of harm? If
someone is threatened and then raped or murdered, that’s different than having
personal information exposed. We might want to focus liability on the type of
harm proved to have flowed from this.
Identity of the user Q: if there were a criminal
prosecution, then the company would have to turn over the information, and also
if there were a civil suit you can get discovery. Dendrite/similar
standards can be used to override anonymity & get the info.
Platforms send responses to preservation letters telling the sender
they have no obligation to preserve evidence for anybody outside of law enforcement.
They fight the subpoenas even though offenders are usually judgment proof. It costs
$250,000 to fight the subpoenas. FB automatically deletes stuff after a few months.
Platforms ought to perform an evaluation of the
validity of the subpoena.
Euro data law has made platforms turn over data on 30/60 day
windows, that’s global.
Deleting content is very serious; many platforms immediately
delete the material reported to them. Helps criminals cover their own tracks.
There needs to be some type of regulation that when toxic content is reported
there's some curative period during which you have to save it but not leave it up.
Reporting to NCMEC is a precedent for that. Should be used for war crimes, not just
domestic.
The process of giving us hypotheticals in isolation is a
problem. Each example ignores the problem of scale: you get a bunch of these
each day. And there are problems of error and abuse. E.g., French authorities notified the
Internet Archive that they were hosting terrorist content and had 2 hours to
take it down.
Hypo: terrorist uses online platform to recruit. Algorithm
recommends the group to new individuals, who join in planning acts of terror.
Platform gets paid for ads. Survivors of individuals killed in terror act sue
under Anti-Terrorism Act.
Should look at how banks are regulated for terror and crime
content. Money laundering. Financial services industry argued that they were
just P2P platforms that couldn’t be regulated for storing money for illicit
actors, but the US government imposed regulations and now they have to monitor
their own platforms for money laundering. Terror/organized crime aren’t
supposed to use banking services. You agree that your activity will be
monitored, and if bank suspects you’re engaged in illicit activity, a suspicious
activity report will be filed. That’s not working particularly efficiently. How
can we look at systems like NCMEC or other reporting mechanisms to improve upon
them? [This seems like a basic problem that money is not obviously and always illicit,
like CSAM. We’ve just been hearing about NCMEC’s challenges so it seems weird
to look at it for a model—the system that’s best is always the system you’re
not using!] Many companies that produce
chemicals and electronics have to control their supply chains to avoid
diversion by drug cartels or into IEDs. Why does the tech industry get freedom
from liability for the harms their products cause? There are 86 designated
terror groups, and we find activity on major social media platforms. Not fans,
but Hezbollah has an official website and an official FB and Twitter feed. They
do fundraising and recruit.
Interagency colleagues are thoughtful about this—NSC and
other alphabet agencies. They have ideas about financial services, money
laundering, and that would be a productive conversation. But at the end of the
day, there is still a First Amendment, and that’s your challenge. EC is de
facto setting global standards for large platforms. The large platforms would
like something like GDPR because those costs are sunk. EC doesn’t have a 1A
hanging over them; EC is already looking at this at the member level. W/o the
Brits, it could happen within a few years.
On GDPR as a potential model to get big change out of
platforms: It is in fact impossible to comply w/GDPR. The reason it kind of works is that European
regulators sometimes let you fudge if they think you’re acting in good faith,
though that is not without its own political bias. The kind of strict
compliance regularly required by both US regulators and civil litigation is not
compatible with the kind of rules that you can borrow from GDPR type
regimes. Civil liability and especially
class actions are not a significant part of the European model. Having one national regulator to answer to is
very different than having to fend off lawsuits any time anyone thinks you
didn’t act fast enough.
Financial services people know more about their customers than internet services do:
physical branches, physical IDs, etc. Scale: JPMorganChase has 62 million users
and makes $80/user; FB has billions and makes $8/user. If you want people
to be charged $80/year, you can apply money laundering rules to FB and FB will
have branches.
As if we know what is a terrorist organization: the platform
knows there is a page. But there are anti-abortion groups, anti-police groups,
pro-Palestinian groups, and environmental groups w/a radical fringe. Somebody calls them a
terrorist group. The initial hypo says that they use the service to recruit,
radicalize, and promote: the 1A protects a vast range of promoting violence.
Holder v. Humanitarian Law Project: Restricting interaction w/designated groups
is lawful only b/c people can set up independent promotion. Liability for this would
require platforms to remove content that anyone links to violence.
What if they knew it was a terrorist organization? Knowingly
facilitates, solicits, profits from.
How does Google “know”?
This scenario happens all the time. FB takes down a bunch,
misses a bunch. Consider how many women on the planet are named Isis. Terrorism
is one of the hardest things; FB needs its own list of terrorist organizations,
b/c some countries use their lists to suppress racial minorities. A lot of
speech is currently overcensored b/c many of the victims are Muslims connected to
political Islam who are not terrorists, and not enough people who matter care.
What if they knew someone was convicted of stalking on a
dating app? Assume the matching service
knew that but failed to warn. Would 230 immunize that? [Again we have just
decided to assume away the hardest part of the hypo: knowledge, as opposed to
accusation.]
There are a number of cases that speak to when an
interactive service is facilitating/participating in development, like
Roommates and Accusearch. You can’t state that b/c the service is being used,
it’s facilitating. It has to elicit the content in some specific way. If the
site is promoting content likely to be of interest to the user, you can’t jump
to that. Having seen OECD process on terrorism: when companies are saying we
need to be reporting on terrorist content, whose list should we use? There is
no consensus on who should be on the list. We can’t just call something a
terrorist organization w/o speaking to definitions and authority.
Agencies can issue guidance if laws are unclear. Can be
illustrative; could be multistakeholder process.
We’re hearing GDPR a lot with things like ICANN. We have to
make a decision about whether we will kowtow to the EU. If Russians/Chinese had
done GDPR, we'd be raising holy hell. Cost per customer is misleading because the
customers are actually the advertisers [though the advertisers are not the only
people providing content/in need of screening, which is the source of the
problem!]. Google and FB are bigger than banks and making a lot of money. Conduct has to be where we start, not
content/bias. Knowing facilitation/profiting is easier, as w/FOSTA/SESTA.
Didn’t have the ability to pierce the veil w/Backpage b/c of 230. The reason Backpage
went down was a Senate investigation and then the DOJ, but state AGs couldn’t
do it and survivors couldn’t—stopped at the courthouse steps.
Gov’t should have a higher burden of proof for identifying
terrorists, but the blanket immunity is a problem. Tech platforms know that if
all else fails they can fall back on immunity. Hard cases are always extremely
complicated. Blanket immunity can’t be sustainable.
Shady company, MyEx, charged people to take down pictures
that had been posted of them, and sometimes they wouldn’t even take them down.
FTC shut down the site. There are ways to deal w/some of these issues that
don’t involve rebalancing 230. Law enforcement is super important here, and
resources for that are really important.
Are we concerned w/230 invoked against FTC?
State of Nevada was on MyEx case as well. This was §5
deception, not about third party content. We can make those cases. If the
companies are doing something, we can go after that. 230 doesn’t help them if
they’re actually doing the stuff. MyEx
didn’t raise 230, but it wouldn’t have helped.
See also: Accusearch.
Civil suits are by nature anecdotal and not scalable. Every
individual should have the right to bring these cases. [But if you don’t have
strict liability, then most of them should lose.] Extreme cases w/nothing to do
with speech are getting thrown out of court—P is suing for an offender’s
conduct, like creating fake profiles and sending people to someone else’s home.
Grindr is an important case b/c the product itself facilitated the harm. It
wasn’t encouraging violence. It was the actual mode of the violence. The words
the offender used in his DMs weren’t important to the cause of action. Cts have
interpreted 230 so extravagantly. Companies don’t have to build safer products.
One victim was murdered by a first date on Tinder; another person had been
raped by the same man earlier that week, and yet he still got to use that dating app. How do we
stay safe against these platforms?
Proposed changes to 230 that add words to limit it more to
speech. E.g., add “in any action arising out of the publication of content
provided by that information content provider.”
230 has been interpreted to short-circuit all this analysis. Shouldn't
apply to commercial transactions: a dog leash that blinds someone. If these
actions happened in physical space, they wouldn’t be treated as speech. [In cases
on false advertising, who is responsible for the false advertising is very much
an issue; the retailer is not, according to a wave of recent cases. I actually
think the case for Amazon’s product liability is fairly strong, but 230 hasn’t
precluded it.]
The proposed change doesn’t do useful work. Courts have to
have a sense of what information doesn't constitute speech. There is no well-defined
principle; Sorrell v. IMS makes clear that data is speech. It's not going to do what you want it to.
Tech always wins; we need some unpredictability.
Unpredictability is not an asset in these situations; find a
rule that predictably yields the result you want. Immunity for Amazon: the
reason isn’t that leashes are treated as info but not as speech; the reason is
that Amazon is a place for people to publish certain information and Amazon
can’t be held liable; the ad for the leash is info and speech. There are things
you can do, but these suggestions aren’t those things. If you require speech to be an element of the
tort for 230 to apply, then people will just assert IIED instead of defamation
[though Hustler says you can’t do that for 1A purposes]. Would leave disclosure
of private facts immune (including revenge porn) but not IIED. If you have in mind some particular kinds of
harm to regulate against, like revenge porn, you can do that, but if you’re
trying to do that based on elements of the offense, the proposal won’t work
very well.
Reasonability could be done a lot of different ways: safe
harbors, best practices.
There have only been 2 federal criminal cases: Google and
Backpage. There aren't enough resources to go after the wide range of criminal
activity. Frustrated by a discussion that hasn't looked at the major criminal
organizations that have infested major platforms. They should have responsibility
to take reasonable steps to keep the content off their platforms. Particularly
critical for fentanyl sales, but there are other issues ranging from wildlife
crime to gun sales to antiquities and artifacts in conflict zones. Illicit
networks are weaponizing platforms: hosting chatrooms where organized crime
takes place.
Lack of data about use of 230 is important. There are a
number of 230 cases involving discovery. There are a number of cases rigorously
applying the exceptions. We’re aware of a decent number of cases where the
conduct of the platform was more than sufficient for the court to find
liability. Reviewed 500 most cited 230 cases, and before we rush to change 230
it would be helpful to have a full understanding of how it’s applied.
Is blanket immunity still justified now that sites are using
E2E encryption and thus can’t comply with process to hand stuff over?
EARN IT: Every word other than “that” will be subject to
litigation. Even if you assigned it to a commission, that will be litigated.
Full employment for lawyers but not great for clients. As you think of
proposals, don’t think about FB. Think about the Internet Archive, or
Wikimedia, or newspaper comments section. The individual user is who 230
protects. Encryption: vital to many good things; there’s no way you can build
E2E encryption that works only for certain, good people. It works for everyone
or it doesn’t work at all. People would like that not to be true.
EARN IT: substantive Q of tying immunity to encryption;
procedural Q of how it’s done. Procedural answer is straightforward: EARN IT
Act is terrible way to do it. (1) Can’t give rulemaking authority to DOJ—there
are many equities. (2) This is a high level policy tradeoff that should
properly be done by the legislature. Like nuclear power: big risks, big
benefits. (3) Bad politics. 230 is already so complicated. Adding the only
thing that is more complicated than 230 is bad. (4) Kind of trolling/encouraging
pointless disputes: creates a committee, then gives authority to AG.
Possible for a site to ignore the AG as long as it’s not
reckless; do have to take some liability if people start distributing child
porn on their network if they’re reckless about it. [But how do you decide
whether enabling encryption is reckless?] Just stopping looking for child porn
is the same thing as encrypting all the traffic. [That seems like the wrong causation to me,
which I suppose makes me a supporter of the doctrine of double effect.]
Companies are capturing all the benefit of encryption and offloading costs onto
victims of child porn. If E2E is such a great idea, the benefits o/weigh the
costs, and we should put the costs on the same actors, the guys selling the
encryption. [Wait, that's not recklessness. That's a description of strict
liability/liability for ultrahazardous activities, unless you have
predetermined that using encryption is reckless. If the standard is truly recklessness, then
if the benefits outweighed the costs, it shouldn’t be found to be reckless.]
4th Amendment impact: platforms know they should never take
direction from law enforcement. If we act as agents of gov’t, we violate 4th A
rights. If you legally have to look for child porn and have to report it, it's hard
to argue you're not an agent of the state doing a warrantless search on every
piece of content on the service. Zuckerberg would love this law b/c it would
solve his TikTok and Snapchat threats. E2E encryption is the best way to turn a
massive data breach that could destroy the company—or the nation—into a bad
weekend. One of the only advantages we have over China is trust, and protecting
private info is how we get trust. Destroying encryption = long-term harm to
competition and national security. FB has stopped terrorist attacks; handed info
to law enforcement. If you want platforms to do it, has to be voluntary and not
involuntary.
4th amendment is a gray area: have to lean into the tough
area to protect children. 2 issues on encryption: DOJ’s issue is law
enforcement access. NCMEC’s issue is detecting the abuse and being able to make
the report. Encryption works to block both. If you don’t see the child, law
enforcement action is irrelevant. Has heard from technologists that E2E is
wonderful; got to find a way to blend the power of that tool w/child protection
measures. 12 million reports is too many reports to lose.
People are working on options other than law enforcement back
doors.