Content and CDA Immunity
Moderator: Olivier Sylvain, Visiting Assistant Professor, Fordham Law School
Samir Jain, Partner, WilmerHale
Prior to 230, the closest analogue to an ISP was a distributor: a bookstore. Distributors couldn't be held liable for content they distributed unless they knew or should have known about it. The courts initially tried to apply this framework. But Stratton Oakmont v. Prodigy, which treated the ISP like a publisher because it did some filtering, sent a shockwave through the industry. 230 was a solution.
Who gets covered by 230? Basically everyone on the internet. When? When the information is provided by another information content provider—someone responsible, in whole or in part, for the creation or development of the content. So the second question: can the defendant be held responsible for the creation or development of the content? Finally, does the action at issue treat the defendant as the publisher or speaker of the content? The scope of that element isn't entirely clear—it might apply only if publication or speech is an element of the tort, defamation being the classic example, but in practice courts have applied it quite broadly.
First appellate decision: Zeran v. AOL, involving ads for obnoxious Oklahoma City bombing T-shirts that gave out Zeran's phone number. Zeran wasn't responsible but got a lot of threats and abuse. AOL delayed in taking down/preventing subsequent postings, and Zeran sued. The first two elements were easy; the only issue was whether Zeran's claim would treat AOL as a publisher/speaker. The Fourth Circuit said yes—holding someone liable for another's speech is almost by definition treating them as the publisher/speaker.
Recently: Roommates and Craigslist, both about housing discrimination. Roommates: what was different was that all users were required to answer the multiple-choice questions, and any answer they gave was, according to the plaintiffs, unlawful—the very fact of revealing gender and number of children was a violation of the law. In that situation, Roommates.com was contributing to/developing the unlawful content. This is distinguishable from cases in which the answers are unlawful only for particular reasons—for example, when the answers are provided by someone impersonating another person (Carafano). So how broad is Roommates? The court purports not to narrow immunity substantially, though there's some loose language in the opinion. Subsequent district court decisions haven't read Roommates to work a big change in 230, though.
Craigslist: The 7th Circuit was unclear in its analysis. It didn't answer whether it was adopting the rule that 230 applies only when publication is a formal element of the tort. Much of its policy language supports a broad reading of 230—the costs to Craigslist of filtering would be big. But it's hard to tell.
Eric Goldman, Associate Professor of Law & Director of the High Tech Law Institute, Santa Clara University School of Law
230 is an incredibly broad and robust immunity that's survived a host of attacks: 100+ lawsuits, with just a handful carving out some exception. Internet actors aren't liable for third-party content, period. The period is the problem: bright lawyers think they can outsmart Congress. But remarkably, they're largely failing—Congress rarely establishes as clean a rule as it did here.
Exclusions: the Electronic Communications Privacy Act (in his opinion, a null set); federal crimes, when the federal government brings a criminal prosecution (gambling, child pornography, obscenity—but the cases make clear that state criminal claims are preempted); and IP, which leaves room for hot news, misappropriation, and other state-law claims. The 9th Circuit tried to cut off that flexibility by holding that state-law IP claims are preempted, based on a policy rationale favoring nationwide uniformity. That's not a popular ruling outside the 9th Circuit.
Another possible workaround: Roommates—the site is responsible for the questions it picks. Possibly that can be extended to marketing representations, even when those representations are rendered untrue by third-party content. If the site says "we don't tolerate defamatory content," and then someone posts defamatory content on the site, is there a cause of action for false advertising that isn't precluded by 230? Mazur v. eBay: eBay represented that a third party's site was "safe." eBay said: that's the fault of the third party. The court said that eBay needed to be responsible for its own words.
Problem: plaintiffs are trying to take advantage of that in problematic ways—trying to hold sites responsible for negative covenants (“don’t post anything defamatory”) in their terms of service. Goldman thinks there are analytical difficulties with this, but that’s probably the most promising way around 230 because there are always marketing representations on a site that are not legal representations.
Why all the agita? 230 seems to break tort law, which we learned in law school applies when someone is involved in harmful activity: if you don't want liability, do as little as possible. 230 breaks apart those principles as we learned them. A website isn't liable even if it gets a C&D/takedown notice; even if it manages/prescreens/edits the content; even if it profits from or takes ownership of third-party content; even if the content looks like the site's own. That just doesn't make sense from a common-law perspective, so we get the very smart Kozinski and Easterbrook thinking it can't be the law.
Two quick defenses of 230: (1) What content do people want to excise from the internet? Mostly negative criticism/commentary. 230 prevents people from taking away negative content; otherwise, we'd get a lopsided database. (2) Information markets. The job-reference market is broken: employers won't give bad references for fear of liability. Compare product reviews on the internet. We see lawsuits against Yelp users for posting, but Yelp isn't liable, and as a result millions of reviews are available on Yelp. A stark contrast to other regimes.
Nancy Kim, Associate Professor of Law, California Western School of Law & Visiting Associate Professor, Rady School of Management, University of California, San Diego
Statutory language: no provider shall be treated as the "publisher or speaker" of information provided by another information content provider. Have the courts gone too far in interpreting this immunity? Courts have looked at the nature of the injury. But ISPs shouldn't get a free pass for being socially irresponsible: they're businesses, not free-speech forums.
They should take reasonable measures to avoid harm. But offline and online are different, so reasonableness should differ, taking into account the volume of traffic, the size of the company, the difficulty of controlling content, and the problems of anonymity. There should be no pre-screening requirement, and no liability based merely on notice. Craigslist says 230 isn't a general immunity from all civil liability. Craigslist itself had measures in place, warning posters against discrimination; the company is leanly staffed and gets millions of posts a day. Also, the Lawyers' Committee could go after the discriminatory posters themselves.
Roommates was different. It was foreseeable that posters would be prompted to post discriminatory ads. Roommates didn’t actually require posters to put in discriminatory information—posters could choose “no preference,” which wouldn’t violate the law. The problem was that they set up the website to prompt entry of discriminatory information.
Doe v. MySpace: A minor met an adult on MySpace, then met him offline and was sexually assaulted. The claim: MySpace should have taken reasonable measures to avoid this, because it was too easy for minors to lie. The court rejected premises liability, but didn't explain why. It did apply what Kim considers a reasonableness analysis: MySpace had a minors policy, but the minor lied to get around it; age-verification software, too, is not foolproof; and given the amount of traffic on the site, MySpace didn't need to do more. Its business model wouldn't work if it had to do more, and Congress has favored online business models.
What if it turned out that 15% of minors on MySpace were meeting adults and being sexually assaulted—would we still get the same result? As a society, that’s not a solution we could live with.
Turn to statute: 230 was designed, among other things, to incentivize blocking and screening technologies, and to immunize ISPs from liability for blocking content they don’t like.
How do we apply that to dontdatehimgirl.com, which publishes pictures and names (including pictures of a man's driver's license)? All postings are anonymous, and this kind of posting is likely to be impulsive. But the site is under no obligation to remove the content—even if you make up with the guy and want it taken down. Gossipreport.com encourages you to make up a profile about someone else, not yourself.
Is this really what Congress had in mind?
Rebecca Tushnet, Professor of Law, Georgetown University Law Center
Preliminary thoughts: notice the ideology encoded in the concept of “intermediaries”—I’m just the middleman—the term automatically calls our attention to the acts of compiling and aggregating. Compare this to “the press,” which also transmits the statements and images of other people (sometimes employees, sometimes not) and yet is traditionally thought at least somewhat responsible for what it transmits. On the other side, you can compare “common carriers,” which aren’t even intermediaries and nobody tries to hold them liable in the modern era. It’s maybe not surprising that we don’t know how to treat the man in the middle.
The moves in the argument over 230 are really well known, on the order of “I can’t pay the rent!—you must pay the rent!” (and somebody always ends up tied to the tracks). At this point I’m inclined to say we need housing reform: that is, if we think that 230 is failing to balance harms versus benefits properly, we need to look at other ways of achieving the benefits we want from regulation.
There are things about 230 that make me uneasy. Example: a pending false advertising/Quizno’s case, where Quizno’s asked users to make comparative ads, and a number of the funny ones said really nasty, possibly defamatory things about Subway sandwiches. If Quizno’s adopts the user-made ads as its own, shouldn’t it be responsible for any defamatory content therein?
Another question to be answered: Lack of uniformity across liability regimes: is IP’s difference from other rights sufficient to justify special treatment? Political power is the easy answer for why IP got treated specially, but there are possible defenses if you think IP rights are easier to enforce or harder to abuse than other claims. (If you think that, do you think that just about copyright? Or do you extend that to trademark? What about the right of publicity? Should publicity be treated the same as privacy?) Should we move towards a more uniform, European-style model where all the rules about intermediary liability are the same?
When we talk about changing 230, though, we go instantly to move and countermove. Move: it's impossible to monitor all our content. Countermove: but you're hurting innocent people. Possible synthesis? Notice and takedown: it works clunkily even for copyright infringement, and copyright is easy compared to defamation and the related torts that concern most would-be reformers of 230. Potential harms to accused users: when users lose posts or accounts, their lives can be disrupted—social networks are valuable to them not just because of particular content, but because of relationships. Privacy issues: anonymity is an important value for many people, and a notice-and-takedown regime would require sacrificing that anonymity to defend a statement challenged by someone upset about it.
This is also a debate about acceptable business models: what risks ought an intermediary to take as a cost of doing business? 230 says: not many. Speaking in "business model" terms makes it sound as if greater liability would be fine, just a matter of money, but I don't think that's right. The landmark NYT v. Sullivan case, establishing newspapers' almost complete First Amendment freedom to say things about public officials, was also a case about business models. The Court was quite clear that it endorsed the paper's business model as a means of implementing First Amendment values—if the paper had to do more fact-checking, it wouldn't run as many political ads or stories.
Sullivan precludes defamation liability for speech about public officials unless there's clear and convincing evidence of actual malice, meaning actual knowledge of falsity or reckless disregard of whether the statement was false. This rule is especially useful for intermediaries.
A printer reproducing his own words can more easily assess whether he has taken reasonable care to verify truth; the real speech-chilling effects of a negligence standard come when he must guess whether someone else who wants to use his press has also taken reasonable care. Moreover, the printer-intermediary is likely to be less committed to getting a message out than a printer-speaker; more inclined to doubt the truth of another’s claims than of his own, and thus not overconfident about his chances of success in a lawsuit; and overall more risk-averse than individual speakers, not least because of the likelihood that the printer has deeper pockets and is a more attractive defendant from a plaintiff’s perspective. Sullivan, though of course protecting individuals as well, removes barriers that disproportionately discourage intermediaries from carrying others’ speech.
Thus, the Supreme Court in Sullivan analyzed what the Times knew about the truth of the statements at issue, not what the individual author of the ad knew. But Sullivan has not generally been understood as a case about intermediary liability. We have 230 because it was unclear how far Sullivan’s rationale—protection for certain speech-based business models—would extend past its rule—no liability for defamation without actual malice. Now, we’re in a better position to say that some business models do produce a more robust speech environment, and the First Amendment has to be an important concern when we talk about reforming 230.
Standard countermove to Professor Kim: the ISP says, if you make a non-immunity rule, I will take down any content about which I get a complaint. Doesn’t matter whether I’m allowed to take the chance and keep up material I’m not sure about. I won’t, because it’s not worth it to me, for exactly the reasons mentioned above.
Where do people engage in free speech? Central Park? Not very likely these days. If we want practical access, it will move through private entities, whether the NYT or the NYT website.
My preference for solving some problems, though not all: Governance solutions: if intermediaries aren’t responsible for user-provided content, then they should have to give up some control over that content—they shouldn’t be able to use contracts to assert absolute dominion over what they allow. If it isn’t their content when it hurts third parties, then at the very least the people whose content it is should be allowed to play a role in governing the community of which they are a part.
Jain: It is true that ISPs will take down any challenged content if there's any risk of liability. Even if you say there's only a 5% risk, why would they take that risk, to say nothing of the litigation costs? Liability creates a heckler's veto. If reasonableness depends at all on notice, then rather than expend resources the ISP will take down the content 99% of the time. There may be cases in which social values outweigh the costs (protection of youth), but as appealing as reasonableness sounds, the practical implication is that any content subject to a reasonableness standard will disappear if challenged.
Goldman: Despite 230, you can get content taken off the internet in a lot of places just by asking; the incentive structure applies even with immunity. Also: has anyone in the room actually personally gotten a C&D? As a blogger, he gets them more often than he'd like. Being in the sights of someone in the business of suing people isn't fun.
Kim: Sensitive to First Amendment concerns, but thinks they're exaggerated. Most businesses that we think about do act in a responsible, reasonable manner under her proposed model, which looks at the front-end procedures a business has in place, not at the content after the injury. There was nothing unreasonable in the NYT's business model. An online reasonableness model could be developed, taking into account things like the way anonymity encourages defamation: RateMyProfessors.com requires students to register, but dontdatehimgirl.com doesn't.
Tushnet: Consider Kim's front-end theory in light of the Tiffany v. eBay case, to be discussed—how much has it cost eBay to establish that its procedures are reasonable (pending appeal)?
Q: If someone were featured on dontdatehimgirl.com, what would the panel recommend they do about it?
Kim: Not sure there’s a good answer.
Goldman: There are ways to push such content onto the second page of Google results, which makes it obscure/effectively makes it disappear.
A: Can complain to the website, citing the ToS, and generally they take it down.
Kim: If it’s a responsible business—people who make up with their exes sometimes fail to get content down from dontdatehimgirl.com, because it makes the site more popular.
Jain: Analyzing the business model is a dangerous road to go down. What Congress clearly said was that it wanted a vibrant and competitive internet—a host of business models, not ex ante judgments about which business models are okay. Is it reasonable to say that a 30-employee site doesn't need to screen, or unreasonable not to have enough employees to screen? That inquiry is a detriment to innovation.
Goldman: Does dontdatehimgirl.com invite inappropriate content? The principle: people should be accountable for their choices. There might be people out there who are really bad dates. Not just boring, but bad. We might want there to be information about that circulating. Whether that site promotes the goal is up for debate, but it’s not the wrong kind of information inherently.
A: Recourse—you can post a response in the comments: "this poster is a psychopath." (Though that response may not help a lot.)
Goldman: Sites can generate information in a variety of ways—some allow discussion and remove problematic content; others don't remove content no matter what and don't allow the target to reply. 230 allows a heterogeneity of solutions. It is true that if you don't have a right of reply on the site, you can get stuck.
Q: Juicycampus.com: New Jersey prosecutors had trouble finding the posters of defamatory content—a specific problem, since the website wouldn't allow more speech in response. Is this really protected speech? How does society benefit?
Goldman: Thinks the questioner is talking about AutoAdmit, where students had trouble finding jobs. Juicycampus is the market working very well: the site was eliciting a lot of non-credible content, and it got drummed out of the marketplace. We are coming to a catharsis about which sites are worthy of our time and credit and which aren't.
A: Public interest—one justification for free speech is not about serving the public interest, but about individual autonomy. Even if the speech is worthless to others.
Q: This is a generational issue, and we're all on the wrong side of it in analyzing how dangerous distasteful, potentially harmful information that is not attributable to a person may be. Older people regard it as much more dangerous.
Kim: Not sure it's true that young people don't care, but assuming they don't, they aren't thinking long-term. Think of the stuff you didn't care about other people knowing when you were in college.
Goldman: There was a groundswell of students who protested Juicycampus and argued for self-restraint in using it. Students can make some judgments, even as they figure out reputation in the long run.
My thought: Gender makes a big difference too. With AutoAdmit, women were afraid to go to the gym because of anonymous posts about how they looked there. That's a real cost.
Kim: Consider the chilling effect of this speech on the speech of the targets. It’s free speech v. free speech—what type of discourse do we want to take shape on the internet?
Q: The legal framework may be at odds with social practice. The definition of what is truly defamatory today may be at odds with the Victorian framework under which defamation law formed. Certain criticisms no longer cause the kind of harm they did when the law was formed. People are now aware that the internet counts as public—if it's out there, it's up for comment. So maybe we need to change the definition of public figure. (I think this is overstated; people still have local expectations of privacy—in fact, I don't expect 10,000 people to read my blog. The collapse of the interval between globally public and completely private is a problem, and the collapse isn't complete either.)
Goldman: With every tech, there's a lost generation; the people who come after learn from their mistakes.
My thought: danah boyd talks about anonymity/pseudonymity as a positive value from teen perspectives, because young people have been taught not to use their real names in order to protect themselves. To then condemn anonymity/pseudonymity seems odd to them.
Reform proposals?
Kim: Likes the 7th Circuit's rule on publishers. In general, 230 is okay, but reasonableness should come in. She thinks notice and takedown should exist in three cases: (1) where the poster requests the takedown; (2) a naked picture, if there's no written authorization from the subject and the request comes from the subject; (3) a picture of a minor, on request. (Bye-bye, Star Wars Kid.) If sites don't comply, they should be open to suit on standard grounds, which means they wouldn't necessarily be liable; liability would depend on the background rules.
Goldman: Likes 230; every system has its costs. What made the internet succeed? He can’t rule out that 230 was a big factor. Tinkering with 230 might undo some of that “secret sauce.” He thinks it’s a huge government success.