First Plenary Session: Current Copyright Concerns and Theory
Matthew Sag, Loyola University Chicago School of Law, Predicting Fair Use: An Empirical Study of Copyright’s Fair Use Doctrine
Beebe points out that there are more law review articles on fair use than there are published opinions. Most work focuses on one area or is theoretical or anecdotal; the few empirical exceptions—Beebe analyzing what courts say about the factors, Pam Samuelson’s policy-relevant clusters—take perspectives internal to the case. Here, Sag tries to test claims and intuitions about the fair use doctrine focused on things outside the courtroom: information that could be available to the parties before they step into the courtroom and have a judge decide what’s what. Dataset: about 280 written opinions at the district court level. Average win rate under 40%; for all you hear about the chaos of fair use, only 16% were successfully appealed on fair use.
Transformativeness, noncommerciality, partial use, and no effect on market value are supposed to favor a finding of fair use; an unpublished plaintiff’s work and a creative work are supposed to disfavor it. His measure of transformativeness tries to capture “creativity shift.” He also doesn’t adopt judges’ definition of commercial use, which is completely screwy. He recoded instead: a use is directly commercial if the defendant used the plaintiff’s work in a commercial product or service, indirectly commercial if the defendant used the plaintiff’s work to prepare a commercial product or service, and noncommercial otherwise. Market effect is prone to circular reasoning, and injury is always disputed; his solution is to assume that companies in the same industry are involved in the same market.
Additional hypothesis embedded in some commentary: fair use is a subsidy for the underdog.
Results, taking out the variables that weren’t statistically significant: creativity shift favored a finding of fair use, direct commercial use disfavored it, partial copying favored it, a weak plaintiff (natural person) favored it, and a weak defendant (natural person) made fair use less likely. The other factors had no significant effect.
Myth: fair use is only available to noncommercial actors. Not statistically significant; no bias against commercial users. Myth: creative/unpublished works are less susceptible to fair use. Also not borne out. Myth: fair use favors the underdog. The evidence goes for the overdog.
Another myth: fair use is highly unpredictable. Add in the factors, and you can get relatively solid predictions.
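Sag’s “add in the factors and predict” point can be illustrated with a toy model. This is a sketch only: the factor names follow the coded variables described above, but the data are synthetic and the weights are made up for illustration; none of this is drawn from the actual study.

```python
# Illustrative sketch: a toy logistic model over binary-coded case features
# of the kind Sag describes (creativity shift, direct commercial use, partial
# copying, natural-person plaintiff/defendant). Synthetic data, made-up
# weights -- NOT the study's data or coefficients.
import math
import random

random.seed(0)

# Hypothetical "true" directions, matching the signs reported above:
# positive favors a fair use finding, negative disfavors it.
TRUE_WEIGHTS = {
    "creativity_shift": 1.2,
    "direct_commercial": -1.0,
    "partial_copying": 0.8,
    "plaintiff_is_person": 0.6,
    "defendant_is_person": -0.7,
}
FACTORS = list(TRUE_WEIGHTS)

def simulate(n=400):
    """Generate synthetic cases whose outcomes follow the assumed weights."""
    cases = []
    for _ in range(n):
        x = {f: random.randint(0, 1) for f in FACTORS}
        score = -0.3 + sum(TRUE_WEIGHTS[f] * x[f] for f in FACTORS)
        p = 1 / (1 + math.exp(-score))
        y = 1 if random.random() < p else 0  # 1 = fair use found
        cases.append((x, y))
    return cases

def fit(cases, lr=0.1, epochs=200):
    """Plain stochastic-gradient logistic regression, no libraries."""
    w = {f: 0.0 for f in FACTORS}
    b = 0.0
    for _ in range(epochs):
        for x, y in cases:
            p = 1 / (1 + math.exp(-(b + sum(w[f] * x[f] for f in FACTORS))))
            err = p - y  # gradient of the log-loss w.r.t. the score
            b -= lr * err
            for f in FACTORS:
                w[f] -= lr * err * x[f]
    return w, b

cases = simulate()
w, b = fit(cases)
correct = sum(
    (1 / (1 + math.exp(-(b + sum(w[f] * x[f] for f in FACTORS)))) >= 0.5) == (y == 1)
    for x, y in cases
)
print(f"training accuracy: {correct / len(cases):.2f}")
```

The point of the sketch is only that simple coded factors support above-chance prediction, which is the sense in which “add in the factors, and you can get relatively solid predictions.”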
What about selection effects? Yes, they are incredibly important; any of these results could be explained by some story about selection effects, but which story is right? E.g., circuit variation: broadly speaking, though the fair use win rate in the Second Circuit is higher, it mostly tracks the national average. A time effect for change since 2000 doesn’t seem to affect the results, and neither does Campbell.
Haochen Sun, The University of Hong Kong, Faculty of Law, Fair Use as a Collective User Right
EFF picture: Fair use has a posse.
Courts have characterized fair use as an affirmative defense: the copyright holder establishes a prima facie case, and the burden of proof then shifts to the user. Fair use, if proved, exempts the user from copyright liability. From this perspective, fair use is geared toward defending personal freedom in self-actualization and fulfilling individual life plans, just like copyright rights are. Civil procedure is designed to protect personal interests; putting the burden on the defendant, courts conclude, is a way of giving equal treatment to both parties’ individual rights.
This fails to consider the public interest in many fair use cases, where the fair user may have a far-reaching impact on other users who aren’t participating in the litigation. Example: Harper & Row. The Court didn’t examine how use of the work could produce public benefits such as the free flow of information, instead focusing on the market value of the work.
We should also recognize identity-based collective rights, designed to allow users to assert interests based on the groups in which they take part: researchers, educators, journalists. And society-based collective rights, for the public as a whole living in a free and just society: the right to cultural participation (e.g., parody) and the right to receive benefits from technological development (e.g., reverse engineering). This would require introducing a public interest test into the fair use analysis. It is not a call to abandon the four-factor analysis.
Q: invoking the rights of nonparty defendants—but then in class actions there’s preclusion, and a requirement of adequate representation. So should defendants have to be adequate representatives etc.?
A: under public trust doctrine, citizens have collective rights over public properties; that entitles the public at large to sue the government or a third party who threatens those rights. We may be able to borrow from that.
Q: but isn’t that endemic in having legal precedents for other people?
A: there is a problem in US culture only recognizing individual rights. (I think a better answer is that the incentives logic inherently posits the plaintiff as a representative of creators generally, but the defendant is on its own. Consider the fourth-factor doctrine that says that we evaluate market effect not by asking what harm if any the defendant did but what harm might happen if the defendant’s behavior becomes widespread. That inherently puts the defendant at a disadvantage and the plaintiff at an advantage. So there’s a disparity in the specific case, not just in the culture.)
Ariel Katz, University of Toronto Faculty of Law, What Antitrust Law Can (and Cannot) Teach about the First Sale Doctrine
Antitrust has an analytic framework for dealing with post-sale restraints. Yet antitrust is being used to undermine the first sale doctrine. Antitrust is suspicious of horizontal agreements, whereas vertical agreements are presumptively benign. Vertical restraints can restrict what a buyer can do: where she can resell, to whom, at what prices, whether she can buy repair/services from someone else. These restraints may increase output through price discrimination, or encourage specific investments in a distribution system and control opportunism, and thus they may be efficient.
Without IP, post-sale restraints can be enforced through contract and termination of the agreement, but there’s not much you can do about third parties. IP remedies are broader and can bind third parties.
Enter first sale. The argument: first sale inefficiently undermines price discrimination, harming people in poorer countries; discourages dealers in territories from developing services; reduces IP appropriability and diminishes incentives to create.
Not so fast. Some vertical restraints are efficient, but not all. Efficiency doesn’t mean they should be part of the property bundle. Various ways to enforce post-sale restraints differ in associated social costs. Property rules may be easier to enforce, but come with greater costs because of their application to third parties and because of their strength through time.
IP is non-Coasean; people can’t contract ex ante either for the rights or the limits on the rights.
First sale is justified by, among other things, user innovation. If innovation is not producer-centric (producers identify needs, do R&D, develop products), then the model doesn’t work. And growing evidence shows that user innovation is widespread and, under certain conditions, superior to producer innovation. IP neutrality: when we design a system, we should try not to benefit one model of innovation at the expense of others.
Q: what about technological measures preventing resale?
A: He thinks the question is whether the restraint is an efficient one or not; he wouldn’t attack the tech but put the burden on the person who wants the long-term restrictions.
Sag: Is it enough that the first generation of users benefit from a network effect, or does there have to be user innovation for your argument to apply?
A: a lot of collaborative work increases value for everyone.
Peter Swire: shouldn’t there be another exception for cases where there are lots of follow-on innovations?
A: we often don’t know the sources of innovation—that’s an insight from the user scholarship, that innovation is unpredictable in the sense that often innovation will come from a non-core producer in the field. Why does the producer need to restrict the ability to do follow-on innovation when it doesn’t know where the innovation will come from? If there’s a good explanation, ok.
James Grimmelmann, New York Law School, A Bridge Too Far: Google Books and the Limits of Class Action Law
What’s wrong with a settlement that governs future conduct? Is selling complete books really beyond the scope of the underlying lawsuit? He thinks that Judge Chin’s opinion can be defended on grounds the court didn’t articulate. Releases are dangerous in class actions, and limiting release to past actions caps the risk of what plaintiffs might lose to what they might gain in the litigation.
We’re concerned with releasing defendant from things it hasn’t done yet, not with releasing future claims (health harms that have yet to materialize but will be caused by things the defendant already did). The latter are highly problematic because people with present claims may grab a lot of compensation at the expense of the claims of people who may get sick in the future. But future conduct doesn’t have the same kind of commonality/class conflict issues, because authors all have the same future interests.
This is also a concern about classes; individuals give releases for future conduct all the time, through contract. But when a class releases a company not just for the asbestos it has sold but for the asbestos it will sell, there is more at stake. These agreements are therefore harder to get right: the deal may not be good for the plaintiff class, and the class representatives aren’t necessarily accountable to the class. Thus, we should only allow releases for a continuation of what the defendant has already been doing; it’s the unpredictable stuff that’s troublesome. What we should allow: everything that could have been precluded based on claim preclusion, and everything that would count as issue preclusion. If Google wins, it wins the right to keep scanning for snippets, so that’s what the settlement could be allowed to cover. The “identical factual predicate” requirement: a settlement may release claims not presented as long as the factual predicate is identical, i.e., claims based on the same conduct that’s already going on. It can release state claims even if there would have been no federal jurisdiction, since those are claims that could have been presented somewhere, but not other claims that might develop if the defendant does something meaningfully different.
Past conduct: scanning and searching are plausibly fair use; selling whole books en masse is not fair use. This is exactly what we should worry about: taking legal questions that couldn’t have been touched in the underlying suit and trading them away. Narrower settlement: allowing Google to scan, index, and show snippets for compensation corresponds to what Google risks at trial: if it won fair use, it would have been allowed to continue to do this without payment. Make sure there are no surprises lurking in the future, but such a settlement would be theoretically allowable.
Q: What if tomorrow Google puts “buy this book” on Google Books and the authors amend the complaint and present a new settlement?
A: Skin in the game. This would require close scrutiny, but if Google really was selling books, it would be open to immense liability, giving the plaintiffs huge leverage. That would be hard to collude on in advance. We want them to do things for which they think they have a strong legal defense.
Q: Is opt-in ok for book sales?
A: Yes. They don’t need a settlement to do that.
Sag: where does the 20% preview fall under your scheme? Can argue that would be fair use, though it’s tough.
A: if Google hasn’t been showing 20%, that’s off the table. If they have, close scrutiny on fairness.