Panel: Copyright Fair Use
Jacob Victor, Utility-Expanding Fair Use: New technological developments raise the question of utility-enhancing fair use. Project: disaggregate it from other forms of fair use and see why fair use might not be the best mechanism for allowing these technologies to succeed. Campbell: the classic one-off transformative use adds new expression, meaning, or message. Factor one interacts with factor four: transformativeness makes substitutionary market harm less likely, so there's no harm to the incentive function.
Second Circuit: some uses employ tech to achieve the transformative purpose of improving the efficiency of delivering content, but how does that interact w/market harm? The Second Circuit relied on Sony as supporting the idea that enhancing efficient access is itself transformative. Market harm: in Sony, users had authorized access to the works they were timeshifting, so there was no unreasonable encroachment on © entitlements. This falls apart in other situations, such as Fox v. TVEyes. The court found no fair use despite transformativeness that allowed users to isolate, from an ocean of programming, material responsive to their interests and needs. No fair use b/c of market harm: the service deprives Fox of revenues to which it was entitled.
Fair use is the incorrect lens for these types of technologies. Compulsory licensing might provide a more appropriate way to understand what's happening normatively and practically. Uses that are transformative but cause market harm could be subjected to a similar compulsory license regime, whether through judges or through statutory amendment. Though rate setting is costly and unpredictable, the prospect of a compulsory license can galvanize private licensing. Might help a TVEyes-like service.
Xiying Tang: Is your position that courts should set damages that function as compulsory licenses, eBay-like? Or something else? TVEyes wasn't about ability to pay; Fox didn't want its content available for critique and fact-checking. So why would that help in the Fox situation?
A: the former: the balancing in fair use should influence the remedy. There's precedent for an ongoing royalty requirement. Fair use is notoriously unpredictable; Fox probably had a good sense that TVEyes wouldn't win b/c of usurpation of the licensing market. If transformativeness could play a role, that would affect Fox's expectations, which could encourage it to retreat from restrictive demands. Better than the status quo.
Annemarie Bridy: Very hard to get things through Congress without giving a bunch to rightsholders, as with the compulsory licenses for sound recordings. The compulsory license for cable performances came with the addition of the transmit clause. What is the quid pro quo here? A compulsory license didn't save Aereo, if you consider that utility-enhancing.
Victor: good question. Google Books: an example of limiting the market harm analysis by discounting the harm from snippet/fact substitution. But there's a world in which the availability of compulsory licensing shrinks fair use. [Not loving that.]
Eric Goldman: So you want judges to order compulsory licensing: does that apply just to the plaintiffs, or to the universe of potential plaintiffs? If it's the former but all plaintiffs would then win if they brought follow-on litigation, that doesn't work very well. And in Google Books, Google resisted class certification. The whole point of Google Books was the long tail that wasn't covered by any of the lead/named plaintiffs. There's a fit mismatch: you have to explain why the long tail's interests would be served by compulsory licensing.
Betsy Rosenblatt: Permission and zero price are both important parts of fair use. Jane Ginsburg's work on this is similar. In many cases a price of more than zero is practically prohibitive of the utility-promoting activity. Any compulsory license/ratesetting process will at least contemplate, and possibly require, a price higher than zero.
Victor: it won't necessarily be more than zero. In early ratesetting, the CARP (Copyright Arbitration Royalty Panel) looked into costs of dissemination and value added, though that practice has stopped.
Rosenblatt: it would prevent certain kinds of entrants into the activity, because incumbents can pay and newcomers can't. Google maybe could pay to digitize. But anyone who wants to make a competing service would then need to be prepared to pay a compulsory license fee that had been set based on Google's resources.
Victor: pricing variability can accommodate that. © owners should have some control over new dissemination that harms their existing markets.
Jake Linford: what would your test do with the 11th Circuit's Cambridge Univ. Press case, which suggests that it's fair use if there's no license and not fair use if there is? Doesn't your approach narrow that to zero fair use, because you're automatically providing a market/lost licensing fee?
Victor: it’s a matter of copyright policy whether [? Maybe whether the market harm is cognizable market harm instead of just created by definition?]
Peter Yu, Can Algorithms Promote Fair Use?
Dan Burk/Julie Cohen arguments against algorithmic fair use: (1) We don't have the technology. No "judge on a chip." There is improvement in AI, but only in narrow areas. Programming is ex ante; fair use is ex post, finding new situations that are fair use. (2) Change in creative practices will be driven by the algorithms: users will internalize restraints, which will then be reflected back into the system as training data, a self-reinforcing feedback loop. (3) Tech shortcomings: algorithmic biases, black box problems (as Eric Goldman has discussed recently and others like me have noted more generally, the demographic profile of © winners differs from that of © losers; that is probably a bad feature, but it's hard to stop algorithms from picking up on it).
Distinguish between machine-interpretable and non-machine-interpretable fair use. Also, algorithmic copyright enforcement is already in place: Content ID. Why not build more fairness in to protect consumers? Companies are trying to do that, but we should think more about how to develop the algorithm. Use big data. Factor three, amount/substantiality, is amenable to quantitative and qualitative analysis. The quantitative part can be done by computer/counting. Qualitative: how can a computer know what's the heart of the work? But Kindle's popular highlights and other information about popularity exist; Netflix knows how often people pause/replay a film. That is the heart of the work. [No, that's a complete redefinition of the heart of the work. This is what Balkin and others write about as ideological drift.] The answer to the machine is in the machine: it is impossible to do this manually.
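A minimal sketch of the quantitative half of factor three, assuming a simple word n-gram overlap measure; the function names, file names, and five-gram window are illustrative choices, not anything proposed in the talk:

```python
# Toy measure of factor three's quantitative side: what share of the original
# work's word 5-grams reappear in the secondary use. The n-gram size and the
# input file names are hypothetical illustrations.

def ngrams(text, n=5):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def fraction_taken(original, secondary, n=5):
    """Share of the original's n-grams that also appear in the secondary use."""
    orig = ngrams(original, n)
    if not orig:
        return 0.0
    return len(orig & ngrams(secondary, n)) / len(orig)

if __name__ == "__main__":
    original = open("original_work.txt").read()    # hypothetical inputs
    secondary = open("secondary_use.txt").read()
    print(f"Share of original 5-grams reused: {fraction_taken(original, secondary):.1%}")
```

Nothing in a count like this touches the qualitative "heart of the work" question, which is the part in dispute above.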
We are moving from a precision model to a probability model, which is uncomfortable. What about change in creative practices? Need for support: we still need a court; we need to make decisions about the law/machine interface. What's the legal status of an automated fair use ruling? Algorithmic audits will be important, of sample data as well as outcomes. Finally, we need legislative reform of the DMCA, the CFAA, and privacy rules that prevent automated systems from working.
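A minimal sketch of the precision-to-probability shift, assuming a classifier trained on hand-coded outcomes that returns P(fair use) rather than a yes/no; the four-factor feature encoding, the labels, and the tiny dataset are invented for illustration and code no real cases:

```python
# Sketch of a "probability model" of fair use: a logistic regression over
# hypothetical hand-coded factor scores outputs a probability, not a ruling.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per (invented) dispute:
# [transformativeness 0-1, commercial use 0/1, fraction of work taken 0-1, market harm 0-1]
X = np.array([
    [0.9, 1, 0.05, 0.1],
    [0.1, 1, 0.90, 0.9],
    [0.7, 0, 0.30, 0.2],
    [0.2, 1, 0.60, 0.8],
])
y = np.array([1, 0, 1, 0])  # 1 = coded as fair use, 0 = not (invented labels)

model = LogisticRegression().fit(X, y)
new_use = np.array([[0.6, 1, 0.20, 0.4]])
print(f"Estimated P(fair use): {model.predict_proba(new_use)[0, 1]:.2f}")
```

Any such output is only as good as the coded training data, which is the thrust of several objections below.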
David Simon: precision to probability move doesn’t seem right. Now we try to predict what judges will do and we have low certainty in prediction; but you are suggesting a model that produces a higher probability of being right.
A: you know for sure once you're done with court; relying on a computer, you may never know for sure what a judge would have done.
[RT: From a user's perspective, Content ID is not a probability model: it lets the content through or not. YT is not the internet. The only thing they have been able to do in making Content ID more sensitive is duration. Most of the leverage comes b/c they monetize it whether it's fair use or not. We don't have big data about what fair uses are, because we have fewer fair use cases than fair use articles in law reviews. Now we're dealing with predictions that we have no reason to think the human coders producing the training sets will be any good at making (at the very least, all the energy will be focused on who gets to write the training materials for the human coders).] Argument re: heart of the work [as above: that's a complete redefinition; it's what Balkin and others call ideological drift: you change what you measure].
A: partially agree, partially disagree. The way judges think about the heart of the work may be different from what readers and programmers think, but they provide us more data. [But they aren't making the same decisions: when I highlight, it's not b/c something is the heart of the work for © purposes.] That will provide data that can be helpful. It will be important to leave courts in place so that human intervention is available. But I wouldn't throw out the opportunity to rely on the algorithm to help more, even though it can't replicate what courts have been deciding. A lot of fair use determinations are now made by non-judges (individual users) [I agree with that].
Rob Walker: a new transformative use won't be in your dataset of prior transformative uses. The fact that the model hands you a strong answer saying it's not fair use is an almost irresistible draw. That fixes the boundaries/destroys the first factor.
A: need to use best practices, not just past cases. Need a procedural mechanism; it will be difficult. But there could be a certification body to identify the right tech.
Paul Heald: Fair dealing is easier to automate than fair use, and that’s going to happen under Art. 17 in the EU. Might be more productive to focus on that.
A: I want to fight the harder case. But more importantly, fair dealing still involves multifactor analysis; the US has been pushing hard to include the factors.
Fred Yen: Legal realism monster at the side: how do we know that a fair use decision is right in the first place other than by authoritative pronouncement? We don’t really have that. What does it mean to train a system to determine fair use with such a paucity of examples? Yes, we are going to use machines to make all kinds of soft determinations, but stock trading is different from rights retention.
Stephanie Bair: One of Dan Burk’s points was that b/c we have to rely on the data/metrics we have, that changes our idea of fair use in unprincipled directions. Heart of the work is a perfect example of that: it’s not about popularity. Once you incorporate that into the algorithm, that changes our idea about what fair use is, simply because it’s the data we happen to have.
Tyler Ochoa: does this tie in w/the Oracle v. Google question of judge v. jury in fair use? A jury brings a larger number of people into the deliberation. Maybe we want 12 different algorithms to debate the Q of whether this is fair use?
Sean Flynn, The International Right to Research: Beyond Marrakesh, what are the possible international exceptions? Many focus on libraries, universities, and research, but research is undertheorized, so he is working on it. New frontier: digital research, data mining. Mapping the existing limits/openness: a database of user rights that looks at how open © exceptions are: open to all works, all purposes, all users. Countries are ranked, and the US is the highest; developed countries have been more open, and getting more open, than lower-income countries. There is a correlation between openness and research outputs.
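A small sketch of the kind of descriptive check that comparison implies, assuming a country-level table; the file name and column names are hypothetical placeholders, not the actual user-rights database:

```python
# Correlate a country-level "openness of exceptions" score with a research
# output measure. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("user_rights_openness.csv")              # hypothetical data
corr = df["openness_score"].corr(df["research_output"])   # Pearson correlation
print(f"Correlation between openness and research output: {corr:.2f}")
```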
Elements of computational research: you need the right to use AI tools, but more importantly you need the right to create a corpus, and that’s where an exception is needed and the fact/expression dichotomy is not enough. Once you make a corpus in one jurisdiction, can you share that with a researcher in another jurisdiction?
Outside the US, this is done through statutes, and we see notable differences: about ½ the laws don't apply to commercial uses. That brings up a host of underanalyzed issues around public/private partnerships and AI tools with a public purpose built by commercial entities, as with internet search and climate-change AI. Also, fair use/fair dealing standards tend to allow all uses, but some statutes restrict which rights are covered, without much theorization: a few apply to storage explicitly, but others don't. Does that really mean you can use a corpus but not maintain one? Germany says you can make a corpus available to a close family of researchers, but you can't transfer it. Does that mean you can't move it physically? France is also difficult to code: it allows reproduction of a corpus specifically, but says nothing about storage or transfer. A hodgepodge. The recommendation: be more open. Almost all exceptions apply to any user, except that one part of the EU DSM Directive applies only to research and heritage organizations; a noncommercial purpose is assumed to be required. A commercial user under the DSM has many of the same rights but is subject to an opt-out mechanism. "Lawful access" requirement: a bunch of laws say you can mine anything to which you have lawful access. Michael Carroll makes the case that US fair use lacks that requirement.
Towards a new international right to research: there's not a lot of faith that the int'l environment will produce new exceptions, but there are some places for debate, including in WIPO with the broadcast treaty. It builds on the Rome Convention, which has a permissive exception for scientific research, but the current draft of the WIPO treaty removes that permissive exception. There's opportunity there. WTO digital trade is another potential area. Low-hanging fruit: apply the Marrakesh exemption approach: if you can do it anywhere, you can do it everywhere.
Ochoa: storage is inconsistent b/c storage isn’t an exclusive right. We don’t say what happens to a reproduction in most cases. [Though storage means making multiple copies in a digital world, Ochoa notes that reproduction covers each of those copies.]
Ramsey: storage as a matter of third party liability?
RT: how clear is the definition of commerciality? Varies even in US.
A: not at all. Wants to spend time challenging that as an unhelpful limit.
Heald: a lot of constitutions have provisions on access to information; that might be worth appealing to.
Rosenblatt: what about a right to science? If this really does promote research, then a right to benefit from scientific progress is another source. It’s often used to claim less openness/more control rights, but here you could use it for more openness.
A: agree: the right to produce knowledge. And it can’t stop at borders.
Bridy: we think a lot about jurisdictional boundaries. Future TDM consortium in EU is trying to work on cross-border research.