Panel 3: Copyright
Sarah Polcz, Loyalties v. Royalties: Relational Roots in Joint Authorship
Relative reward (a pun!) and joint authorship. Our satisfaction with compensation is not just about absolutes, but about how fair our share is compared to the shares of those alongside whom we work. Default rules matter more where informality prevails, as in songwriting. Default rule scholarship hasn't focused much on relative satisfaction. Maybe we have strong intuitions about preferences. People often feel that reward should roughly track contributions: pay proportionality/wage dispersion. German and English courts have awarded proportional shares, but the US default is an equal split even with evidence of varying contributions. Proportional division is feasible, as Germany and the UK show. So why default to equality?
One possibility: penalty default. Lets courts avoid difficult judgments about contributions. But if parties are unlikely to be legally savvy, the penalty won’t work. Little actual empirical support for intuition about preference for proportional reward.
Band songwriting is a good case study b/c it’s not under work for hire. Parties are less likely to be thinking of legality at time of creation v. other types of creativity and likely to know very little about other bands’ practices. Joint authorship is common in songs. Songwriting royalties are economically important for bands. Publicly available data exist.
Study: gold record bands, perform mostly their own songs. 750 of them. Coded based on first released album, origin region, genre, number of band members, and members’ contribution to songwriting (done by coding interview data). Binary: split equally or not. Checked for media reports about how the band members think about royalty splits, with 84% agreement w/her results.
Bands have gone from primarily unequal to primarily equal splits: 24% of bands formed pre-1970 split equally, v. 70% of those formed post-2000. The increase in equal splits holds across most genres, regions, and numbers of members.
12% of the increase was due to band characteristics—a decrease in the size of bands. Smaller bands are more likely to prefer an equal split, and bands with more even contributions are also more likely to split equally. 24% of the increase can be attributed to other factors, possibly industry changes like increased royalties.
Equal split matches current preferences but is perceived as being at odds with them. Courts have therefore perceived it as being unfair and altered the standards to make it harder for minority contributors to receive a share.
A mark against any claim that common law rules are equally efficient.
RT: is there any reason to think that gold record bands are different?
A: interviews w/bands at multiple stages show similar results across the lifecycle. Experiments found that split preference is generally driven by relationship characteristics. People typically pick their split early in their careers and stick with it; renegotiation is difficult, especially when there's been success.
Tang: you mean groups of at least two members. Why choose bands? If you have that close a relationship, it seems the split would have to be equal: how do you coexist on a tour bus and not split equally? An unequal split would seem to cause high tension/turnover. What about looking at it song by song? Those songwriters don't have the same kind of extended relationship—e.g., songwriter/performer splits are sometimes based on the power of the performer, v. professional songwriters.
A: that's an important category of music; of the songs with 3 or more writers in the entire ASCAP dataset, 70% are by musical groups that typically also perform them. It can change over time, too; typically bands are about 50%. Maybe what you're talking about now is another time-based trend.
Ochoa: you seem critical of the default presumption, but bands are getting closer to the default presumption. What’s wrong with the default presumption as a default rule?
A: not critical of it! Just trying to figure out if it’s a penalty default or reflects preferences. Showing that it does reflect preferences is an important result for judges to take into account when considering implications of a decision that affects other aspects of authorship attribution, like the intent standard.
Ochoa: then you run into the problem of whether bands are typical of other types of coauthorship.
A: I don’t have that data but I can say that they’re an important group of authors.
Ochoa: no reason to believe that music is typical of other types of coauthorship.
A: the relationship finding is also supported by experiments w/nonmusicians: relationships are driving preferences in other coauthorship collaborations, and the trend holds among smaller groups. Bands are also relevant b/c other areas are more likely to be covered by WFH.
Guy Rub: You’re suggesting penalty default is not an explanation, but it could be one explanation out of several.
A: not making any claim about how often they're contracting out; doesn't know whether there is a contract that specifies. But if your theory is that a good reason to have this rule is that people should be induced to specify a share, this isn't a good inducement for many of them/there are reasons they affirmatively want an equal split.
Xiyin Tang, Why Technopessimism Is Bad for Copyright Law: Technopessimism has bled over into © to its detriment.
What did early internet scholars like about the internet? The ability to give many creators access to audiences, and the ability to break down the fetishization of the original and acknowledge that things like sampling have inherent value. Section 512 helped these new technologies proliferate; so did the Google Books decision and anti-SOPA/PIPA activism.
Is the tide turning? Most ominous example: EU Copyright Directive arts. 15 & 17, adopted specifically to address the so-called "value gap"—(US) big tech is just too rich. A distributive justice allocation tool: some of Big Tech's money should be given to other people. Also the sunsetting of the music consent decrees in the US: Assistant AG Delrahim has suggested pulling the protections of the consent decrees from online services, as compared to licensees like Equinox or Starbucks. Other examples: the Cox jury verdict, one of the largest damage awards ever levied, against an ISP.
We still need technooptimism for copyright because of competition/antitrust concerns. Big Content still has most of the bargaining power: Judge Cote in the SDNY found ASCAP and BMI engaged in anticompetitive conduct in negotiations with Pandora. Without the protections of the safe harbor, the consent decrees, and fair use, what we will see is the shutting down of competition on the internet, not just on Google/YT.
Lemley: what do we do about it?
A: working on it. Tell the DOJ not to sunset the consent decrees. The Cox jury verdict is another example: because public sentiment is anti-big-tech, it's bleeding over into these things. Apple's creepy spying shouldn't translate into copyright policy.
Bridy: major broadband providers/telcos are differently situated from the boogeyman tech companies: Google, Apple, Netflix, Amazon, FB. Telcos are often politically opposed to "FAANG." So is Cox a fit?
Jacob: there is also small tech, not just big content and big tech. There's more collaboration b/t big content and big tech: they seem to have agreed to increase the compulsory licensing rate to shut out small-scale competitors. Maybe not that new: a player piano company colluded w/© owners to shut out other player piano cos.
A: is it collaboration? They threatened to sue Spotify into oblivion if it didn’t support the Music Modernization Act. Sees that as another example of big content’s leverage: they can force tech companies to play by their rules or shut down.
Jacob: still interesting that Spotify gets the benefit of shutting out competition as part of the deal.
Rub: Book industry: Amazon controls that industry now. [B/c it’s not as concentrated, even though it’s pretty concentrated! And maybe also b/c the gov’t exercised antitrust authority.]
A: Needs to look further into the Apple antitrust case about book price fixing.
Robert Kirk Walker, Breaking with Convention: The Many Failings of Scènes à Faire
A literary theory about audience expectations turns into a limiting theory alongside idea/expression. “Incidents, characters and settings which are as a practical matter indispensable or at least standard in the treatment of a given topic.” This makes intuitive sense on first look. But consider the use of Nazi imagery in The Producers: it uses the tropes but in a completely revolutionary/parodic way. It’s all both original and highly stock/formulaic. How do we deal with this? What does indispensable mean? Almost no elements are actually necessary; alternatives are regularly available. Courts are trying to get at the idea that some ways of expressing an idea are just more satisfying.
There are a wide variety of genres and microgenres: Netflix has 70,000 ways of grouping works. They’re unstable and change over time: true crime podcasts. Morality plays. They evolve based on audience taste and historical circumstance.
Another issue is tension with aesthetic nondiscrimination: © should protect all "original" works regardless of artistic merit, so courts should avoid aesthetic judgments to prevent reading their own preferences into the law. But in lots of places they have to make aesthetic judgments, and scènes à faire is one place they do: are similarities due to stock elements? What is the genre of the work? What are the standard elements of such a genre?
Also a Feist problem: scenes a faire doesn’t protect common elements even if minimal creativity is present. E.g., unconventional uses of conventional materials may not be protected.
Result: underprotects satire and unconventional uses. Underprotects expression that becomes standard over time: Sherlock Holmes was a novel character in English literature, but is now a standard character type. May underprotect works from marginalized communities misclassified as folk or traditional—history of appropriation from musical artists. Encourages over-claiming because it’s murky: No methods for deciding what conventions are relevant.
Time should matter: scènes à faire should consider what the standard was at the time the work was created. Proposal: the claimant provides genre examples at registration as prima facie evidence, creating an ontological web.
RT: I feel like you're not giving weight to scope v. protection. No one thinks that your book about a dinosaur island where dinosaurs recreated from DNA in fossilized amber roam around is unprotectable; we just think Jurassic Park doesn't infringe it. It's the substantial similarity standard that is our problem, forcing us to determine what parts of a work are protectable b/c protection goes beyond pure reproduction. And timing won't help b/c the idea of reconstructing dinosaurs from DNA predates the dinosaur park book; you shouldn't get a monopoly over that scenario by being the first to market that book.
A: he’s trying to create better guidance for scenes a faire, which right now operates like judicial notice, and isn’t as well defined as short words & phrases or idea/expression. Wants something more rigorous. If we allow certain types of creativity to be discarded as standard, it needs more heft. Not even convinced it should be a separate doctrine from idea/expression. But since we do have it, I want it to be better.
Ochoa: it’s not a separate doctrine, it’s an idea that helps make idea/expression more rigorous as a distinction. Cliches should no longer be protected. Tying a man to the railroad tracks was considered protected; now it’s a cliché. [RT: The first to do it should not get any monopoly over it—there was, objectively, a first work that had a person unable to make a phone call because of lack of cell service, but they should get no rights from that, an achievement made possible only by technological change.] Feist was not a 102(b) case, it was an originality case; there are original ideas that are unprotectable, like not being able to get cell service.
A: judges aren’t particularly trained in knowing it when they see it. This is an art historical question, and that’s what I’m asking for. Bring some rigor to the idea of genre conventions, just as courts try to specify merger.
Tang: it doesn’t make much sense to me to introduce rigor by having the claimant herself be the one to provide examples. It’s a prior art requirement of sorts but the CO is nowhere near as rigorous in examination as the PTO, and patent terms are much shorter. Applications are nearly rubber stamped; they are not good people to be policing the scope of protection. Rigor seems misplaced.
A: the point is not to give them a monopoly, but prior art is a useful concept. It’s not designed to prevent patentability but to present context for the new work, which would supply evidence for the court. Once you have genre change over time, and maybe Ochoa is right that this is a feature and not a bug, it becomes harder to stabilize the boundaries around the © work.
[But if you have substantial similarity as a standard, then there are a bunch of different works that might infringe—stabilizing the boundaries can only be done partially, by comparison with different works. This is Jeanne Fromer’s work on central claiming v. peripheral claiming. The proposal for comparable works isn’t even peripheral claiming, though.]
Alfred Yen, Copying in Copyright: Res Ipsa, Expert Testimony, and the Creative Mind
Infringement is like res ipsa loquitur in tort: a barrel hits you on the head; the only way that happens is negligence. Dark Horse case: look at these similarities; the only way that would happen is if Katy Perry copied. The analytical problem in tort cases: Gore v. Otis Elevator: an automatic door closing like a vise bespeaks negligence/product defect, but more is required under REL: the evidence must support a reasonable inference that the D was at fault.
What if we said: we agree that similarity bespeaks copying, but we need evidence to support a reasonable inference that D copied. If negligence is more likely than no negligence, then an inference is justified. How do we get there? Copyright: done by lay intuition and by expert testimony that the similarity makes copying more likely than not. Res ipsa: done by lay intuition and expert testimony that the plaintiff's injury means that negligence was more likely than not. Sometimes we doubt that the jury has enough information to construct a reasonable probability space—that's where expert testimony matters/is vital.
In REL, courts are not necessarily receptive to expert testimony. It's not necessarily good enough for a doctor to give the opinion that the injuries were caused by D's negligence. Instead, the required basis for REL is that the medical community recognizes that such a result doesn't occur w/o negligence. That's especially useful in cases where there is evidence that the type of result experienced by P occurs in a small percentage of cases, and with no known benign cause.
What works: establish background probability from medical literature associating the injury with negligence, or even from personal experience w/the phenomenon: I have supervised 100s of these operations and seen that this untoward result is associated with this kind of negligence. Or we could alter the background by ruling out benign causes: show that the instrument fails less than 1% of the time, or directly examine the instrument to rule out instrument failure as the cause of damage.
In © expert opinions: we have no such literature studying instances of surreptitious copying that could associate this level of similarity with dishonest copying. Nor do we have people who've personally supervised copying and found that this kind of similarity typically results from copying. It's impossible to examine the inside of Perry's mind. So there's a grave risk of overestimation and handwaving, e.g., Selle v. Gibb, where the expert just asserted that the striking similarities were such that they couldn't have been written independently.
RT: base rate issue: how often do similarities of this type pop up between songs in general? We have no idea about that, but we actually could get information about it.
A: agrees that appropriate expert testimony would not be about the expert’s analysis of two texts, but of large numbers of texts. What about the next step: even given the similarities, what is the likelihood that they result from copying?
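[A rough way to see what the base-rate questions are driving at, and why small shifts in base rates matter so much (a point Buccafusco raises below): treat the inference as an application of Bayes' rule. The sketch below uses invented numbers purely for illustration; none of them come from the talk or from any study.]

```python
# A minimal sketch, with made-up numbers, of how the probability of copying
# given an observed similarity depends on the base rate of copying.

def p_copying_given_similarity(base_rate, p_sim_if_copied, p_sim_if_independent):
    """Bayes' rule: posterior probability of copying given the similarity."""
    p_sim = base_rate * p_sim_if_copied + (1 - base_rate) * p_sim_if_independent
    return base_rate * p_sim_if_copied / p_sim

# Hypothetical inputs: this level of similarity almost always appears when there
# is copying (95%), and appears in 1% of independently written song pairs.
for base_rate in (0.001, 0.01, 0.05):  # assumed prior rates of actual copying
    post = p_copying_given_similarity(base_rate, 0.95, 0.01)
    print(f"prior {base_rate:.3f} -> posterior {post:.2f}")

# prior 0.001 -> posterior 0.09
# prior 0.010 -> posterior 0.49
# prior 0.050 -> posterior 0.83
```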
Tang: the access prong has always bothered her. In most cases, it will be accepted that there was access (these days/for popular songs). Perry swore that she never heard the song, but we still go to the substantial similarity prong because the jury gets to disbelieve her.
A: purpose is neither to criticize nor accept the present division b/t access and copying. Wants to critique treatment of similarities that could be the result of accident/innocence or could be the result of plagiaristic/wrongful copying.
Lemley: what role does REL play here? Wouldn’t it be basically the same argument even without comparison to tort? The question is whether expert testimony is about a relevant thing, so does tort comparison even matter? Relatedly: Are you comparing this to an idealized vision of what experts do in other areas? We don’t often have an expert who’s studied 1000 cases; we often have a doctor with anecdata. Maybe we need higher standards across the board.
A: going to common law doesn’t necessarily tell you something you couldn’t have found from inside ©, but the journey outside makes it easier to see. © doctrine is very favorable to bringing experts in. Looking at other areas where the problems are analytically similar but the treatment of experts is different can give insights. Re: idealized vision of experts, yes, that’s a reasonable question. But he does think that even if other kinds of experts are not as well grounded as we would hope, having a more serious discussion of where you get expertise from is worthwhile. Forensic musicologists who diagnose copying have less to stand on than the doctors, in his opinion.
Lemley: what is the alternative? Expert testimony has been viewed as relatively defense-friendly. Are you thinking courts will dismiss the cases w/o a good enough plaintiff expert? What if the alternative is just punting the whole thing to the jury?
A: that might be right, which would be consistent w/any number of other cases saying the expert can’t testify about the ultimate issue. If the focus of expert testimony was on base rates of commonality/similarity, that might actually give juries the ability to make better decisions.
Chris Buccafusco: small changes in base rates make a huge difference in REL cases; there's a 40-year-old paper on this. If you think REL is hard to prove and Ds too often escape liability, the answer in tort law was strict liability. If we can't actually determine probabilities, maybe we should just stop.
A: could imagine a © system that was like patent, and we didn’t care about actual copying. [aka subconscious copying] But © has never been willing to go there.
Q: aren't you just saying that current experts shouldn't be allowed in under Daubert?
A: maybe that they can be qualified and render an opinion on something, but not on the thing they’re currently being allowed to tell the jury/it’s not enough for the jury to render a verdict in favor of the plaintiff.
Ochoa: This is a special case of circumstantial evidence: we don’t have direct evidence. We talk about access & probative similarity, but it’s a continuum. The ultimate case: access seems impossible, but it’s the exact same work, what then? We have an intuition about the scope of © based on our experience; the only thing an expert has that a jury doesn’t is exposure to more works in the genre.