Monday, April 20, 2026

"higher standard of safety" is puffery even as to child car seats

ElSayed v. Columbus Trading Partners USA Inc., No. 25-cv-01347 (FB) (TAM), 2026 WL 1042209 (E.D.N.Y. Apr. 17, 2026)

ElSayed alleged that CTP’s infant car seats were faulty and defective in violation of NY consumer protection law. The court dismissed the complaint because the “safety” claims were too vague to be actionable.

CTP advertised that its car seat conformed to a “higher standard of safety” because it was “engineered in Germany—where safety standards are among the highest in the world,” among other claims. But the seat was voluntarily recalled because one of the harness system anchor pins tended to break. CTP also offered a free remedy kit, though the kit wasn’t available when the complaint was filed; in the meantime, CTP advised consumers to check the anchor pins for damage before every use until the remedy kits became available.

CTP argued that “New York law requires a manifested defect for a plaintiff to recover on any claim.” But unlike the products described in the cited cases, the car seat didn’t perform satisfactorily:

The recall explicitly instructs caregivers to check the Aton G’s harness pins before every use, because they were prone to bend or break. This is not a situation of theoretical harm caused by a potential defect; at issue here is an actual defect manifested in every Aton G subject to the recall. Accordingly, Plaintiff did not get the benefit of her bargain, instead finding herself saddled with a faulty and dangerous CRS which she could not use as expected and which she had to manually examine before every use. This is not how a car seat is supposed to be used, and it is therefore defective by definition.

However, the false advertising claims failed because they were too vague. Along with the phrases above, CTP also said that the car seat had “advanced safety features”; “combines advanced technologies with luxurious details to deliver an exceptional first car seat for your child”; “marries the highest standard of safety with a focus on child comfort”; and “[o]ffer[s] maximum convenience and safety without comprising on design.”

But general statements about a product’s safety “do not create an enforceable promise.” The court pointed to judicial divisions over whether Uber’s claims to have “the strictest safety standards possible” and “the safest rides on the road” were puffery—some said they were actionable because superiority over other methods was verifiable and others said they were “too boastful, self-congratulatory, aspirational, or vague to amount to misrepresentation.” Under this “vague and inexact” standard, the plaintiff failed to state a claim. CTP’s “highest standards of safety” claim was not paired with any superlative statements and stayed general and vague. The court also found a “meaningful difference between a company claiming that they offer the safest product and claiming that they set the highest safety standards. Standards in the abstract are necessarily aspirational, as they describe a policy or plan and not the actual outcome or product.” [Requiring consumers to read like lawyers always goes well!]

phthalates could be "ingredient" for purposes of falsifying "only natural ingredients"

Wysocki v. Chobani, LLC, --- F.Supp.3d ----, 25-cv-00907-JES-VET, 2026 WL 926713 (S.D. Cal. Apr. 6, 2026)

Wysocki alleged that Chobani’s Greek Yogurt had dangerous phthalates in it. Phthalates are “a group of chemicals [the U.S. Food and Drug Administration (“FDA”) has deemed to be used safely] in hundreds of products, such as ... food packaging, pharmaceuticals, blood bags and tubing, and personal care products.” But Wysocki alleged that they were bad for people.

The court rejected various challenges to the pleadings, including that the cited testing didn’t show that the actual product Wysocki purchased actually contained phthalates because the tested products differed in size (32 oz vs. 5.3 oz), which could reasonably affect phthalate levels, as each size container calls for a different amount of #5 plastic. That is, under Wysocki’s leaching theory, phthalate levels in the 5.3 oz product would likely be lower than those detected in the 32 oz product. Moreover, half of the cited tests detected no phthalates and the testing entity’s own caveat was that results “may not be representative of actual product contents.” These were all factual disputes, and plaintiff pled enough to get past Rule 9(b), with the exception of one phthalate that was not specifically mentioned in the allegations about testing. Allegations that phthalates readily leach into surrounding surfaces and food and are commonly used as a catalyst to make the # 5 plastic container that Chobani predominately uses for its products also helped.

The court rejected the argument that Chobani’s “only natural ingredients” claims weren’t misleading because there was no allegation that phthalates are used, or act, as ingredients in the products. Wysocki plausibly alleged that the claim of “only natural ingredients,” while affirmatively disclaiming the presence of any “artificial flavors,” “artificial sweeteners,” or “preservatives,” represented to her and other reasonable consumers that the product is free of unsafe, unnatural, toxic substances, such as phthalates. At the motion to dismiss stage, a reasonable consumer could understand representations that use terms such as “100% natural” or “natural,” modified by other terms connoting that it is “all natural,” to mean “that a product does not contain any non-natural ingredients.” And “only” was just such a modifier.

A reasonable consumer was also likely to interpret the meaning of the term, “ingredient,” by its ordinary definition: “something that enters into a compound or is a component part of any combination or mixture.” If phthalates’ presence in the yogurt was shown, that would plausibly lead a reasonable consumer to find that the yogurt’s ingredients include phthalates, rendering “only natural ingredients” false.

It didn’t matter that phthalates aren’t on the ingredient list; reasonable consumers don’t have to cross-check the ingredients list when a claim is clear on the face of the product. (And here, the ingredient list wouldn’t help!) Given the “only” representation, “even trace amounts of a non-natural substance, like phthalates, would exponentially alter the previously stated percentages, which in turn results in a misleading ‘natural’ claim.”

Chobani also argued that Wysocki failed to allege that the levels of phthalates in the products render them unhealthy or unsafe to consume. While some courts have required plaintiffs to allege the presence of the alleged harmful substance, at a particular level, to support a misrepresentation claim, that was a question of fact. Wysocki alleged that “natural ingredients are one of the most important aspects of healthy food,” and that, “when food packaging does not contain the word ‘natural,’ over half of reasonable consumers assume the product must contain chemicals.” And she alleged a risk of “unsafe levels” of phthalates, and that disruptions of the endocrine, respiratory, and nervous systems can result from both high and low dose exposure.

However, Wysocki’s partial omission theory failed: she alleged literal falsity, not that a representation was misleading absent further disclosure.

Chobani’s argument that it was insulated by Proposition 65’s warning thresholds was premature. Prop. 65 provides that “no person in the course of doing business shall knowingly and intentionally expose any individual to a chemical known to the state to cause cancer or reproductive toxicity without first giving clear and reasonable warning to such individual where the amount exceeds the [agency-established] no significant risk level.” But, pursuant to a statutory safe harbor, this duty to warn does not apply to business operators when Prop. 65-regulated chemicals exposure levels are equal to or less than the “no significant risk level.” And private plaintiffs who sue to enforce its private right of action have to give pre-suit notice, an unwaivable requirement.

But Wysocki argued that she wasn’t bringing claims under Prop. 65, even though two of the alleged phthalates in the products are on the Prop. 65 chemical list. Though Prop. 65 is concerned with cancer or “reproductive toxicity,” she alleged endocrine disruption, developmental harm, immunological and renal harm, and hormone disruption, “outside the scope of Proposition 65.” Resolving this would require more factfinding than appropriate at this stage.

However, equitable relief and express warranty claims were dismissed.


Brita's clearly qualified filtration claims couldn't mislead reasonable consumers as to lack of qualification

Brown v. Brita Products Company, --- F.4th ----, No. 24-6678, 2026 WL 1028347 (9th Cir. Apr. 16, 2026)

Unlike 800-thread count sheets (see previous post), a reasonable consumer would not expect a fifteen-dollar water filter to “remove or reduce to below lab detectable limits common contaminants hazardous to health” in tap water, particularly in light of clear disclosures to the contrary. Brown brought the usual California claims against Brita.

The Standard Filter, Brita’s lowest cost filter, is certified to reduce five contaminants—copper, mercury, cadmium, chlorine, and zinc—to below the levels recommended by the NSF and EPA. [At least, for now; I assume those recommendations will soon be lifted.] The Elite Filter, a more expensive model, reduces more than a dozen other contaminants to less than or equal to NSF/EPA recommended levels.

The package advertises that the filter “reduces” certain harmful contaminants. The Brita Everyday Water Pitcher, which includes the Standard Filter, claims: “Reduces Chlorine (taste & odor), Mercury, Copper and more” and directs consumers to “see back panel for details.” The back label likewise claims to “reduce” “Copper,” “Mercury,” “Cadmium,” “Chlorine (taste and odor),” and “Zinc (metallic taste).” The product labels also offer links to additional sources of information known as “Performance Data Sheets,” which contain more detailed information on exactly which contaminants are filtered by Brita’s Products, and to what extent—for example, the Standard Filter’s Performance Data Sheet discloses the reductions achieved for each certified contaminant.

Brown bought the Brita Everyday Water Pitcher with the Standard Filter and alleged that he received the misleading message that the product “removes or reduce[s] common contaminants hazardous to health ... to below lab detectable limits.” He pointed to the claims: “BRITA WATER FILTRATION SYSTEM”; “Cleaner, Great-Tasting Water”; “Healthier, Great-Tasting Water”; “The #1 FILTER”; “REDUCES Chlorine (taste and odor) and more!”; “REDUCES Chlorine (taste and odor), Mercury, Copper and more”; and “Reduces 3X Contaminants.” He alleged that the filter didn’t reduce to below lab detectable levels various hazardous contaminants, including arsenic, chromium-6, nitrate and nitrites, perfluorooctanoic acid (PFOA), perfluorooctane sulfonate (PFOS), radium, total trihalomethanes (TTHMs), and uranium.

Material omission claims: Absent a contrary misrepresentation, a duty to disclose arises under California law if either (1) a product contains a defect that poses an unreasonable safety risk; or (2) a product contains a defect that defeats its central function. The omission must also be material. The reasonable consumer standard is not satisfied where plaintiffs allege only “a mere possibility that [the] label might conceivably be misunderstood by some few consumers viewing it in an unreasonable manner.” Even if there was an unreasonable safety hazard or defect in central function, Brita lacked a duty to disclose that its filters didn’t completely remove or reduce to below lab detectable levels all of the alleged contaminants. “Such a disclosure would not be important to a reasonable consumer in light of Brita’s other disclosures on its Products’ packaging and the objective unreasonableness of such an expectation.”

“As a matter of law, no reasonable consumer would expect Brita’s low-cost filters to completely remove or reduce to below lab detectable levels all contaminants present in tap water, particularly in light of Brita’s extensive disclosures to the contrary.” Brita discloses that its filters “reduce” contaminants from tap water, not that they remove contaminants entirely, and specifically discloses the contaminants that are reduced. It also provided “easily accessible information” (the Performance Data Sheets) about the extent of the reductions. Thus, “[b]ecause a reasonable consumer has been made aware of the Products’ limitations, we cannot say that a reasonable consumer would have been misled by Brita’s omission of these limitations on its Products’ packaging.”

an impossible claim is literally false and actionable if believing it is reasonable

Panelli v. Target Corp., --- F.4th ----, No. 24-6640, 2026 WL 1042441 (9th Cir. Apr. 17, 2026)

Something that I don’t yet have a full handle on is happening in 9th Circuit consumer protection cases around literal falsity v. ambiguity. It could be good, but I’m nervous about the potential for weird Lanham Act interactions since “literal falsity” and “ambiguity” sound like the Lanham Act concepts but currently have important differences. FWIW, the emerging consumer protection approach has some things going for it—and if Lanham Act cases started to recognize that consumer surveys shouldn’t rigidly be required in cases of “ambiguity,” that would be a very good thing indeed.

Anyway, Panelli alleged that Target sells some of its “100% cotton” bedsheets with claimed thread counts of 600 or greater, but that it is impossible to achieve such a high thread count with 100% cotton fabric. The court of appeals held that the district court erroneously concluded that Panelli could not be deceived as a matter of law by an impossible claim under the usual California consumer protection laws.

Panelli alleged that independent testing showed the sheets he purchased had a thread count of only 288—not 800, as claimed on the sheet’s label. Indeed, he alleged, “it is physically impossible for cotton threads to be fine enough to allow for 600 or more threads in a single square inch of 100% cotton fabric.” The district court relied on Moore v. Trader Joe’s Co., 4 F.4th 874 (9th Cir. 2021), a badly reasoned case holding, in this opinion’s words, that “a reasonable consumer would be dissuaded by contextual information from reaching an implausible interpretation of the claims on the front label of the challenged product.” If it was physically impossible to achieve 800 thread count, the district court reasoned, then no reasonable consumer would interpret the ad as promising an impossibility.

The court of appeals distinguished Moore because there, “100% New Zealand Manuka Honey” was ambiguous: it didn’t necessarily mean that the bees making the honey fed only on the manuka flower. (This is not the poorly reasoned part, which is the stuff the court says a reasonable consumer should know about honey grading and pricing.) As a result, “reasonable consumers would necessarily require more information before they could reasonably conclude Trader Joe’s label promised a honey that was 100% derived from a single, floral source.” And “(1) the impossibility of making a honey that is 100% derived from one floral source, (2) the low price of Trader Joe’s Manuka Honey, and (3) the presence of the ‘10+’ on the label [which apparently signifies a relatively low manuka content] … would quickly dissuade a reasonable consumer from the belief that Trader Joe’s Manuka Honey was derived from 100% Manuka flower nectar.”

Here, the district court “skipped a step by not analyzing whether the label was ambiguous and therefore required the reasonable consumer to account for outside information to interpret the label’s claim.” The challenged claim here was not ambiguous. It “purports to communicate an objective measurement of a physical aspect of the product.”

Target argued that there are multiple possible measures of thread count—but that doesn’t produce consumer protection law ambiguity, which asks only whether a substantial number of reasonable consumers could think their questions about the feature had been answered without further information, not whether all reasonable consumers would necessarily think that. Note that the multiple possible measures of thread count would produce Lanham Act ambiguity, if the non-false possibilities are reasonable. Here, “it is unlikely that a reasonable consumer would know there are multiple thread-counting methodologies.” Indeed, consumers are not “expected to look beyond misleading representations on the front of the box” to discover the truth of the representations being asserted, and are “likely to exhibit a low degree of care when purchasing low-priced, everyday items,” “like bed sheets sold by a mass-market retailer.”

A reasonable consumer is “unlikely to be familiar with the intricacies of textile manufacturing.” [Moore said that reasonable consumers know how honey is made; its error was to assume that knowledge “bees collect pollen” would somehow translate to “and therefore they’d likely collect lots of different kinds of pollen” when people generally don’t give that much thought to that kind of background information.] “Realistically, a reasonable consumer’s knowledge of textile manufacturing is likely limited to the fact that a higher thread count listed on packaging indicates a higher quality sheet.”

The court added: “Allegations of literal falsity are the most actionable variety of consumer protection claims on California’s spectrum of actionability.” True, some claims can be so clearly false as to avoid deception. But Panelli’s claims weren’t unreasonable or fanciful:

While a vast majority of consumers are, for instance, familiar with the biological nature of bees so that it would be unreasonable for a consumer to think honey was sourced from a single type of flower, they likely would not have that same kind of baseline knowledge about textile manufacturing. Neither common knowledge nor common sense would cause a Target shopper to question the veracity of the claim on the bed sheet’s label that the product was of 800 thread count.

The court declined to create a situation where “manufacturers would face no liability for false advertising so long as the claims were wholly false—regardless of whether this falsity is generally knowable to consumers.”

Friday, April 17, 2026

Panel 6: Unanticipated Consequences of New Technologies and Practices

29th Annual BTLJ-BCLT Spring Symposium: Origins, Evolution, and Possible Futures of the 1976 Copyright Act

Jennifer Urban, UC Berkeley Law (Speaker and Moderator)

Daniel Gervais, Vanderbilt Law: Copyright act as undergirding licensing architectures for AI. © rights are inert without exchange. A reproduction right is sterile if the transaction costs of licensing exceed the value of any license. Ghost architecture of the statute: the licensing machinery built around it by antitrust enforcement/courts, and extended by subsequent legislative initiative. Why a mix of compulsory licenses, court-supervised blanket licenses, CMOs, and congressionally sponsored organizations? Reflects judgments about when markets will work to create licensing regimes on their own and when they won’t.

Congress understood that certain uses would produce market failures if left entirely to the private system—difficulty of advance licensing millions of daily transactions, supervising individual uses. Compulsory licenses are not concessions to users at the expense of rightsholders; they are a mechanism to have market activities occur when otherwise they’d be unlikely to occur at all—tech would be frozen out of the market or rightsowners would be uncompensated. ASCAP, BMI, SESAC allowed for licensing without compulsory licensing.

The initial compulsory license was created to prevent monopolization, not to subsidize record companies. The streaming eras revealed some weaknesses, including “address unknown” filings to the Copyright Office, demonstrating a systemic breakdown. The MMA in 2018 tried to address that failure with a mandatory administrator of a blanket license, reducing the loophole and creating a matching database to find authors & deal with unclaimed royalties.

SoundExchange is neither voluntary nor a traditional intermediary—does not require opt-in. The compulsory license is one half of the architecture. The other is voluntary licensing in text & images, showing judicial calibration of licensing market. This played out with the CCC and fair use litigation—the early fortunes of CCC were modest without a judicial determination that licensing was important. Texaco (2d Cir. 1994) changed that landscape by holding that systematic copying of journal articles was not fair use.

AI is a stress test b/c of the scale of reproduction beyond any existing licensing system. International system: no national licensing scheme can avoid the possibility of arbitrage. The licensing system is starting to respond for high-value sources like NYT. CCC has expanded to cover AI uses. Other countries are introducing AI specific licenses. Voluntary arrangements can try to fill that space even before legislation.

History in US: incremental expansion of compulsory license as scale increases. American experience counsels against using a levy to respond: AHRA’s statutory royalty on digital audio recording devices and blank media seemed designed well but the tech passed through the market like a comet.

How can a system built on territoriality deal with cross-border content? Reciprocal agreements, through voluntary licensing. Each adaptation is slower and imperfect but it does happen. AI: most demanding test b/c of scale, speed, and international complexity.

Matthew Sag, Emory Law: Nonconsumptive uses. © is built on the metaphor of the printing press. Copyright provides incentives to authors whose works would otherwise be freely copied on first publication. Thus, reproduction is the locus of exchange b/t reader and author, where the toll can be imposed. But what if there are no readers?

We have seen a series of copy-reliant technologies—search engines, plagiarism detection, machine learning, generative AI. They necessarily copy works but usually don’t deliver prior original expression to any human reader. This issue wasn’t anticipated in 1976, even if AI authorship clearly was.

Should hidden intermediate copies be permissible if no one ever reads them? Tension b/t 2 intuitions—copying (the technical act) is infringement versus copyright’s purpose is to protect expression communicated to audiences—consider how we judge substantial similarity, or give rights over public performance.

His solution: nonexpressive use is fair use. When he started, he mostly had software reverse engineering in mind, then plagiarism detection and Google Books. Gen AI produces outputs that might compete with human-made expressive works, which changes the politics entirely, if not the law.

Courts have generally held that technical copying is fair use when the copying isn’t communicating to the public. Bartz & Kadrey both found model training to be highly transformative fair use; Ross Intelligence disagreed and is currently under review by the 3d Circuit. If that case goes the other way, it may be on narrow grounds related to the 4th factor.

Where is this heading? Courts have done a pretty reasonable job with the nonexpressive use cases. But we don’t have to rely on courts. Netcom: an analogous issue; court did a great job recognizing insanity of holding infrastructure providers liable for passive passthrough, and articulated volitional conduct requirement. Congress also stepped in and gave us 512, modeled on Netcom but more predictable than the volitional/nonvolitional conduct line. A functional Congress could provide additional clarity.

To that end: proposes revising 107 to recognize that copying works to extract unprotected information or enable nonexpressive computational functions is highly transformative—not per se fair use, b/c there should be room for courts to evaluate the whole picture.

Lots of people perceive licensing as a solution for LLM training. ASCAP is amazing, efficient, but they don’t pay anyone a check for less than $100 or direct deposit for less than $1. It works b/c the authors w/ works of negligible value don’t get paid. But we have no way of tracing which individual works are important to the system. We’d have to divide revenues among a lot of people, not just songwriters, book authors, but everyone who ever posted on social media or commented on Stack Overflow. That’s billions—a very large sum of money divided by billions turns into a lot of transaction costs. You could still send checks to large content owners, but those are precisely the folks who can do deals w/large companies. This would just be a tax system. If you want to tax LLMs and redistribute $ for worthy causes, that’s a great idea, but tax!

Rebecca Tushnet, Harvard Law School: And now for something completely different!

When I started my career writing about fan fiction, which involves fans writing, for example, the further adventures of Kirk and Spock from Star Trek or Mulder and Scully from the X-Files, people in the legal community were often surprised that I cared—wasn’t this a bunch of infringing derivative works? Now, when I talk about fan fiction, people in the legal community are often surprised that I care because noncommercial fanworks seem obviously transformative and fair, or at least obviously not going to come under legal threat. Chloe Zhao directs movies for Marvel and talks about her fan fiction; the actress who plays Dr. Javadi on The Pitt says that her character is a regular girl and gives as a key example that she’s on AO3, which she expects you to know means the Archive of Our Own. My students have never known a world in which fan fiction was hard to find. I’m more pleased to be in the latter situation, but it does make me feel a bit old! And given that noncommercial fanworks were not on the radar of the drafters of the Copyright Act—even if some of them almost certainly knew about science fiction fan culture—my placement on this panel makes sense.

A bit about my relationship with fanworks: a founder and presently co-chair of the legal committee of the Organization for Transformative Works, or OTW. Mission: to support and defend noncommercial fanworks, explicitly framed as transformative both in the legal copyright sense and in the broader sense of being different in exciting ways. One of the ideas was that we’d try to show up in the rooms where it happens to give fans a voice in policy and legal discussions as creators, the way the EFF does for general internet freedom.

Today, the OTW’s Archive of Our Own hosts over seventeen million fanworks—works based on existing media. We’re a Library of Congress American heritage site. The OTW also supports a wiki, Fanlore, dedicated to fan-related topics; a peer-reviewed open-access journal named Transformative Works and Cultures; and a legal advocacy project to help protect and defend fan works from legal challenge and commercial exploitation. The OTW routinely submits amicus briefs and policy comments to courts, legislatures, and regulators regarding copyright, trademark, and right-of-publicity issues.

One of our most longstanding projects has been seeking and obtaining exemptions from 1201 for noncommercial remix videomakers—vidders or fan editors. Our exemption currently allows noncommercial remixers to rip clips of video from DVDs, Blu-Ray and streaming video in order to make their own transformative works.  In the 1201 exemption process the Copyright Office perceives its job to be narrowing your requested exemption as much as possible. Still, we showed that noncommercial fan videos were regularly fair use and that 1201 hampered fans’ ability to make those fair uses. We’ve obtained renewal of those exemptions several times.

Some lessons:

First, there is no substitute in the modern state for organizations that can speak the language of regulation. Citizens must organize or they will be ignored. But a small group of people can effectively do that! Very few of the more radical anti-copyright, anti-capitalist people who think the OTW is a liberal (derogatory) organization are in this room, but I think we’ve had a productive effect on the overall conversation that includes them.

Second: It is not good for everyday practices to get fundamentally out of sync with formal law. If the everyday practices are acceptable and even good, the formal law ought to recognize that, and we can use fair use to do so.

There are those who say that fanworks are tolerated infringement. Some of those people are probably in this room. This is at best an argument that the formal law sweeps way too broadly under any justification you want to give for copyright rights—yes, the main “tolerators” are big conglomerates, simply because as we heard yesterday they’re the source of most of the widely disseminated for-profit copyrighted works we have today, but there’s a reason that even the individual authors who say they oppose fanworks haven’t actually sued over noncommercial fanworks.

In addition, the “tolerated infringement” argument is a profound indictment of statutory damages specifically. If a noncommercial, nonreproductive work both infringes the exclusivity in a copyrighted work and is subject to up to $150,000 in damages, that damage ought to be bad, not just an annoyance. Pam Samuelson has always had the right of it and we heard yesterday various forms of agreement with her position that statutory damages have been harmful to the rest of the copyright scheme.

Third, and more broadly, noncommercial fanworks are good because they offer a distinct field for creative endeavors, separate from the copyright-enabled commercial system. They are both artisanal and widely distributed, making them an important alternative form of expression. Noncommercial works are fundamentally different in the aggregate from commercial works. They can be poetry; 100-word drabbles; short stories; 20,000-word stories; million-word stories; other things there’s not much commercial market for. This is part of what makes fanworks worth preserving and protecting: they are part of the background of a thriving modern creative ecosystem.

Noncommerciality complicates questions around blanket licensing: don’t want money, don’t want to participate in the commercial system.

In addition and relatedly, fan cultures have a long connection to queer writing: fan fiction is inherently about difference/the fact that the story could be different/possibility—encourages both repetition with difference and experimentation, which allows some people to open themselves to various possibilities in the rest of their lives. If you want to cry about the power of creativity, read the stories we collected for our submission to the NTIA’s inquiry into the legal framework for remixes: the power of making stories and other creative works within a community that is excited to hear everyone speak has literally saved lives.

Beyond its transformative effects on people, noncommercial fandom is a huge boon to creativity generally. Professors Andrew Torrance and Eric von Hippel have identified “innovation wetlands”: largely noncommercial spaces in which individuals innovate that can easily be destroyed by laws aimed at large, commercial entities, unless those individuals are specifically considered in the process of legal reform.   Their description fits remix cultures well:

The practice of innovation by individuals prominently involves factors important to “human flourishing,” such as exercise of competence, meaningful engagement, and self-expression. In addition, the innovations individuals create often diffuse to peers who gain value from them …. 

Innovation requires that individuals have rights to make, use, and share their new creations, collaborating with others to improve them, as remix authors do.  Given the small scale and limited resources of most individuals, “[a]nything that raises their innovation costs can therefore have a major deterrent effect.” 

Things I have personally been around for: the adoption of curated folksonomy/AO3-style tags in publishing. New story types and tropes: five things that never happened for exploring different scenarios for characters that together illustrate something about the fan author’s view of the characters; the fan-invented “omegaverse” tropes about humans with certain animalistic characteristics.

If you forget about noncommercial works in your creativity policy, you enable the destruction of vital diversity and seed corn for the next generation.

Finally, a coda with another view of internationalism: At the time of the OTW’s founding, nearly twenty years ago, the US was the only place we could count on for a strong and flexible fair use defense. This has somewhat changed, including through the adoption of fair use in several other jurisdictions, Canada’s noncommercial user-generated content exception, and most recently greater European flexibility on pastiche, but fair use’s impact is still really notable. American hegemony meant that we didn’t even need a term like “the Brussels effect” for the effect of American fair use and safe harbor laws, but it really did seem like the internet was another American territory. That’s changing, more every day, but we are probably going to miss it when it’s gone.

Jennifer Urban: In-formalization, term extension, and orphan works. Although there was a near-consensus and energy to address it, c2004-2015, efforts were ultimately not a rousing success.

Orphan works: policy questions are related to your sense of who is an author & what authors generally want. Orphan = owner can’t be identified and someone wants to make use of a work in a manner that requires the owner’s permission. The 76 Act increased the number of orphan works by removing the formalities. Widespread agreement thus on the definition and scope of the problem.

Solution space: limitations on remedies of injunctive relief, especially when a significant amount of original expression was added; limitations on damage remedies (US proposals); statutory exceptions (EU directive w/r/t making available and reproduction rights); compensation to later-appearing © owners (reasonable compensation, extended collective licensing).

Conditions on relief: proposed: reasonably diligent search; identify use as orphan work on the use itself (notice requirement); register use, potentially with waiting period before use; takedown/stop use upon appearance of © owners; pay compensation to later-appearing owner; provide attribution to later-appearing owner; categorical limitation on type of users (e.g., EU © Directive: educational, library, & public heritage institutions & public broadcasters).

Why so complicated? Different uses are different: archive/library digitization are sensitive to search costs; takedown on notice is more feasible; licensing fees may be prohibitive at scale. Derivative works/smaller scale: more extensive search may be more feasible but takedown/removal not feasible and injunctive relief is prohibitive. Where you were willing to compromise depends on where you sit.

Similarly for copyright owners: photographers/illustrators were worried they’d be hard to find & usually don’t need to use orphan works themselves. Filmmakers are easier to find and more likely to want to use orphan works.

Limited effectiveness: administrative/centralized licensing adopted in Canada, Japan, Korea, Hungary, UK—fewer than 1000 licenses total issued from 1999 through 2015. Expensive, not productive. [CASE Act looks better than that!]

2021 EU directive followup found very limited use of the EUIPO database and very limited use overall by most eligible organizations. 70% of entries in the database were registered by the British Library, and the number dropped hugely after Brexit. Lots of complaints about strict search requirements.

Fair use case law also developed to allow a lot of the big data uses; a risk management question. People worried about orphan works protection cabining fair use, even with a savings clause, and that slowed momentum.

Where are we now? Substantial strides in digitization of Office records, which is helpful. But records remaining are in the “sour” spot of 1945-1978. Later-appearing © owner can still register and then sue. Risk aversion is still an issue. Gatekeepers for small creators, libraries—people making decisions about risk aren’t necessarily fully economically rational but have practical effects. Same things with fair use. Occasionally, courts have considered market unavailability in the fair use analysis, but that brings in gatekeepers/risk aversion, leading to “clearance required” policies. And the definition of an orphan work is that it can’t be cleared.

AI raises similar but maybe harder problems.

Urban to RT: how does AI training compensation come into this?

A: it’s incommensurable. It’s like offering me payment after I had you over for dinner at my house. There’s nothing immoral about restaurants but that’s not the kind of relationship I was seeking.

Q about 103(b) and fanworks: if they're fair use, then 103(b) doesn't come into it. Fan authors sometimes worry about commercial misappropriation: they have a copyright in their fair use fanworks, so they can try to shut down unauthorized commercial uses, and they also aren't responsible for such unauthorized uses. Goldsmith even makes this a bit clearer by establishing that the analysis goes use by use; a fanwork created for noncommercial purposes is fair regardless of whether deliberate monetization by the creator would be unfair.

Urban to Sag: how does international nature of training affect this?

Sag: the international scene is quite complicated. Peter Yu & Sag survey the global scene—different jurisdictions take very different approaches, but each trying to (1) make a pathway for legal text data mining, (2) have some protections for © owners. What you see is difference in regulatory style. EU is far more prescriptive in DSM directive. There’s clarity there; some others go further than fair use, but may require, e.g., not just noncommerciality but affiliation w/a library or university. People who think we can put the genie back in the bottle are likely wrong, but even if that’s what you wanted to do, a lot of this activity is portable—you can go to other jurisdictions to train. And that fact of int’l competition should be recognized. Hard to see how a licensing system or tax & redistribution system could work on an int’l basis. We don’t have the political competence to do it here on a national basis, but they might be able to do it in the EU. Only a handful of jurisdictions have TDM protections, but it’s 52% of the world’s GDP. The fact that we allow it in the US isn’t an outlier among our peers.

Gervais: voluntary licensing can deal with crossborder issues. Collective or individual licenses can say something like “parties don’t agree on current scope of fair use” but contracts can manage that risk up to a point, waiting until there’s more coherence in the courts.

RT: maybe we should bring Kalshi in and just use prediction markets. [joke!]

Urban: if there’s nobody to pay, then the orphan works schemes involving collection don’t support the © system.

Q: about licenses b/t major copyright owners and AI companies: will they narrow the scope of fair use?

Sag: I don’t think those licenses should narrow the scope of fair use, though the editor of the Atlantic did say that he entered into one such license to prove the existence/validity of the licensing market. A few notes: most of the licenses, as far as he can tell, are not just for AI training but for retrieval-augmented generation—the economics and copyright implications of sending an AI agent onto the web to gather materials and assemble them into a report are quite different from the AI training cases, and it makes sense to license that activity. Mostly they’re licensing access, which you can see most easily with Reddit, which doesn’t own © in content but charges $60 million/year for firehose access. That’s fine, though it shows the need to update the robots.txt protocol, but these deals don’t prove that licensing is a general training solution. We’ll see more of those licensing deals and they’re good, but he hopes courts don’t jump to “market for training.”
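[For readers unfamiliar with the protocol Sag mentions: robots.txt is a voluntary plain-text convention for telling crawlers which parts of a site not to fetch. A hypothetical sketch of the kind of update at issue—letting ordinary search crawling continue while refusing AI-training crawlers; the crawler names here are illustrative, and compliance is entirely voluntary:]

```text
# Hypothetical robots.txt: permit ordinary search crawling,
# refuse crawlers associated with AI training.
User-agent: Googlebot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```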

Litman to Sag: instead of amending fair use to presume training highly transformative, consider moving away from fair use and avoiding “transformative,” which attracts additional political, emotional, religious opposition that you don’t need.


Panel 5: Copyrightable Subject Matter and the Special Problem of Software

29th Annual BTLJ-BCLT Spring Symposium: Origins, Evolution, and Possible Futures of the 1976 Copyright Act

Pamela Samuelson, UC Berkeley Law (Moderator and Speaker): discusses history (in which she was intimately involved as an intellectual powerhouse). From uncertainty over whether software was protectable to Whelan which gave very broad protection; took 6 years for the Second Circuit to respond and start with Baker v. Selden to keep functional elements out of © protection. Merger, scenes a faire, 102(b), fair use—doctrinal cocktails, in the words of Molly van Houweling.

Samuelson initially thought sui generis protection for software would be better, but admits error: © did a really good job and gave an international standard that’s enabled some stability.

Jule Sigall, former Microsoft: CONTU was doing its work as Microsoft was just getting started. Trade secrets, patents, and copyrights do different work at different eras of software. 1980: PC era—rapid rise of copyright’s relevance. Business model: product licenses. Practical control: EULA, shrinkwrap, key disc/dongle. Copyright’s salience for executives was high for how they were going to recover fixed cost investment. This was the model CONTU had in mind when it decided to embrace software ©: you make a product & send it out through distribution channels not unlike books.

1990s: WWW. Easier to send software as bits. Business model if people won’t necessarily pay for copies: hardware bundling (Apple; PC with independent OEMs); ad supported. Practical control: B2B contracts. Copyright salience: medium.

2000s: cloud and OSS: business model: subscription/SaaS/consulting. Practical control: server access control/OSS license; not much a pirate copy will do for you. Copyright salience: medium. Antipiracy efforts shifted to antifraud—scammers would purport to sell subscriptions. Open source was a different path—add consulting services to OS or build services using OS. That does depend on © but the most prevalent ©-based model was making software as accessible as possible and using © to ensure it was only used/redistributed in certain ways.

2010s: mobile era/app ecosystem. Business model: app store sales/subscriptions—you can, as in the 80s, get paid for a copy. Practical control: platform control/cloud services. © salience: low.

2020s: AI. Business model: ?? Practical control: ?? Copyright salience: None? [Real underpants gnomes vibes.] More software will be developed by more people than ever before. The tools allow people of all kinds to make software, and they allow software to make software. Maybe we are back where we started before CONTU with unclear © coverage.

Clark Asay, BYU Law: reasons for concern, but countervailing forces/reasons for optimism. FOSS licenses presuppose copyrightable code: copyleft, attribution, etc. W/o © the governance architecture becomes much less reliable. This comes in the context of other developments that threaten open source—MongoDB and Elasticsearch have abandoned OS licenses; monetization has always been a question for companies that can’t directly monetize software. AI agents are creating tons of software and making pull requests/contributions to OS projects w/o human review, overwhelming maintainers in some cases. Some projects are closing off in response. Open collaboration norms may be eroding from multiple directions simultaneously.

Might push us more in the direction of trade secrecy and possibly patents. A more closed, fragmented software ecosystem and possibly AI system. But developers’ desire to influence the AI stack is likely to keep the ecosystem at least partially open.

A. Feder Cooper, Yale University (co-author Mark Lemley, Stanford Law School): Model weights that give a possibility but not a certainty of generating infringing output: is that a “copy”? Relates to the memorization debate. It’s common to describe models as learning statistical correlations or patterns: that’s not wrong, but it oversimplifies how info is represented. Another important part: how the LLM is used. Some methods of selecting outputs are deterministic—same input, same output; many are stochastic. Variability in outputs doesn’t derive from the model itself but from how the model is used in decoding.
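[Cooper’s deterministic-vs-stochastic distinction can be sketched in a few lines. This is a toy illustration with made-up numbers, not any real model’s distribution:]

```python
import random

# Toy next-token distribution; the probabilities are invented for
# illustration and don't come from any real model.
probs = {"the": 0.6, "a": 0.3, "an": 0.1}

def greedy_decode(distribution):
    # Deterministic decoding: always pick the highest-probability token,
    # so the same distribution always yields the same output.
    return max(distribution, key=distribution.get)

def sample_decode(distribution, rng):
    # Stochastic decoding: draw a token in proportion to its probability,
    # so the same distribution can yield different outputs on different runs.
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded only so the demo is reproducible
greedy_outputs = {greedy_decode(probs) for _ in range(100)}
sampled_outputs = {sample_decode(probs, rng) for _ in range(100)}
```

[Here `greedy_outputs` contains only “the,” while `sampled_outputs` contains several tokens: the variability comes from the decoding step, not from any change in the underlying distribution. Memorization, in Cooper’s terms, corresponds to a distribution so sharply peaked that even sampling almost always returns the same sequence.]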

Memorization is when, based on training, the model produces a really high concentration of probability on particular sequences. The model is still probabilistic, but the distribution is so sharply peaked that one sequence (or a small number of sequences) dominates. This is related to compression: memorization means that Ted Chiang’s “blurry jpg of the web” is sometimes not blurry at all for certain chunks. Memorization is still pretty mysterious—it keeps giving new insights about LLM behavior. Not a bug; it’s far too interesting and complicated.

What is a copy? The statute’s answer is pretty incoherent: copies are material objects in which a work is fixed. (The “by or under the authority of the © owner” language can’t be taken seriously for infringement by copying. We used the same definitions for protectability and infringement, so courts just ignore that part for infringement.) In litigation, parties take extreme positions—no memorization, or models are just a collage. Neither of these is right, and sometimes not even partially right.

We can extract a near reproduction of Harry Potter from a short prompt from Meta’s Llama: the decoding there is deterministic. That’s an extreme result—extraction is possible from some models for some works and not others. Most of our experiments measure whether verbatim memorization is occurring; we can get more if we accept small changes like extra spaces or commas in place of semicolons. Sometimes we needed adversarial strategies but sometimes not. None of that work changes the model weights, but changing the weights can also be used to extract more works.

Jane Ginsburg et al. have shown that fine-tuning on public domain works can reveal memorization from previously-trained-on © works.

So is a model a copy fixed in a tangible medium of expression? That’s still complicated! You can make a copy by storing parts in ones & zeros. But you can’t say that Microsoft Word encodes War & Peace. Models aren’t like either of those things. Some of the memorization isn’t deterministic—you might only get a memorized copy one in 1000 times. Are the other 999 “stored” in the model? That would involve more copies stored than there are atoms in the universe.
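[The atoms-in-the-universe point is simple arithmetic over sequence counts. A back-of-envelope check, using assumed illustrative figures—a 50,000-token vocabulary, 100-token sequences, and the commonly cited rough estimate of ~10^80 atoms in the observable universe:]

```python
# Assumed illustrative figures, not tied to any particular model:
vocab_size = 50_000           # size of the token vocabulary
sequence_length = 100         # length of a candidate output sequence
atoms_in_universe = 10 ** 80  # commonly cited rough estimate

# Number of distinct token sequences of that length to which the model
# could, in principle, assign some nonzero probability:
possible_sequences = vocab_size ** sequence_length

# 50000^100 is a 470-digit number, dwarfing 10^80: treating every
# nonzero-probability sequence as "stored" in the model quickly
# becomes absurd.
digits = len(str(possible_sequences))
```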

Closest examples in existing law: Kelly v. Chicago Park District—garden isn’t fixed b/c it isn’t deterministic; video games where content is generated from a number of fixed options. Micro Star: the new levels aren’t really “in the game.” Nor would we say that all the possibilities currently exist. So maybe the answer is predictability: if the model weights can easily generate the work, functionally there’s a copy in the model. If it’s merely possible to extract the work through effort, it’s not a copy. Why it matters: if there’s a copy in the model, then copying the model is making a copy of the work. Maybe that’s fair use (via intermediate use) but we’d have to figure it out.

Doesn’t love the conclusion, but this is where the empirical evidence leads.

Samuelson for Sigall: you didn’t say much about patents—Whelan might be affected by the idea that patents weren’t available; then patents started becoming available, making thick © less attractive.

Sigall: late 90s was a marriage of two historical trends: if you want to go the IP route for software, patents might be more efficient/useful b/c there’s also a risk with seeking ©. Patents and © come with embedded strategic choices about your business. Book: Capitalism w/o Capital: many of the most successful companies today have intangible assets, not tangible assets—a lot of the benefit is taking advantage of synergies and spillovers in intangible assets. IP can interrupt and interfere w/those synergies & spillovers so it might not be optimal—businesses can capitalize on other aspects instead of IP.

Samuelson for Asay: what do you do w/the Office’s policy requiring you to ID the parts that are AI-generated and disclaim authorship? Will people do that or just pretend that they authored the whole thing?

Asay: Unworkable! Possible that developers will just continue as usual and ignore © complications, slapping license on even if code is AI-generated; that’s somebody else’s problem.

Sag: how do you deal with misuse of your work as evidence that LLMs don’t learn, they copy?

Cooper: Not great feeling! The research I do is careful and the papers are long; that’s not an accurate gloss of what models are doing. But it’s important to do the work to show information about model behavior that we didn’t know before.

Q: is Harry Potter an outlier given how many copies there are online?

A: It’s astonishing still to get a book from a fragmentary prompt; not all models do this and certainly not all the time, but other books can be derived; it’s hard to connect the dots from training data. Tried to do it with Coates’ “The Case for Reparations”—also got that from the same model—it’s very famous but not HP famous.

Cathy Gellis: isn’t © a background assumption for these business models even if you aren’t “relying” on it? If © didn’t exist, would these business models work?

Sigall: it’s a behavioral Q—what behavior is © shaping and it’s certainly possible that affects what businesses do with particular software. It’s there, but the Q is how do you use that fact as a business in your strategic choices? Microsoft housed its antipiracy department in the marketing department, not legal, because the goal wasn’t really to stop piracy but to get them to use Microsoft software. Other industries put antipiracy efforts in legal. Trying to understand actual behavior of users of their works and adapt to that. [This may also be relevant to the shift to streaming video/music!]

Brauneis: suggests that Office’s disclosure form isn’t onerous; doesn’t require you to ID which lines are AI-generated, so you should disclose and figure it out later.

Asay: may be true, but issue in the industry is norms/perceptions about copyrightability—that’s more important to behavior than technicalities of registration. [So what he’s saying is that coders have … always gone on vibes?]

Samuelson: A bit of an old problem with SaaS. Oracle started with a PD work and then made a derivative work from it; trying to sort which parts were protected from which weren’t was already a task.

Bracha: you said that you were wrong about sui generis protection for software because after that didn’t happen, courts rolled up their sleeves and did their job of developing relevant principles. Do you think that courts would do the same thing today?

Samuelson: good point—we sort of got sui generis protection w/in copyright.

Nimmer: works that incorporate works from the USG should in theory disclose that, even if it’s a paragraph quote; they don’t and it’s been a nonissue. So it could also work for AI.  

Thursday, April 16, 2026

Copyright Act Panel 4: The Shifting Line Between Federal and State Protection

29th Annual BTLJ-BCLT Spring Symposium: Origins, Evolution, and Possible Futures of the 1976 Copyright Act

R. Anthony Reese, UCI School of Law

Fundamental change: eliminating common-law copyright for unpublished works and unifying the regime at creation. Contemporaries like Ralph Sharp Brown saw it as a huge, pivotal change. Now we take it as an easy background principle.

Federal law did provide a cause of action before 1909 for copying of unpublished manuscripts. But it was a procedural door, not substantive. More significantly, the 1909 Act dropped that but did allow certain types of unpublished works to obtain federal © by registration: categories where works were commonly performed/exhibited rather than published. Most people think of this as a footnote, but this new option turned out to mark a significant shift in the state/federal protection divide: for lectures, dramatic/musical compositions, motion pictures, photographs, works of art, drawings.

Registration data 1929-77: 25% of total nonrenewal registrations were for unpublished works. That’s a big deal! Understates importance because of the limit on classes of eligible works. 28% of all registrations were musical works, and 83% of those were unpublished works by 1977 though it took many years to get there. Similar story with drama registrations (88% of those were unpublished). For other classes it was less significant (except for lectures, at 100%); 46% of scientific drawings and 29% of photographs from 1954-1977 that were registered were unpublished.

Office turned down lots of requests to register unpublished books. There was also uncertainty about what constituted publication. And perpetual state law protection through broadcast, performance, and even potentially distributing phonorecords was a worry: owners could economically exploit their works in front of millions w/o a © bargain/any duration endpoint.

Considered alternatives: extend voluntary registration to all classes; make public dissemination, not publication, the dividing line (“communicating a work to the public visually or acoustically by any method and in any form”); or eliminate state protection for unpublished works and provide federal © from creation. There were various views. Learned Hand favored the middle alternative for undisseminated works, provided there was some time limit on state law protection. Concern w/infinite duration was supplemented w/concern that there was no fair use under state law (which no case ever held but there was speculation), and concerns about evading the compulsory mechanical license.

When they chose the final option, they did not apply any national origin rules; 105 was adjusted to extend to both published and unpublished works of the US gov’t. And of course duration rules had to change; added to the push for life+50 once dissemination could no longer serve as the universal starting point for a fixed term. Had to figure out what to do with pre-1972 sound recordings too; didn’t get taken up into federal © then b/c of larger lack of certainty about the topic.

Unchanged substantive law: © attached automatically on creation, w/o formalities, but on publication the work would enter the public domain unless formalities were complied with. Transfers also changed: divisible, writing required, recordation, subject to termination. Improved nonmonetary remedies. Every unpublished work by every person who died long enough ago now enters the public domain—the initial group in 2002 was the largest ever expansion of the public domain. All the statutory limits now apply to unpublished works. Statutory damages and attorneys’ fees for post-infringement registration too. The rights are 106 rights despite suggestions in common law that “any use” would infringe; idea/expression applies despite suggestion in English law that describing the Queen’s unpublished drawings would infringe; transferring the only copy would no longer transfer the ©.

There are still a lot of registrations of unpublished works: 38% of all registrations from 1978-2022. Mostly monographs, 27%; 65% of performing arts, 38% for visual arts; 64% for sound recordings. These registrations are no longer necessary but provide the advantages of registration.

Subsequent developments: clarified that fair use covered unpublished works; resolved split about whether sale of pre-1972 phonorecords was publication, for musical works first and then literary/dramatic works. Finally brought pre-1972 sound recordings into sui generis protection, closing the circle/finally removing the last bit of rubble from the 76 Act’s destruction of the wall between published/unpublished works—nothing left for common law copyright to protect in fixed works.

Marketa Trimble, UNLV William S. Boyd School of Law: Would we expect state law diversity? Legislative laboratory?

Some preempted, often after a long time: Cal resale royalties enacted 76, held preempted 2018. NY standardized testing act; PA Feature Motion Picture Fair Business Practices Law. State statutes protecting rights to unfixed performances ok. Also, gov’t edicts doctrine matters to state law—state can’t claim © of materials produced in course of duties of judges and legislatures.

State laws are very outdated. They typically list “copyrighting,” “causing to be copyrighted,” “acquiring copyright,” or “securing copyright” as distinct acts, rather than “registering.” Plenty of state laws assert “copyright” in state laws themselves, and in other types of works such as works developed by a county board of education (CA), data processing software created by gov’t agencies, and geological/topographic surveys of PA. Also an Arkansas history textbook; official insignia for MD farm products; highway maps of Ohio; and the Oregon Blue Book—insight into what legislators feel a need to claim.

Since Ga v. Public.Resource.Org, some states have eliminated their provisions (NY, MD, ME); Montana and OK already didn’t have them. VA authorizes release of all potentially ©able materials under a CC or Open Source Initiative license.

17 USC 1401 specifies that the person who has the exclusive right to reproduce a sound recording under the laws of any State as of the date before the date of enactment is the federal owner of pre-1972 sound recording rights. Recognizes that state laws varied on ownership. But what if the state laws conflict? There is no choice of law provision, and state laws do vary. Mostly it’s the label, whether framed that way in state decisions (CA) or statutes (AZ). WV has a different rule: label unless there’s no written contract, in which case it’s performers.

State law can also enhance protection of authors: Cal law purports to require generative AI developers to post a summary of the datasets used to develop a system, including sources/owners. MD bill prohibits inclusion in contract for state public art a requirement that artist waive moral rights under VARA. Another bill limits admissibility in criminal/juvenile proceedings of uses of “creative expression of a defendant or respondent.” Court is supposed to figure out whether expression was literal or fictional. Not admissible for mens rea, but could be used to decide on referral to mental health services/diversion programs.

Could protect users too, for example by preventing use of © as a means to block access to public records: a series of new state bills require, e.g., public access to “learning materials.” Digital lending of e-books: MD’s bill was invalidated as preempted, but CT adopted a new bill prohibiting libraries from entering into contracts or license agreements for ebooks and digital audiobooks that contain certain restrictions.

Some states have considered laws on what demand letters should look like—abuse of rights provisions. And NV has an act creating a requirement for law enforcement agencies to adopt written policies and procedures governing performance of © works by peace officers while on duty. (B/c of a bizarre technique used where cops blasted music in the false belief that this would cause videos of their behavior to be unpostable online.)

Generative AI legislation: some bills would assign ownership of model-generated content to the person who provides the input; NY’s AI transparency for journalism act creates a disclosure obligation for content accessed by AI developer crawlers and for the identity of the crawlers.

Last example: help enforcing copyright outside of the US. Wash, LA, Utah—not clear how they fare after recent SCt decisions.

Are state legislators becoming more ambitious? The public is becoming more attuned to ©.

Guy Rub, Temple University School of Law: Contracts—“breach of contract” was removed from the text of 301, which now needs interpretation. Leg. History says that “equivalent to ©” was intended to be “the clearest and most unequivocal language possible, so as to foreclose any conceivable misinterpretation.” Whoops.

What is so hard about “equivalent”? © is entangled w/state commercial law. Contracts can ignore subject matter, rights, and defenses. Also depends on view of purpose of ©: delicate balance of competing interests of society and authors, v. exclusive right for benefit of authors.

Most common litigated pattern is idea submission; others are B2B transactions. After that there’s a lot of variety—contracting around fair use is extremely rare and mostly limited to reverse engineering. Is a promise an extra element? It’s a formalistic test. Courts also say that not every technical difference will suffice—must create meaningful distinction. Has to be about the nature of the cause of action.

Two approaches: majority: contracts are bilateral and thus not equivalent to property rights; versus minority approach: no, contracts can’t regulate actions that are exclusive rights under ©. ProCD is the most famous majority-approach case, but not the first. Most appellate courts adopt it one way or another; only the 6th Circuit had explicitly rejected it, until recently.

Why so popular? It’s easy to apply and there are no great alternatives. But then the Second Circuit decided the Genius/Google case where Genius had browsewrap saying you can’t scrape; Google won. The contract limits reproduction/public display and is thus equivalent to © and preempted. When Genius sought cert, the SG argued that browsewrap was different (implicitly, not real consent).

Why does it matter? B/c of plenty of other attempts to limit scraping with browsewrap.

Conflict preemption might be the answer! Section 301(a) might ask the wrong question; look instead for interference w/the © system/obstacle to the goals of Congress. In re Jackson (2d Cir. 2020) (citing yours truly and Jennifer Rothman). Look at what the state is trying to promote: privacy or creativity/commercialization of information? Look at whether there’s harm to © law.

X Corp v. Bright Data (ND Cal 2024) found claims based on mining and sale of data are conflict preempted, b/c the interest is monetization, which is the same as ©; the harm is clear b/c it prevents users from commercializing their posts and b/c it circumvents fair use.

Conflict preemption is better than formalism: you can ask whether the contract was individually negotiated; you can ask about market power; you can ask about the purposes of the contract and of the alleged breacher.

Shyamkrishna Balganesh, Columbia Law School (Speaker and Moderator): Hot news had an outsized influence on 301. A misleading account continues to influence how courts talk about misappropriation to this day.

Misappropriation falls into disfavor after INS v. AP: effect of Brandeis’s dissent/Learned Hand’s refusal to expand or adopt the decision, along with Erie v. Tompkins’s rejection of general federal common law. But states either through statute or common law began to absorb it. Legislative history suggests that misappropriation is structurally different from © and would allow equity to go down new paths. But the enacted version of the Act doesn’t contain the list containing misappropriation referred to in the legislative history.

Register’s 1965 report saw misappropriation as “the virtual equivalent of ©.” So it proposed exclusions for contract, breach of trust, privacy, defamation, and deceptive trade practices, but not misappropriation. Then the Dep’t of Commerce intervened, in the form of a PTO rep, and said misappropriation was important b/c it allows courts to anticipate new areas for development and retain equitable flexibility. But Commerce hadn’t consulted w/other departments. “We have the Dep’t of State disagreeing w/everybody except on the manufacturing clause, and now we have the Dep’t of Commerce that takes a different view. Does anyone purport to speak for the administration?” A staffer asks the DOJ to weigh in, and the DOJ is firmly opposed to misappropriation: it would neutralize the logic behind preemption. Balganesh says the DOJ misrepresented misappropriation as creating antitrust concerns, simulating property rights, and being too vague. [Honestly I don’t see that as a misrepresentation!]

Striking the provision’s list in full then eliminates the record of the logic.

There is a fundamental mismatch b/t the House Report and the actual law. Different courts of appeal seeking to resurrect INS in the 80s and 90s unfortunately rely on the old House Report saying that “based on legislative history, it is generally agreed that hot news survives preemption.” NBA v. Motorola. Only corrected in 2011 in Flyonthewall, but it moved to other jurisdictions like Ohio in the meantime, without being corrected there.

If I’d had time, I had a Q for Trimble: what are your thoughts about criminal anti-camcording and anti-copying rules for people who don’t own the masters, including laws making it illegal not to put the name of the actual copy maker on the copies? OCGA § 16-8-60 prohibits transferring “any sounds or visual images … onto any other … article without the consent of the person who owns the master” and separately makes it unlawful to sell any article on which sounds or visual images have been transferred “unless such … article bears the actual name and address of the transferor of the sounds or visual images in a prominent place on its outside face or package.”


Panel 3: The Scope of Exclusive Rights and Modes of Enforcement

29th Annual BTLJ-BCLT Spring Symposium: Origins, Evolution, and Possible Futures of the 1976 Copyright Act

Erik Stallman, UC Berkeley Law (Moderator)

Christopher Sprigman, NYU Law: Restatement of ©: assumes perspective of common law court, attentive to and respectful of precedent but not bound by precedents that conflict w/the law as a whole. Reporter is supposed not just to go w/greater numbers of cases but with the better principle, with explanations.

Statute in many central provisions is far from clear or fully prescriptive—Congress left ample room for judicial interpretation: 102(b), 107, rules of secondary liability, standard by which infringement is judged: these are not peripheral, but Congress said next to nothing or even nothing about them. Courts have built an intricate architecture of common law doctrines around the skeleton of the statute over the past 50 years.

It’s ©’s hybrid nature, enabled by spaces left open by Congress, that has kept © vital through huge changes.

Restatement makes use of legislative history as a window on the meaning of statutory language, principally where courts have disagreed. The window can be clouded, and we approach the enterprise w/caution—not the same as rejecting legislative history. Fair use is a good example: commercial/nonprofit distinction is just one facet of analysis of purpose and character, inviting judicial development.

Fair use is “not an infringement of copyright.” What’s the burden? Legislative history: Statutory presumptions/burdens of proof are not justified—the intention was to allow the courts to make individualized decisions. The courts have uniformly held, w/o analysis, that the BOP on fair use is on the defendant. Seems inconsistent w/ statutory language and expectations. Personal view: Reconsideration of this is overdue. Fair use is a scope doctrine about the © owner’s rights in the first instance. It’s ordinarily, for good practical reasons, the defendant’s burden to raise the issue, just as with idea/expression. But the scope issue should then be decided by the court as a matter of law.

Claims about fair use’s tension w/derivative works have always seemed to him to prove too much. The © owner’s exclusive right to prepare derivative works is subject to fair use. Goldsmith: they aren’t mutually exclusive, but neither do they always overlap. To reconcile the statutory provisions, the Court held that the degree of transformation must go beyond that required to create a derivative work, noting that some transformations occur in purpose w/o physical alterations to the content of the work. Goldsmith: w/transformations that do alter content, to qualify as “transformative” for fair use, the D’s use must involve more than a distinguishable variation; it must rise to a level that distinguishes its purpose & character from the purpose & character of the original. Broadly correct.

Oren Bracha, University of Texas at Austin, School of Law: For 40 years, the infringement test has been eroded and diluted, emptied of most of its substantive content—most importantly, the central conception that supplied its coherence, which existed beforehand. That was the idea of substitution: to infringe, a defendant’s work had to expressively substitute for the plaintiff’s work. Of course there were hard cases, but this was an organizing principle that provided meaning.

Courts hollowed out the test from that notion of substitution, leaving us w/a nebulous, frictionless idea of substantial similarity that means very little. Once that happened, the test became unpredictable, arbitrary, etc. It is us, meaning the case law, that did this, and therein lies the problem.

Once that happened, additional unfortunate developments: (1) the very confused and unfortunately more widespread tendency of some courts to describe the infringement test as having an exception for de minimis uses. A confusion of relevant concepts. Once that happened, other courts very quickly understood this as a criterion of unrecognizability: no substantial similarity when it’s de minimis, and it’s de minimis when one can’t recognize the P’s work—a race to the bottom. (2) Alternative infringement tests develop: the quality/quantity 2d Circuit test that applies when the regular test is inapt, that is, when it doesn’t produce the outcome “infringement.”

After we hollow out the meaning of the infringement test, what steps into the vacuum is the fair use doctrine. The idea is simple: if infringement means very little/subjective, you don’t need to worry about it, b/c we can always fix it w/fair use. Don’t even have to do the infringement test! The back end becomes the front end. Warhol v. Goldsmith: substantial similarity became a footnote—ignored by the dct, a sentence at the 2d Circuit, abandoned at the Supreme Court.

Not an enemy of fair use, but this is abnormal and ungood. We’re putting too much burden on the too-narrow shoulders of fair use. We’ve shifted a lot of the burden of scope analysis to fair use. Basic mismatch between its concept and the burden we’re making it bear—at the end it won’t work very well.

Relatedly, courts ignore the difference b/t reproduction & derivative works and don’t bother to tell us which is which. Derivative works is a freewheeling concept—any secondary valuable use of the work—so the boundary is unclear.

Fixes: meaningful infringement test. His would be along the lines of expressive substitution. Once we’ve done that but only once we’ve done that, we should cut fair use down to size. And we should fix the derivative work/reproduction situation: the derivative work right should be a right of making adaptations, not a freewheeling boundaryless idea.

Justin Hughes, Loyola Law School: Contributory liability after Cox: What the hell?

Assumptions inherited from 1909 Act & FRCP—many secondarily liable parties should only be liable for damages. That has produced a major divergence b/t US and other developed economies w/sophisticated © schemes. W/exposure to damages in mind, there were 2 distinct branches of secondary liability—vicarious & contributory. 2d Circuit’s 1963 Shapiro Bernstein case crystallized the vicarious standards outside the employer/employee context: right and ability to supervise plus obvious and direct financial interest in the exploitation of copyrighted materials, even in the absence of knowledge. The House Report added “indirect” to its description of vicarious liability. Meanwhile, contributory liability came from a case about a preparer of a motion picture held liable for the exhibitors’ public performances.

Cox v. Sony: ignored previous case law; contributory liability requires intent that can be shown only by inducement or that the provided service is tailored to the infringement. [FWIW I think that a court could easily find that continuing to host a particular piece of content is tailored to the infringement once there’s been notice of a claim. I think it’s fundamentally different to deal with a series of possibly continuing infringements versus one ongoing infringement.]

Legislative history: used “to authorize” in 106 to avoid any questions about liability of contributory infringers. For example: A person who lawfully acquires an authorized copy of a motion picture could infringe if they engage in the business of renting it to others for unauthorized public performance.

Litman is right: statutory damages for a single infringement are not palatable as applied to contributory infringers, and that’s a problem for our system. Big mistake either in the Act or in our understanding of it. Other systems permit injunctions against third parties w/o holding them financially responsible.

Fascinated by SCt’s obsession w/making patent & © into kissing cousins. But what about TM and Inwood? [I think it’s bigger than that—the Court wants a trans-substantive rule about equitable doctrines including contributory liability, which is why everyone in Cox was citing Taamneh.]

Laura Heymann, William & Mary Law School: Dividing lines b/t doctrines: Dastar and Star Athletica both don’t give great guides to distinguishing ©/TM and ©/design patent respectively. Maybe that’s the result of unusual facts. Wal-Mart specifically invited rightsowners to use © and design protection while building up secondary meaning required for trade dress rights. Doesn’t favor election, but use of doctrines on back end to deal with end-runs around limits on other rights and use of remedies, such as the apparently revived interest in disclaimers.

Congress could also try a more intentional positive description of the public domain to tell us how that interacts w/other IP doctrines. Another way: Court’s own reasoning when it borrows from patent law. Is Court looking at purposes behind the doctrines it borrows from? We call the field “IP” and try to generate unifying themes, but not clear that Court or Congress keeps those in mind. Is “staple article of commerce” a phrase used to indicate a core doctrinal concept or just a convenient borrowing? Sony v. Universal took pains to distinguish Inwood, rejecting kinship b/t © and TM. Blackmun’s dissent disagreed with the comparison to patent. Path dependence: that borrowing in Sony now goes unexamined in Cox. Not every member of the Court is committed to the project of legal explanation. [That’s one way to say it!]

Q: re burden of proof.

Sprigman: Law is clearly established until it isn’t. We have to prepare the world for what might come next. Our Court is not respectful of precedent and very bound by text. They’re unashamed to disrupt settled expectations [of certain kinds].

Bracha: it would be an improvement; fair use started life as part of the infringement test. It still wouldn’t be a good fit to have fair use as the central mediator of scope.

Discussion of Cox/AI. Heymann points out that questions of who is responsible for infringing outputs will be key, and the Court’s opinion isn’t helpful in categorizing responsibility for direct infringement.

Sprigman: © should stay in its lane (even if Congress is dysfunctional); we’re testing the limits of what courts can do. Maybe labor law, products liability, tax law are more important for AI.

Stallman: maybe patent & © were more convergent before 1976 Act which gave protection on fixation; before that both regimes were oriented around disclosure.

Pam Samuelson: we’re in a bit of a muddle b/c inducement is a separate doctrine in patent law but the Court said, in order to rule as it did in Grokster, that inducement was part of contributory infringement.

Cathy Gellis: Secondary liability is related to the architecture of the internet, where we depend heavily on intermediaries for speech—the First Amendment is therefore quite relevant. Fear of secondary liability has important deterrent effects. If that’s right about underlying concerns, a switch to TM will not change the dynamics.


Panel 2: The Role of the Author and the Acquisition and Duration of Their Rights

29th Annual BTLJ-BCLT Spring Symposium: Origins, Evolution, and Possible Futures of the 1976 Copyright Act


Molly Van Houweling, UC Berkeley Law (Moderator)

Tyler Ochoa, Santa Clara School of Law: why do we have formalities? Path dependence is one big explanation for registration, notice, deposit. If © is designed to incentivize exploitation, then formalities make sense—if you’re willing to create the work regardless, no point in ©; you should do something to claim that © was important to the creation (or distribution). Utilitarian view expressed in 1909 Act. Deposit and registration required for lawsuit in 1909, not pre-publication as in early republic. Domestic manufacturing clause.

1976 Act didn’t make huge changes in its initial form: notice was required for all copies published anywhere in the world, not just in the US, to avoid the public domain—an expansion of the notice rule. UCC: notice substituted for other formalities. Failure to affix proper notice placed the work in the PD, subject to a cure provision. Manufacturing clause was kept, but sunsetted. Deposit/registration required to sue. Biggest single change in the 1976 Act was duration. 85% of registered works went into the public domain after 28 years, though musical works and motion pictures were heavily renewed (about 2/3). Books: 7%.

No works that received the life+50 term under the 76 Act have expired. The only works to enter the PD under the Act are pre-76 works that hadn’t yet been published or registered and were still unpublished as of 2002. And there’s still 45 years to go b/c of term extension.

Post-76 changes get rid of most of the formalities that were preserved. BCIA: mandatory notice eliminated Mar. 1, 1989. Manufacturing clause expired; registration no longer required for foreign works to sue, though statutory damages/attorneys’ fees still require registration—but there’s no technical violation of Berne b/c neither of those is required by Berne. VARA gives us something couched in the language of moral rights for the first time, but it requires a single/limited edition (requires it to be signed/consecutively numbered, which is a formality). Automatic renewal; copyright restoration; CTEA term extension.

David Nimmer, UCLA School of Law: we have never known which works will still be popular several decades from now. Congress wanted to eliminate the Fred Fisher doctrine allowing assignment in advance of renewal rights. But it wanted to handle contributions to collective/joint works. That required more drafting.

How did that work? Winnie the Pooh termination: The current owner did a rescission and regrant, all in one transaction. Nimmer argued that, before they got into that room, there was a termination right, and the agreement was an “agreement to the contrary” that was ineffective to override that termination right. He lost (but thinks he was right, and I definitely see his point). Congress could have allowed this, but the statute is categorical (w/the slight exception of renegotiating with the current owner once the termination notice has been sent). It’s not that he didn’t have authority to sign a contract. But the Supremacy Clause says that termination may be effected notwithstanding any agreement to the contrary. Fred Fisher has been resurrected.

What should we do? All we need are 2 changes: (1) voluntary nature of termination/all of the hoops to jump through make it practically impossible without counsel, and not easy even then. Termination should become automatic. (2) Congress should re-pass the provision about “any agreement” and say it really means it.

Robert Brauneis, GW Law: When does creative work get recognized as authorship? Most obvious exclusion doctrine is WFH; direction of 76 Act is really complicated there. It could be read as author-friendly only in relationship to the “instance and expense” test developed after the grand bargain was penned.

Other doctrines about recognizing creative work as central: fixation; derivative work authorship; joint/co-authorship. Under the 1909 Act, a phonorecord wasn’t a fixation b/c not humanly perceptible, and a choreographic work could be fixed only in a visual record. The Office then adopted that requirement for deposit—registration had to be by visual notation, and that forms the boundaries of the registered work. Many composers who don’t notate end up never being recognized as authors of the musical works they composed, especially in blues and folk genres.

That changes in terms of using recordings for fixation. But recordings are easier to pass back and forth b/t coauthors; it’s still possible for composers to lose out, but the average number of composers credited has grown a lot—Glynn Lunney says it’s more than doubled in a few decades; Emma Perot said it starts taking off in the 1990s and picks up speed. Possibly some is performers added as a condition of performance; others added because of fears of liability; sampling may also have added composers. Lunney suggests that each songwriter is becoming less productive.

More optimistic possibility: when he investigated “A Little Bird Told Me” from 1949, there was a composer of basic melody/lyrics, there was a lot of collaboration—singer Paula Watson and backup singers went over to his house and worked up new lyrics, a new bassline, a new arrangement & hummed introduction. But the “composer” was singly credited, leaving nothing but session payments for the other collaborators. Today, the others who contributed might have gotten a composition credit. A vanishingly small percentage of revenues today comes from sheet music; the authorship of the recorded version is much more collaborative; the rise in credits may not be less productive songwriters or overreaching of performers, but at least partly that more of the creative contributors are being recognized as authors, and the musical work recognized is “thicker” in that it contains more of the elements that make the work a hit.

1978 is when the legal change of the Act takes effect, and then there’s a change in form of deposit: by the 80s, 80% of applications are accompanied by phonorecord deposits. [What an interesting story!]

Copyright in unauthorized derivative works: the creators are doing something that authors do, but may not be recognized as authors. Protection doesn’t extend to unlawfully used material, and arrangements made for cover version can’t be ©d without permission of original © owner. Melville Nimmer argued that the first provision was inherited from 1909 Act; current edition of treatise says that the statute was ambiguous and the decisions contradictory, and Silbey/Samuelson argue that the text didn’t provide for forfeiture of © in newly added content, making the 76 Act an innovation.

Joint work: Intent to combine is required; designed to overrule precedents allowing music publishers to create joint works by combining music and lyrics written independently, but read much more broadly. Comment in legislative materials: Desirable to reduce as far as possible the situations in which a work is a joint work. Courts seem to have taken that to heart, including both in the 9th and 2nd circuits. Resulting problem: dominant creator gets sole ownership when other creators were consciously and intentionally creatively involved—denying authorship status to the creative work that authors do. Litman: courts erase the contributions of “inconvenient” co-creators.

Is the 76 Act to blame, or the courts resisting something outlined in the statute? More of the blame is on judges than on the language in his view.

Peter DiCola, Northwestern Pritzker School of Law: Reforms of the Act did not, and could not, meaningfully help authors given what else was about to happen, that is, consolidation, here in the music industry. Most conversations he has, people think he’s talking about Taylor Swift. But he thinks of the bell curve: a distribution of musicians—Swift is not representative. The industry has literally made sure that some things she’s done will never be possible for any other musician again.

Market demand determines copyright payout. But market demand is also shaped by concentration among companies that deliver content; in the US 4 have 97% of market share in streaming music. 3 major publishers and 3 labels, down from 7 even in past decades. Independent sector claims are often misleading (Spotify claims to send a lot of revenue to them) b/c an artist’s vanity label can be claimed as an indie but distributed by a major. And they’re parts of larger conglomerates. Composers and songwriters/recording artists face very large entities with lots of bargaining power. Recording contracts have become more structurally exploitative—shift to 360 deals where labels “participate” in revenue of artist in other endeavors like tours & t-shirts. Labels have moved to contractual agreement against re-recording, so Swift will be the last unless we restrict those contracts.

Biggest story now: consolidation of entities that retail or deliver music. Wal-Mart used its power to control both pricing and content. Now big companies are negotiating with big companies: oligopolies selling through oligopolies. That’s why Spotify, YouTube, Apple and Amazon are subject to increasing scrutiny and discontent by musicians. © can only deliver economic benefits based on the structure of the markets into which authors sell. Demand isn’t enough: the market structure is categorically different now from the market structure in 1976: twin oligopolies; tech companies may be willing to sell music at a loss to keep people on the platforms, which hasn’t happened before; the Act wasn’t designed w/that kind of market in mind.

Along with antitrust, you could have more default rules prohibiting contracting around. Draft legislation: allowing sectoral bargaining for musicians against streaming companies. Could create authors’ rights to access data about how their works are exploited; transparency in accounting practices.

Biggest new hole in authors’ rights: Spotify’s policy starting 2023 is that they don’t pay royalties on tracks that get less than 1000 streams in the last 12 months. Perlmutter referred to streaming as a huge success compared to the litigation against filesharing, but now we see what happens: having music on Spotify means agreeing that your less successful tracks won’t get any royalties. How do we know what’s happened to the © system? It’s opaque! Don’t take Spotify’s word for it.

Q: are these mostly music-specific?

Brauneis: On coauthorship, there are field-specific practices and many are affected by the law/not in line with the caselaw.

Nimmer: ProCD v. Zeidenberg was bad—circumvented © law; contracts prohibiting soundalikes are systematically trying to defeat the right to make soundalikes, and we should also be hostile to them.

DiCola: we could elevate the negative space of 114(b) into the right of the public.

Ginsburg: who deserves the blame for the exclusion of inconvenient co-authors? It’s the judges! The statute only says intent to merge contributions, not intent to be co-authors. The “mastermind” reasoning is nonsensical! Doctrinally, this is wrong, but it does have the merit of getting rid of the inconvenient co-contributors. If you apply the statute as written, how do you decide who is enough of a contributor to be an author? Should ideas be enough? [I think editing is the classic difficult case unless you bring in reasoning about the social meaning of various roles, and that just helps you with the editor, not with the dramaturg in the next case.]

Brauneis: the courts don’t feel competent to modify the rule that co-authors get equal shares to allocate ownership, so they need to find a rule to prevent people like Jefri Aalmuhammed from being authors. We would have to confront unequal shares—or say that if you didn’t plan for the situation you have to live with the default rule. Both Aalmuhammed & the Second Circuit case are about motion picture companies refusing to do things that they should have done (getting WFH agreements in place).

Ochoa: could have said that everyone in the credits is a co-author, but everyone else signed a WFH agreement, so Aalmuhammed only gets a 1/1000 share, but it made no sense for the Second Circuit to say that the director isn’t a coauthor.


29th Annual BTLJ-BCLT Spring Symposium: Origins, Evolution, and Possible Futures of the 1976 Copyright Act: Origins of the Copyright Act


[apologies—seriously delayed flight means my notetaking will be bad.]

Panel 1: Origins of the 1976 Copyright Act

Peter Menell, UC Berkeley Law (Speaker and Moderator): Copyright revision was extensive process of negotiation, occurring alongside movement for racial equality. Some view this as a product of back rooms (shows a picture with only white guys in it). 60s and 70s weren’t the kind of “swamp” we have now. Many studies on historical, philosophical, comparative, economic, and other aspects of ©. Recommendations for broad rights in order to deal with the possibility that “a particular use which may seem to have little or no economic impact on the author’s rights today can assume tremendous importance in times to come.” Importantly, removed general limits on nonprofit uses despite concerns of librarians and educators.

Hollywood didn’t “run the table”—there was some movement to protect authors. Work made for hire renegotiated, and termination of transfers made inalienable, but not a huge shift in favor of authors. What about device manufacturers, users, and consumers? Complex interactions, during which computer software emerged. Not overbearing powerful interest groups running the show so much as carefully constructed balances that didn’t substantively favor content owners as much as we might think—no public performance right for sound recordings, because user groups—radio stations and cover artists—got the better of them. Likewise, the glass is half full for device manufacturers, libraries, scholars and teachers on fair use. Scientific publishers were able to block more generous photocopying rules. Jukebox manufacturers got a favorable rule on device royalties—a dying industry, which was part of their argument. 601: printers/organized labor got the manufacturing clause, though it sunsetted. And on cable, cable operators did pretty well. [This is pretty much Jessica Litman’s account in Digital Copyright, framed differently: interest groups that were organized and showed up got exceptions for themselves, but the “balance” was only for those groups; if they didn’t show up then their rights/interests weren’t considered.]

1976 Act was built for a gatekeeper/clearance ecosystem that was quickly disrupted by the Betamax and then the internet. Sometimes the system worked, sometimes not.

Jessica Litman, Michigan Law: Menell describes a continuous sequence of Copyright Office research and review, and then 9 years in Congress during which its proposal mostly survives. She sees more discontinuity. Office sought to do initial drafting relatively insulated from pressure from copyright bar and came up with what it believed would be wise; the © bar hated it and the Office pushed the restart button, encouraging negotiation between groups on the substance and in many instances the language of the revised proposal. The Office kept pretty good control of the drafting pen most of the time during the initial years, and the records of that process, including meetings hosted by LoC, are extremely useful for figuring out what the language was supposed to mean at the time.

Beyond academic interest, what does the intended meaning have to do with the meaning now? There are lots of reasons why that intended meaning might not be much help. Legislative history was seen in the 60s and 70s, and even the 80s, as a crucial statutory interpretation tool, and now that’s not the case. Even ignoring that: Congress has made more than 70 amendments over the past 50 years; the crafters of those amendments paid little heed to what the earlier language meant/was intended to mean. And the malleable meaning of particular words is important: “copy” means something different in 2026 than in 1966.

What we learn about original intended meaning of the words is of limited use if we’re trying to figure out what the statute means today. These explorations are useful especially for illustrating legislative process, and the history/structure of music, publishing, and consumer electronic businesses as well as teaching in schools and churches, but they don’t necessarily yield citable authority on what the statute “really” means.

Argument: the drafting process is hostile to outsiders who weren’t invited (sometimes b/c they didn’t exist yet). More recently interested in what happened among insiders and whether they got what they wanted. Assumptions about how the law and the world worked that didn’t necessarily hold: future-proofing did require assumptions. There were assumptions about how the parties would treat each other going forward; broken expectations can radically change the balance of what the insiders believed they were agreeing to do.

There were assumptions about law & world that didn’t prove out. They assumed the future would not be sharply distinct—didn’t anticipate breathtaking consolidation of entertainment industry; assumed that computers would continue to be niche devices used by scientists/students; didn’t anticipate consequences of networked computing. Even though they talked about the terrible danger of unlicensed personal uses, they figured they’d be a continuing annoyance rather than existential threat. Thus they didn’t have a statutory instruction on contributory infringement. One reason Sony came out the way it did was the prospect of making Sony pay statutory damages for every third-party infringement.

Unanticipated change in law: the definition of WFH and the details of terminations of transfer were settled in April 1965 as the result of hard bargaining among book publishers, music publishers, songwriters, and author groups. At the time, courts didn’t treat independent contractors’ works as WFH; creation of the work was subject to an implied agreement to transfer © to the commissioning party. Didn’t need a signed writing b/c federal © had not yet attached on creation. The only significance was that after the original 28-year term, the creator could apply for renewal; but since most works weren’t renewed, this wasn’t all that important. The grand compromise on termination and the WFH definition was negotiated against this background, adding categories of WFH made by independent contractors to the definition. They insisted that any change by Congress would require renegotiation from the ground up, so very little changed.

But then the 9th and 2nd Circuits decided cases about independent contractors and found their works should be WFH, confusing or conflating the line of cases holding that employers were the legal authors w/the line of cases holding that independent contractors implicitly agreed to copyright transfers. Implicitly, publishers promised that if authors gave an initial period of exclusivity and waited and jumped through all the hoops, they’d get their rights back. But the promises made in negotiations were never binding or enforceable—no legal authority to bind publishing companies years later. So if the statutory language was susceptible of more than one interpretation, it’s hard to blame a publisher for exploiting that ambiguity. So they did! When served a termination notice, publishing companies asserted that they could keep collecting royalties for all previous versions of a song using the derivative works exception to termination. And the Supreme Court agreed. Publishers then argued, against a proposed amendment, that the statute had created vested rights from the beginning that couldn’t constitutionally be taken.

It’s hard to think about moral enforceability when the individuals who need to keep the promises are different from the individuals who made the promises. In practice, an author who wants to exercise her termination rights has to jump through hoops and then be prepared to engage in litigation.

Other examples of broken promises exist. What’s the point? We can find out how various interests believed the statute would work, and compare that to our current world, but it’s hard to leverage our understanding of intended application to persuade courts to agree with it today given all the amendments and broken promises.

Useful process lessons: Even when it was new, the 76 Act looked better on paper than it turned out to work in the world. Latent assumptions/promises were weak points that could be and were exploited. Authors aren’t doing well: earning less and having fewer choices than they once had. It may be that the late 1970s/early 1980s was a halcyon era, but things have changed. If we want authors to have a stronger hand to play, that’s a really difficult problem. The EU takes this seriously and has made unsuccessful efforts in that direction; it’s definitely not a fix-it-and-forget-it problem.

The negotiations that gave rise to the revision bill were intense but respectful—trying together to build a workable © act even as they sought advantage. Hard to recognize that world now in the viper pit that is the current © bar.

It turns out to be really important who the Register is. True even though we don’t have much control over who that will be. Barbara Ringer: it’s important for the Register to be able to stand up to all the copyright lawyers for the various interests.

Jane Ginsburg, Columbia Law School

Influence of international law: American exceptionalism was the pre-history, but international norms pervaded the drafting of the 76 Act—Barbara Ringer was very proud of this—shifting from publisher to author as focus.

Universal Copyright Convention/formalities. We made our own international order allowing our 28-year initial term and formalities, but agreeing to reciprocal protection and restricting the manufacturing clause so it didn’t preclude copyright for foreign authors’ works. Remaining outside Berne was annoying, though.

Comparative law/duration/formalities. Each step made it easier to move toward compliance with the broader Berne regime. UCC revised: registration, deposit, and domestic manufacture were no longer required as long as the foreign proprietor complied w/a simplified notice requirement—a two-tier system allowed draconian formalities at home so long as foreign works had the easier system. But that’s unstable: why restrict domestic authors that way while the burden on foreign authors was much lighter? The rule of the shorter term also created grumbling: foreign reciprocity was limited by our shorter term.

Misunderstanding about foreign law might have affected WFH status. The claim was that movie studios needed the status of author to own all rights abroad. This assumed that foreign nations would accept this divestment of individuals; but France’s highest court spurned our characterization of film producers as authors and gave indefeasible rights to directors against colorization.

Persistence of American exceptionalism: jukebox exemption; mechanical rights; moral rights; formalities; works made for hire.

Menell: how do we incorporate the interests of people who weren’t in the room b/c their industries didn’t exist? And how do we have a rule of law without a rule of interpretation? Judges who aren’t experts will struggle w/©.

Litman: in the 70s and 80s, judges found legislative history useful. To the extent it wasn’t read by the legislators voting, it’s a bit odd to consider it, but we now have a statute that very clearly wasn’t written by legislators—maybe that’s the result of delegation to the Office; doesn’t matter for this purpose. Judges today figure that their job is to look at the text and judicial decisions construing the text. That’s a rule of interpretation; she may not think it’s the wisest rule, but that’s up for debate.

Ginsburg: The rule was that you looked at legislative history when the statute was unclear; the statutory text was never irrelevant! Even aware of its corruptibility/process issues, the text isn’t always clear and judges need help; one place to find that help is the rich legislative history of the Act, possibly less corrupt than some others. But the legislative history of subsequent amendments might be a bit dodgy.

Menell: even strict textualists say that they interpret the statute as of the time the words were put into law, so understanding context remains important. Cox will cause more confusion, but that goes to how our democracy is evolving. Strict textualism is being used by judges who weren’t that sympathetic to the project in the first place.

Tyler Ochoa: © is not the best tool to make up for the near-complete lack of enforcement of antitrust law in the entertainment industry. What were the assumptions about competition and antitrust in the 70s?

Litman: we had many record labels, publishers, studios competing with each other. Black was on the Court and antitrust was almost a constitutional imperative. That’s all changed, so the power dynamics aren’t what they expected.

Menell: network economic theory wasn’t yet developed. Concentration today has benefits to consumers through network effects that weren’t perceived at the time: Spotify is concentrated but has big benefits to consumers.

Litman (in response to Q): International compliance is an excuse. If we’d wanted harmonization, we’d have reduced the term of WFH rather than doing what we did.

Ginsburg: it’s true that the rule of the shorter term kept the 20 years from US authors in Europe, but Litman is right that our term for WFH (75 years from publication) was already as long as the extended term in Europe. There were plausible int’l trade reasons for the extension, but zero copyright reasons.