Thursday, August 06, 2009

IPSC, first plenary session

IPSC

Abstracts and some drafts available at the website. As usual, an embarrassment of riches. I will miss plenty of stuff I’d like to see. And more standard disclaimers: I don’t do patents; my notes are idiosyncratic; these are all works in progress.

First Plenary Session

Thomas R. Lee – An Empirical and Consumer Psychology Analysis of Trademark Distinctiveness

Lee stands as a trademark expansionist: a traditional limiting doctrine ought to be abandoned, allowing protection of descriptive word marks so long as they’re presented in the context of a TM use. Consumer psychology is taken to support trademark restrictionist ends—McKenna: most of TM’s limitations come from premises about consumer understanding; Tushnet: cognitive science is attractive to TM law because it provides a psychological basis for it.

Some restrictionists saw psychology as a way to abandon intuition and stereotypes. More recently, restrictionists are retreating from the lovefest: we shouldn’t be so naïve as to assume that the science of consumer psychology will generate clear answers to such complex questions. But empirical uncertainty is no reason to abandon research—the instability of the law ought to motivate our attempts to understand the mind of the consumer.

His research looks at the Abercrombie spectrum and in particular the role of descriptiveness. Wal-Mart: consumers are inherently predisposed to see an inherently distinctive term/symbol as a mark; it’s not reasonable to assume that consumers perceive a descriptive term as a mark. But how do judges know how consumers react to complicated stimuli such as a package?

In practice, the distinction between descriptiveness and suggestiveness is vague and arbitrary: “Chicken of the Sea” for tuna; LA for low-alcohol beer; Pizza Rolls for snacks—depending on where you live, these are either descriptive or suggestive terms.

Thesis: descriptive marks, when presented in “trademark use” context, are inherently distinctive—likely to be perceived as source indicators. Law overvalues lexical meaning and undervalues semiotic meaning. People have “trademark” schemas that are based on location, size, typeface, and other non-lexical cues. What Lee calls a “trademark spot”: the place where the TM goes on a package.

Lee’s empirical work: he looked at consumer perceptions of packages with descriptive terms and with generic/suggestive/arbitrary terms put in the same place on all the packages. So: Fudge Covered Cookies tested for cookies, along with Chocolate Abundance, Celebrate, Map, Coriren, and other terms. Results: only the generic term was perceived as non-source-indicating. There was no statistical difference among the other marks, whether they described ingredients or were laudatory or fanciful or anything else. Consumer perception is affected by non-lexical cues.

Second study looked at the descriptive/suggestive distinction, using similar methods. Again, no statistically significant differences: all kinds were highly likely to be perceived by consumers as source indicators. Third study tried to identify a point at which the non-lexical cues might tip study participants to perceive a source indicator v. a descriptor. Placement, font, size, and other non-lexical cues about the terms: results show that these non-lexical cues matter, but the use has to be very tiny before consumers stop perceiving terms as source-indicating. (Note that the study didn’t seem to include a second, obvious TM, which may change the calculus—consumers probably do look for a TM, and I expect you’d get different results if the package already had “Duncan Hines” on it.)

Implications: we ought to look at trademark use, which can be evaluated based on rules of thumb, instead of the descriptive/distinctive line. Simpler, less administratively costly. If we believe the basic story of TM: greater protection against consumer confusion; greater protection of producer goodwill.

Principal objection: competitive need. He thinks there are answers to this. If you do think that’s driving the law, then we should make it the focus of the doctrine: instead of dividing between inherently and non-inherently distinctive marks, we should ask the competitive need question directly: are these terms/marks essential to competition? That would look very different from the “descriptive” category—many marks like “wonderful” are not likely to be competitively essential. Also, this is an occasional problem that doesn’t justify a blanket solution. It should be like genericide: when necessary, available to competitors.

Mark Lemley: Is this a challenge to the framework, or can it be used within the framework? If Abercrombie sets up a framework of presumptions, maybe this shows it should be easy to show secondary meaning. Second: he’s interested in the finding that an unquestionably generic term still gets a 26% source-significance response from consumers—does that have implications for how much recognition we should require in order to find secondary meaning?

A: He’s working on the first question. His main reaction on point 2: there are a lot of stupid people out there; we need to net out some baseline level of reaction. That’s a better question than throwing up our hands.

Lisa Ramsey: Free speech is a separate concern/cost. What about slogans? “Fair and balanced,” etc. Are those perceived as trademarks? Would your theory apply to services?

Lee: He hasn’t tested it. (This is related to my question of how ‘secondary’ marks are perceived on an already-branded package.) The question also comes up when the mark is used aurally.

Q: What exactly are you testing?

Lee: The package, not the word.

Q: So did you vary the images? If you have Sun-Tost on a jar of jam, you’re making a marketing association of words and image—is there a confounding factor with the images?

Lee: We did use the same image for all of the cookies, and likewise for the other products tested. He’s not trying to say anything about the Abercrombie spectrum with respect to likely confusion: it may well be that descriptive marks are weaker when it comes to competing uses/likely confusion.

Comment: This is where the gap between empirics and doctrine shows. Even if all this is true, the reasons we care about leaving descriptive marks open for others to use are important—what happens to ITUs under this scheme? If we allow initial appropriation of a descriptive term by “trademark use,” what happens when someone uses the same descriptive term in the text of a keyword-triggered ad? If we accept this thesis, we need to reconfigure the doctrine so that uses that aren’t on a package aren’t “trademark uses”—which will make things very interesting for the trademark use debate, which in some ways overlaps with this thesis and in some ways runs quite contrary to it.

Molly Shaffer Van Houweling – Author Autonomy and Atomism in Copyright Law

Users empowered by tech may be copyright casualties. Concern: a misfit between copyright law made for sophisticated large entities and new creators with limited legal expertise: problems of complexity and expense.

She is interested in newly empowered creators as copyright owners: they don’t want to assign their copyrights—publishing themselves and retaining control over the copyright—CC, Columbia’s “Keep Your Copyrights” project for scholars and others, musicians who won’t sign with record companies.

She’s nervous about this: copyright might come too easily, with creators making stuff that’s automatically copyrighted and then retained by people all over the world. This adds a lot of complexity to the copyright environment: copyright atomism. Lots of people, worldwide, holding little pieces/microworks really complicates the copyright landscape. Hypothetical: Wikipedia, an amalgamation of contributions from lots of individual copyright owners. Even contributors of minor changes might be copyright owners. Wikipedia’s license allows many uses, but not all: imagine you want to go beyond the license—imagine the difficulty of negotiating with the multiple anonymous owners to do so.

Copyright doesn’t have to be atomistic, held by lots of individual copyright owners. Example: the Star Wars mashup contest, inviting people to remix, but individual creators were not then atomistic owners, because the terms of service said that Lucasfilm would have exclusive rights to everything, solving some of the problems of atomism—if you want to mash up the mashups, you don’t have to seek permission from all the owners.

(Boy, fair use is starting to look really good right now.)

This solution hasn’t been all that popular: Lessig called it “digital sharecropping.”

Problems of atomism have a historical pedigree; the project explores that history: atomism and the responses to it. (Sounds a lot like Carol Rose’s Crystals and Mud, the single best property article I’ve ever read.) Responses conflict with authorial autonomy; the conflict is especially apparent in the digital age, where unfairness seems to abound but atomism is a problematic alternative.

Atomism’s dimensions: (1) Proliferation—how many works are subject to copyright ownership? A function of the number of works created and of protection requirements—the elimination of formalities makes a huge difference. (2) Distribution: are the works owned by only one entity, as with Lucasfilm, or are they widely distributed among lots of people, raising transaction costs? (3) Fragmentation: as to any given work, into how many parts is it divided? Turns on questions like: what is the minimum size of creativity that qualifies for copyright protection—microworks. Also on how customized and idiosyncratic the sticks in the copyright bundle are.

Stationers’ Company era: a state-sanctioned printing monopoly in connection with the Licensing Act. Rights were allocated to a limited pool of publishers, who traded with each other and fragmented the rights in interesting ways. Not atomized, because on the distribution score it was very limited—they were a small group of guys in London, and information costs were not high. Atomism was controlled by consolidation, which was itself a threat both to competition and to freedom of speech.

Next era: Statute of Anne/18th-century England. Important innovation: initial copyrights were allocated to authors, who could subsequently assign them to anyone. In practice, ownership was redistributed through private ordering to the same cozy club of Stationers. One lesson: depending on conditions on the ground, law may not change atomism. Eventually, authors started to retain some rights, chipping away at consolidation.

US: Ownership initially in authors, but generally assigned to publishers. But the publishers weren’t as consolidated as the Stationers. Also, rise of new types of highly collaborative works like encyclopedias.

Anxiety about atomism produced changes: the work for hire doctrine; joint works subject to undivided rights; the “indivisibility” doctrine; the creation of private groups, like ASCAP, to aggregate rights.

Authorial autonomy backlash in the next major amendments: work for hire and joint work doctrines limited, indivisibility abandoned, formalities gradually eliminated, and a termination-of-transfer right created (temporal fragmentation).

Internet age: proliferation of subject matter, no formalities. Distribution: ubiquitous authorship and retention of copyrights. Fragmentation: massive collaboration outside of the work-for-hire context.

What does history reveal? This problem isn’t new; there’s a recurring tension between autonomy and atomism. Law doesn’t always have an impact on the ground. Private ordering and institutions are important for managing information and transaction costs. A strong norm of authorial autonomy may limit which types of legal and institutional solutions are available, so we have to look for solutions that won’t generate that (digital sharecropper) backlash.

So: instead of consolidation, coordination. Public licenses used by individual users—but that creates the problem of license proliferation/incompatibility. So we need some further coordination: license standardization.

Registration/notice as a solution—more information about license terms and owners could reduce confusion and transaction costs, if we deal with the information costs of atomism itself rather than trying to fix atomism. Need technological tools to make registration/notice requirements less onerous.

History is cautionary: hard to balance competition, free expression, and authorial autonomy with desire to avoid atomism.

My thoughts: I found the concept of atomism a little underspecified at the outset: I need more justification for treating the distribution of rights as a concern. Why is individual ownership of a book atomistic, compared to individual ownership of a house? In other words, for what purposes are books problematic units, and under what circumstances do we want it to be easy to collect the rights for the class of “books”? Probably not for the purpose of making films of them.

A: These problems come up in the tangible property context. Heller would say that atomistic ownership of a house is ok, but not atomistic ownership of a shingle.

Me: Yeah, but you also say that atomistic ownership of a book is a problem (that’s the point of the Stationers discussion, right?).

A: Yes, there are distinctions to be made.

Jim Gibson: Proliferation and formalities are hugely linked: bring back formalities, and many of these problems disappear entirely—you get rid of a huge swathe of copyright owners at the outset.

A: That’s right. But what if we made it so easy to comply with formalities that they didn’t decrease the number of owners? Even so, formalities would help fix information problems.

Q: Free Software Foundation projects require assignment of copyright, and aren’t being accused of being digital sharecroppers—why not? Because their interests are aligned with those of individual contributors. And they’ve given assurances—e.g. in bylaws—about how the rights will be used. Good solution?

A: Yes. Part of the motivation for the paper was the “digital sharecropping” trope: isn’t the consolidator performing a useful function? Can we have consolidation consistent with authorial autonomy?

Elizabeth Rowe – Contributory Negligence, Technology, and Trade Secrets

Trade secret cases these days usually involve some sort of digital misappropriation—misuse of emails or other computer tech.

Courts second-guess whether putative owners did enough to keep the alleged trade secret secret: did the putative owner make reasonable efforts to maintain secrecy? Query: should the greater risks to trade secrets in a digital world change the way courts evaluate reasonableness? In other words, should reasonableness be pegged to a “should have known” standard? Now that anyone can walk off with 900 pages of documents on a USB drive, we need to reexamine what security measures are reasonable. This indirectly places a higher duty of care on owners, because the risks of misuse of tech are foreseeable. (I see an analogy here to the TM cases that say likelihood of confusion is simply easier to find in the Internet age because everyone now uses the same marketing channel: online.)

Uniform Trade Secrets Act: the information must be the subject of efforts that are reasonable under the circumstances to maintain its secrecy. Restatement of Torts: intent to protect a trade secret is insufficient; actual effort is necessary. In any case, the inquiry is fact-intensive.

It’s currently hard to predict outcomes. Some courts say password protection is enough; others say it isn’t, nor is firewall protection or other technical measures. Courts need to pay greater attention to technical measures, rather than just traditional facilities-based measures. It is foreseeable, now, that people will use technical means to extract trade secrets. So claiming a trade secret should require risk analysis and steps taken up front to address the risks, rather than a claim after the fact.

Relevant factors: nature of the business/industry. Size matters. Nature of the secrets: it can’t be one-size-fits-all. If it’s source code, maybe only developers should have access; customer lists have to be handled differently.

Justin Hughes: Given tech changes, won’t precedent about what tech works be outdated within a few years?

A: This isn’t about a checklist of passwords, firewalls, etc. Rather, she is proposing an approach: start with risk analysis, then do something to address the risks worth addressing. Facilities-based measures, tech measures, human factors—all must be considered.

Q: But the implication seems to be that you need a case-by-case analysis, depending on the circumstances. One relevant consideration would have to be the value of the trade secret.

A: Yes. Value is tied to reasonable efforts. Some courts say, rightly or wrongly, “if it were that valuable to you, you would have taken steps to protect it.” Her approach is already consistent with trade secret doctrine. But businesses aren’t doing enough and courts are letting them get away with it, jumping straight to misappropriation.

David Fagundes & Jonathan Masur – Costly Screens, Value Asymmetries, and the Creation of Intellectual Property

Patents are expensive to get: $25,000 and a two-year wait for a typical patent. And yet there are still plenty of bad patents. What to do? Increase fees and spend more money improving the quality of examination? Or decrease fees and allow easier registration, because we’re not getting enough bang for our buck?

We think of patent as a costly screen: it weeds out potential patent rights that holders don’t believe are worth more than $25,000—we’re talking orders of magnitude, not real precision here. (I’d think you’d have to factor in optimism bias as well, and other factors that may lead people to push for patents—Pam Samuelson on patents as a signal to investors.)

Private value: what the right is worth to the holder; social value: what the underlying invention is worth to the world. Second cut: high value v. low value; the line is determined by the cost of getting the patent. So you can have a 2x2 matrix of values. The goal: encourage high social value inventions/works and deter low social value ones. The problem is that the cost of the screen works on the private value, not on the social value.

So, the canonical products of a patent system—pharmaceutical patents, etc.—are high private/high social value. Blocking patents and valid-but-non-novel patents are high private (you can get big settlements from them)/low social (they just threaten other people) value. Low social value/low private value patents: the canonical patent thicket; nuisance patents. If patents were easier to obtain, there’d be more of these; and the thesis is that there are essentially no high social value/low private value patents that we’d gain in return, so the screen is a good idea.

Copyrights, by contrast, often arise costlessly and inadvertently: they’re easy to create. They have the potential to restrict future production, though.

There are a lot of high private value/high social value copyrights: reasonably commercially successful works of authorship, including Harry Potter. Win/win: we enjoy a work of authorship and it makes money for the author. Are there high private value/low social value copyrights? We think there are essentially none. Because copyright is relatively thinner and weaker than patent, a work generating only low social value cannot allow owners to extract high private value. (Ask Google about the judgment it just lost in Argentina for allowing people to find a model’s picture.)

Low private/low social value works: insignificant or inadvertently protected works, like doodles on cocktail napkins. Low private/high social value works: lots of these, they posit, exist—thinly copyrighted works like directories, influential commercial failures that are generative of other works.

Meaning: this is a reason that costly screens are as bad an idea in the copyright setting as they are good in the patent setting. Counterfactual: if copyright looked like patent, costing $25,000 for an examination. You’d screen out a lot of works that copyright is designed to create—lots of spillover-creating works of low private/high social value.

One difference between the grids: the case is easier in patent, because there were no members of the low private/high social value quadrant. Here you might ask about the benefits of getting rid of the junk in the low private/low social value quadrant. But we think those works are mostly insignificant—doodles, shopping lists (blog posts like these?). Members of this set could be problematic, but we’re not concerned, because of copyright’s thin nature—there are lots of ways to engineer around possibly blocking members of this quadrant: fair use, idea/expression. (This response strikes me as having trouble with a key issue of copyright today, dealing with corpuses—as Molly Van Houweling’s presentation indicates.) We tolerate these because we want things like news photos, copyright treatises, and other things that don’t generate lots of monetary return but do have a lot of public benefit.

What if copyright cost only $10,000, like a trademark? The losses would still be too high. What if it were trivial? E.g., make registration a prerequisite for copyright to vest. But this would still have an impact—a risk not worth taking.

Takeaway: pushes back against certain substantive proposals for copyright and patent. Underappreciated advantages of costly screens for patent; underappreciated advantages of cheapness for copyright. Might also help point to a unified theory of IP process (which just happens to exclude TM, that red-headed stepchild).

Lemley: Leaving the bottom-left box empty in patents presupposes a perfectly functioning capital market for patents. A small inventor needs to raise money; can s/he do that? You can see the issue by pushing on the question of how high the screen should be: with a $100,000 fee, how sure are you that there’s nothing in that box? Second, distinguish between low social value/irrelevance and negative social value. If it’s just that they were irrelevant, we’d be indifferent. There has to be harm for us to care.

Masur: True, we believe that there is a strongly effective and well-functioning capital market for patents. Second, true that framing matters: we do mean that there are negative-value patents, and few negative-value copyrights.

Comment: The problem I have with the argument is that it assumes that low private value, high social value copyrighted works would not be created without copyright as an incentive. Both history and logic counsel against this conclusion.

There’s some conceptual confusion over what’s getting screened out by barriers to exclusive rights: the invention/work or the exclusive right. Compare the paper draft at p. 31, “screenlessness generates social welfare by assuring the creation of works with low private value but high social value; but what if that welfare is overborne by the social costs of permitting the creation of countless low private value, low social value copyrights” (emphasis added)—clearly that’s about the rights and not the works—with p. 33, “our thesis is about the incentives of authors at the moment of creation,” which is about the works and not the rights. See also p. 35: “The costlier the screen, the more likely it is that authors will decline to create works whose low private value they deem too low. … In copyright, … erecting costly screens raises serious concerns about precluding the creation of works ….” (emphasis added). The part that hasn’t been proved, and indeed that is counterintuitive given the account of how low private value works get created—scribbles on cocktail napkins, emails, and the like—is that copyright has any significant incentive role to play in the creation of such works.

Counterthesis: costlier screens like notice and renewal would get us the valuable works without the deadweight loss of the rights.

Fagundes: It’s always true that people will create for nonmonetary reasons. Our paper speaks only to individuals who create works for profit, which isn’t the universe of creators.

Sprigman: We’re talking about $35. That’s pretty low. You make relatively weak claims about social costs once we’re down to $35. It’s difficult to predict private value ex ante—this may make the loss of works smaller than you’d expect from a fairly low screen; people are making a bet. Second, when we get down to works whose private value is low enough to make the $35 not worth it, they’re created as gifts anyway. We’re down to a category where monetary incentive reward expectations break down. Final question: orphan works as a costly screen? There’s a whole class of works whose creation requires finding lots of unfindable owners; that operates as a costly screen.

Fagundes: The orphan works problem is orthogonal—we’re interested in why people choose to create in the first place. A work may migrate from high private value to low private value over time, making it an orphan. We think that even at the low level it’s still important.

Masur: We agree that $35 doesn’t change the mix of works. But we’re not weeding out anything we’re desperate to weed out—we’d need a justification to change the screen.

Q: Doesn’t registration give you the best of both worlds? Can take a wait and see attitude.

Fagundes: We agree. Registration now is fine; changing the law to increase formalities would be problematic.

Masur: We think the current copyright screen is zero because copyright vests automatically on creation.
