Intellectual Property – Interpreting the Scope of IP Rights
Moderator: Zahr Said, University of Washington School of Law
Margaret-Jane Radin, The University of Michigan Law School:
Patent scope. The problem of describing innovation (a thing in the world) in words. A philosophical problem with a long lineage; judges often don't know it's a problem. Related: the problem of big-picture economic efficiency. If that's what we want, what are the patent rationales? Incentivizing ex ante; ex post coordination of future innovation; signaling to the market. Recent discussions about notice: notice to the public & competitors is important, but patent claims are in words, and thus the tempting analogy b/t notice in words and fences is flawed. Longstanding puzzle; it can't be solved with analogies or advice to be clearer in claim drafting.
The cost/benefit balance of large rights may change over time, and that flexibility may be a good idea. There is a dilemma b/t calibrating rights properly and trying to make rights clear, and both are parts of the efficiency calculation. Some judges gravitate intuitively to one pole or the other. Even with philosophy, the dilemma would still be there.
Phillips v. AWH: Fed. Cir. en banc. Interpretation focused on the entire text: claim, specification, and prosecution history. The court arrived at a unified guideline, but interpretive guidelines are only useful for notice if they generate more predictable outcomes, and this one didn't. Judges don't all get the same result—the majority agreed on the interpretive standard, but disagreed on the mandated result in this particular case.
Suppose something was indescribable at the time of the patent, but not at the time of the litigation—is it covered by the patent? It can be. But who is doing the describing? Usually the attorney, not the inventor. That's an issue the SCt doesn't go near; we don't want anyone to think that drafters are co-inventors. Festo introduced the language of describability, but older cases often understood the idea of emergent inventions for which language develops later. The idea of the essential nature of the invention persists; a survival of central claiming, which is supposedly gone.
Mark A. Lemley, Stanford Law School: co-authored paper with Mark McKenna. We generally divide IP into validity doctrines, infringement doctrines, and defenses. Result: we apply different rules at different times, sometimes before different actors: judge, jury, PTO. The fact that we've divided IP into different pieces creates the “nose of wax” problem: you say at T1 that a patent is really broad when it benefits you to do so, and at T2 that it is really narrow when that benefits you. This produces bad results because courts don't at any single point take account of the scope of an IP right. We ask “is this IP right invalid b/c it's too broad?” and “is the thing D is doing sufficiently similar to what P is doing?” but generally not, in an integrated proceeding, “is the thing D is doing that is similar to what P is doing something that can be controlled under this right?”
Part of this: (1) we generally allow fragmented infringement: we can point to some sub-piece of a product or book that's sufficiently similar. (2) We've also expanded subject matter to cover things that are supposed to be protectable only in part—utilitarian articles for ©; product configuration for TM. And (3) the jury has taken on an increasing role in resolving IP disputes. We often delegate legal decisions about the proper scope of the right to the jury in the guise of fact questions about similarity, and often with no guidance to the jury about that.
Example: design patents are now allowed to cover prior art and functional aspects. We used to look at the prior art, the design patent, and the accused infringement together, but now we've separated them. Courts have said functionality is an extremely narrow filter and have limited/narrowed the role of prior art. But then they allow the patents upheld under these standards to go to the jury under the ordinary observer standard. That's fine if the thing that makes it sufficiently similar is the thing we want to protect. But if the thing that makes it sufficiently similar is prior art or functional, that's bad. Most recent example: the Apple/Samsung design patent case, where the key similarities are also in the prior art, but the jury doesn't see them at the same time. The same problem occurs in trade dress cases—the Reynolds v. Handi-Foil case. The boxes are different in many respects, but the court found striking similarity: both boxes say heavy duty, nonstick, Made in USA, and list the square footage in the box. Those are similarities, but not similarities in what the law is supposed to protect.
© has similar problems, even though it at least purports to filter out unprotectable elements. Right now we filter them out for purposes of proving actual copying, but allow the jury to load them back in to determine whether the copying is so great as to be unlawful, even though that risks the jury relying on those unprotectable elements. “I need somebody to love” was a lyric whose reappearance was enough to go to the jury in the Bieber case; the same problem infects the jury in the Blurred Lines case, where the jury hears the songs even though it's supposed to focus on the musical work and ignore intentional but unprotectable stylistic similarities in performance.
IP owners play blackjack with scope: they want to get as much as possible, but if they claim too much the right may be invalidated. That's good for gaming, but not good for IP, where we really want to get scope right.
Markman has problems, but it's good b/c it makes the parties go into a room and come up with a scope—what the patent covers and what it doesn't—and that limits the nose of wax problem. We could do more explicit scope proceedings. That would make courts focus on what we should be thinking about when we think about IP: how much protection does the law intend to provide, and are you seeking more than that?
Eva E. Subotnik, St. John’s University School of Law:
Derivative works: what are they? For some time, appellate courts have avoided interpretations bearing on scope that ask for anything quantitative, as opposed to qualitative. A sea of discarded tests. Examine the roadkill to see the path we're on. (I have questions about this metaphor.) How much of the plagiarist's work the plagiarist didn't pirate: that framing would have suggested a brick-by-brick method.
Feist: downplays both sweat and the binary copied/not-copied distinction in favor of a more general approach. Feist decided validity but is still relevant; the QP was about scope, and the case also set the tone that clear-cut metrics were out and qualitative metrics were in.
Similar issues arise about what's required for a derivative work. Gracen: the 7th Cir. required a sufficiently gross difference b/t the derivative and underlying work to render the derivative work protectable, to avoid entangling subsequent artists in problems. “Gross” signals a quantitative method. But more recently, in Schrock, the 7th Circuit capitulated and said it would apply a unitary standard. Now we look for sufficient nontrivial expressive variation to make the derivative distinguishable from the underlying work in some meaningful way. That reaffirms qualitative analysis, and it puts a lot of pressure on infringement analysis to ensure the scope of rights in the underlying work isn't unduly curtailed.
Fair use: Cambridge U. Press v. Georgia: the DCt used a rule of thumb about the amount used: 10%, or one chapter where the book had more than ten chapters, was presumptively fair use. That could wrongly signal to authors that use of no more than 10% would always be protected—but that wasn't the 11th Circuit's problem. It held the rule improper even as a starting point, b/c a case-by-case/work-by-work approach was required under Campbell. Cariou v. Prince also downplayed the quantitative approach and the intention of the artist, and said transformativeness had to be judged by the reasonable observer. But the basis for remanding the remaining 5 works was incredibly murky and provided no guidance to the judge.
We're seeing some pushback in the direction of quantitative thinking about scope. Garcia v. Google en banc: a fleeting performance in a film, which would bear on the filmmakers' © scope. The en banc majority referred multiple times to the brevity of the performance, but also referred to the smallness of such claims in its policy analysis—a cast of thousands would become a © of thousands. Cariou also did refer to intent in terms of Prince's “drastically different approach,” and its remand on 5 works did seem to be influenced by how much he “took” and didn't change. The 7th Circuit's retreat from transformativeness in Kienitz may also represent a retreat from qualitative considerations.
This retreat is a good thing: more transparent. [Not sure the decided cases bear out this
transparency. E.g., the Georgia State
district court approach was much more predictable as a rule. The Garcia
approach is unpredictable if considered as a rule about amount rather than
as a rule about performers versus filmmakers.
Cariou is, as Subotnik rightly
notes, “murky” in its rationale for drawing a line between different prints (a)
at all and (b) where it does. And Kienitz, ugh.]
Amy M. Adler, New York University School of Law: Why
transformativeness has proved so disastrous in the realm of contemporary art,
even though hailed as a savior elsewhere. Requires a court to adjudicate new
meaning/purpose, which is a failed enterprise.
Three different ways to find meaning: intent; aesthetics; the “reasonable” viewer—each is deeply problematic in its assumptions about contemporary art, b/c those assumptions are rejected by contemporary art itself. Copying is now a basic tool of art.
Against intent: freeing art from the artist. Prince (for whom she consulted) was a perfect
case where an artist disclaimed any intent to transform the work. Intent rose in importance out of the Koons cases; Jeff Koons learned how to
testify in a way that courts liked about his intent to offer “new
insights.” Intent is bad, among other
things, because of the difficulty of describing images in words—familiar in
First Amendment, cultural theory. WJT
Mitchell: “Whatever images are, ideas are something else.” Richard Serra, Tilted Arc case—one reason he
lost was his inability to describe the meaning of the work in a digestible
way. Courts like words b/c of a familiar, old-fashioned, romantic idea of art. The SCt cited Jackson Pollock as the exemplary artist—we understand his work as an outpouring of the artist's soul in a moment of expression, but that's an old-fashioned way of thinking about the relationship b/t intent and meaning. Many artists reject it. Andy Warhol, asked about meaning: “Why don't you ask my assistant Gerry some questions? He did a lot of my paintings.” Embracing the artist's lack of control over his own meaning.
Technology may also play a co-authorship role. It's very hard to figure out intent given that authorship is quite multiple: documentary photography as an example. The famous Pulitzer Prize-winning image of the naked girl burned by napalm was carefully cropped from the image the photographer actually took (cutting off another photographer whose presence in the frame raised uncomfortable questions). The photographer didn't know this was a key photo; an unnamed editor picked it out of the roll of film and cropped it. He still doesn't know why this image was picked. Picking images is now a key skill in our digital culture. Shepard Fairey: say what you like, but he knew what image to steal.
Also has similar arguments against aesthetics and against
the reasonable observer in the broader paper.
Has begun to believe in Kienitz:
look strictly at the market, which would be more protective of contemporary
art. [The racial and gender politics of
this move trouble me, given who is more likely to get recognized as an artist
by the market.]
Kevin Emerson Collins, Washington University in St. Louis
School of Law: Patent scope should be both/and not either/or: scope is a
philosophical issue, and understanding that can allow us to attune patent scope
to the world of commerce.
Philosophy of language: what is the meaning of “meaning”? Distinction between denotational meaning/reference and “sense.” Meaning found within word-to-world relationships is denotational/referential meaning: look at the relationship between all the things referred to when we say “dog” and the word “dog.” Ideational meaning/sense: words gain meaning through another mechanism, the concept in the mind of anyone who understands the expression. Meaning is found in word-to-word relationships (dogs have four legs, a sense of smell, and are members of the class of mammals). You don't actually have to connect words to things in the world to get meaning. Identification requires a two-step process: determine the meaning of the descriptive language, then determine whether the thing in the world meets that description.
In everyday usage, these meanings are interdependent; we don't need to disentangle them. But that's not true in claim construction. In patent law we fix meaning on a particular date: as a PHOSITA would understand it on the date of filing (or of invention; there's confusion about which). This is an artifice that doesn't exist in everyday meaning.
So are we fixing the denotational or the ideational meaning? That affects the ability of later-arising tech to fit in. To fix denotational meaning, you'd identify the set of possible objects/actions that a PHOSITA could imagine on that date; if new tech later arose, it wouldn't be within the set, and including it would change the “meaning” of the time-fixed term. But if we choose ideational meaning, we stabilize the network of linguistic constructs. You don't have to fix the set of things to which the words refer—ideational meaning remains rigidly fixed even as the scope of things in the world expands.
Courts usually use ideational meaning, but they switch to denotational from time to time, when they believe that the after-developed tech shouldn't infringe. Should this be a policy lever for the courts? He thinks yes, at least if it's explicit.
Temporal paradox: enablement requires the full scope of the claim to be enabled by the specification as of the time of filing. But Merges argues that the meaning that determines infringement isn't fixed until the time of infringement. Fully enabled claims thus grow in literal scope over time, so claim meaning isn't fixed at the time of filing. Difficulties: this creates instability and a lack of ex ante notice, and it's contra the black letter law of claim construction. If we understand the ideational/denotational distinction, we can resolve this paradox more simply. Enablement requires disclosure commensurate w/ denotational meaning as of the time of filing, but we allow ideational meaning to control infringement. Claims remain fully enabled even as they grow b/c the denotational meaning didn't change over time.
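[Continuing my invented sketch from above, the resolution assigns the two doctrines different meanings to test:]

```python
# Continuing the hypothetical sketch: the paradox dissolves if the two
# doctrines test different kinds of meaning. (Definitions repeated so
# this snippet runs on its own.)

CLAIM_CRITERIA = {"self-propelled", "carries a person"}   # ideational
EXTENSION_AT_FILING = {"car", "truck", "motorcycle"}      # denotational
hoverboard = {"self-propelled", "carries a person", "wheelless"}

def fully_enabled(spec_teaches: set[str]) -> bool:
    """Enablement: measured against the denotational extension fixed
    at filing -- the spec must teach everything in that set."""
    return EXTENSION_AT_FILING <= spec_teaches

def infringes(properties: set[str]) -> bool:
    """Infringement: measured against the fixed ideational criteria,
    so literal scope can grow to cover later-arising tech."""
    return CLAIM_CRITERIA <= properties

# The claim stays fully enabled (the fixed extension never grows) even
# though the hoverboard, unknown at filing, now infringes.
print(fully_enabled({"car", "truck", "motorcycle", "bicycle"}))  # True
print(infringes(hoverboard))                                     # True
```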
Said: what’s the expert’s role in these various
approaches? If it’s a policy lever as
Collins says, who determines that?
Adler notes the dramatic difference between what an expert would think of contemporary art and what an ordinary viewer would think. Two images that are visually identical can have dramatically different meanings according to art experts, but the “reasonable” viewer might see it differently (reasonable compared to what?). How much do we want to defer to the standards of the art world? Issues of elitism—over- and underinclusiveness. People who aren't famous: how do we deal with them? But she prefers experts b/c the art world is pretty unreasonable.
Lemley: didn't recognize in either of Collins's descriptions the way he makes meaning—if he sees a new animal, he uses an expansive or relational view closer to denotational meaning: does this look enough like what I know to be a dog? Central claiming-ish: how far is this from the lodestar? Rather than: is it within the universe of things that are already known in some way to be dogs?
Collins: there is discussion about whether the set is determined by prototypes or by full descriptions. But it's unclear what we do with a prototype/archetype theory of meaning when we're trying to fix meaning. The only way to do that is to figure out what criteria we were using at the time to identify members of the category. (Which may be a complex probabilistic assessment where high conformance with some criteria can be enough even when other criteria are not satisfied, e.g., what is a “game”?)
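[Again my illustration, not Collins's: a prototype-style membership test as a weighted score, where no single criterion is necessary:]

```python
# Hypothetical sketch: category membership as a probabilistic/weighted
# assessment rather than necessary-and-sufficient criteria.

GAME_CRITERIA = {"has rules": 0.3, "competitive": 0.25,
                 "played for fun": 0.25, "has a winner": 0.2}

def is_game(properties: set[str], threshold: float = 0.6) -> bool:
    # High conformance on some criteria can outweigh missing others.
    score = sum(w for c, w in GAME_CRITERIA.items() if c in properties)
    return score >= threshold

print(is_game({"has rules", "competitive", "played for fun"}))  # True
print(is_game({"played for fun"}))                              # False
```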
Lemley: or give up on the idea of fixing meaning.
Lemley expresses concern that under Adler’s market standard,
the winners win and the losers lose—if I want to win, I’d better be recognized
in the art market before I get sued.
Adler: the art market is brand-driven. Lemley won’t sell b/c he’s Lemley, not
Prince. [I think that’s his point.]
Lemley: if filtration a la Altai were applied across the board instead of just to computer
programs, the world would be a much better place.
Fred Yen: Isn’t the decision to look at the market value of
a work an aesthetic decision? And in any
event, don’t market participants have to use aesthetic concepts to set a market
value on the work, so we’re just going back to their aesthetic theories? [I
agree.]
Adler: the intertwining of economics and aesthetics is a
definitional feature of the current art market.
That’s a very deep issue; probably right.