Friday, October 24, 2014

Empirical IP Research Conference: trademarks

Plenary Session: Measuring Consumer Confusion in Trademark Infringement

Facilitator: Barton Beebe (NYU)
Lanham Act: confusion is vaguely defined.  Used to include “purchasers” but Congress deleted that phrase.  43(a)’s language is even broader.  Flexible and slippery.

Panelists: Joel Steckel (NYU Stern School of Business)
Topic: Consumer Confusion Surveys Used in Litigation, Commenting on: Robert H. Thornburg, Trademark Surveys: Development of Computer-Based Survey Methods, 4 J. Marshall Rev. Intell. Prop. L. 91 (2005)

Wants to talk about dilution.  Thornburg paper: not a very good paper, but that’s because it does two things—a catalog of different survey types/principles, then talks about difficulty of getting internet surveys into evidence. Much of what he said in 2004 is obsolete in 2014, when internet surveys are by and large admitted with some exceptions.  Drilling down into one aspect: dilution surveys. If confusion is slippery, dilution makes that look like sandpaper. Harm the reputation/impair the distinctiveness of a mark: what does that mean?

It’s evident from the usual surveys that courts accept that courts don’t really know what they’re trying to measure. Exxon-style survey in Nike v. Nikepal: respondents were simply asked “what, if anything, came to your mind when I first said the word Nikepal”; 79% said Nike. Makes you wonder: what did the other 21% say? That is considered evidence; it’s evidence of association, but it doesn’t correspond to impaired distinctiveness or harmed reputation.

Associative network memory model of brands.  Knowledge is a network of bits of info: TM or associated logo, slogans, etc., and associations—anything that comes to mind when you think of a brand name. When the word Coke is presented to customers, they think of “tradition,” “nostalgia,” etc. and Pepsi is more excitement/use/taste.  Dilution offense occurs when a junior brand enters and has another set of brand associations.  Tarnishment if a new association is negative and somehow gets attached to the senior brand: if Victor’s Little Secret causes consumers to associate Victoria’s Secret with pornography.  Test it by seeing if test group associates VS with porn.  Blurring: distinctiveness would be impaired if link between Nike and sports was weakened; reaction time to associating Nike with sports would be a measure of blurring.
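To make the proposed reaction-time measure concrete, here’s a minimal sketch with invented latencies (assuming Python with scipy available); real survey designs would control for far more, but the logic is just a two-group comparison:

```python
# Sketch of a reaction-time blurring measure: does exposure to the junior
# mark slow confirmation of the senior mark's core association
# (e.g., Nike -> sports)?  All latencies below are invented.
from statistics import mean
from scipy.stats import ttest_ind

control_ms = [612, 598, 640, 587, 605, 621, 593, 630]  # not shown junior mark
exposed_ms = [671, 702, 655, 688, 710, 668, 695, 680]  # shown junior mark

t_stat, p_value = ttest_ind(exposed_ms, control_ms)
slowdown = mean(exposed_ms) - mean(control_ms)
print(f"Mean slowdown: {slowdown:.0f} ms (t={t_stat:.2f}, p={p_value:.4f})")
# On this theory, a reliably slower exposed group is evidence that the
# Nike/sports link has been weakened, i.e., blurring.
```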

Takeaways: dilution has been difficult to measure because it wasn’t defined well. He argues that his definitions are suitable, theoretically based definitions. Marketer’s perspective: a sensible definition would include the potentially dilutive power of mixing and adding brand associations. Measuring the degree to which brand associations are held and how easily they are recalled can provide measures consistent w/the language of the law.

Lisa Larrimore Ouellette (Stanford)
Topic: Cognitive/Psychological Approaches to Modeling When and How Consumers Get Confused, Commenting on: Thomas R. Lee et al., An Empirical and Consumer Psychology Analysis of Trademark Distinctiveness, 41 Ariz. St. L.J. 1033 (2009); Thomas R. Lee et al., Trademarks, Consumer Psychology, and the Sophisticated Consumer, 57 Emory L.J. 575 (2008)

There is a fair amount of disagreement about TM’s goals and therefore about what facts matter.  Take courts at their word: consumer mindset/confusion is what we care about.  Multifactor test for likely confusion.  Buyer sophistication: there’s a whole literature about consumer care in buying choices and courts aren’t paying attention to that, which we should bring in. No clear direction for likely confusion; another paper looks at likelihood of bridging the gap.

Sophisticated consumer paper: courts have ID’d many factors as relevant to this: low price of goods; purchase complexity; frequency of purchase; education, age, gender, and income of buyers; professional buyers or hobbyists. But little attention to the consumer psych literature. That literature discusses consumer motivation as well as consumer ability to exert effort to make distinctions; they need both. Motivation can depend on the realm and on the person (some people have greater needs for cognition than others). Longer time to make a decision without distraction can make a difference. Courts make generalizations about low price, but that’s not the relevant question. Low financial risk may be high physical or social risk, creating more motivation.

Consumers who saw Mercedes Benz computers were more likely to think that Cadillac would bridge the gap to computers too. Those test subjects with more experience buying computers, or more education, were more likely to be confused—which cuts against judicial doctrine.

Secondary meaning survey: created fictitious marks with no secondary meaning and mimicked typical trademark use on products—Chocolate Abundance, Party Hat, Fudge Covered Cookies, etc. for chocolate-covered cookies. If you believe the Abercrombie spectrum means something, you’d expect some linear progression. What they found was that consumers were unlikely to see generic terms as indicating source, but for the rest it was all the same. And even for the generic term, over ¼ of consumers saw it as indicating a brand name. Presentation matters: large font/ordinary TM placement makes TM perception more likely; small font/non-TM placement makes TM perception less likely.
[image: boxes of chocolate cookies with different words in a large oval on the front of the package]
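A minimal sketch of the tabulation behind that finding, with invented counts (the study’s actual stimuli, categories, and numbers differ):

```python
# Share of respondents who read each fictitious package term as a brand name,
# grouped by Abercrombie category.  Counts are hypothetical.
responses = {
    # term: (Abercrombie category, n_saw_brand, n_total)
    "Fudge Covered Cookies": ("generic",     55, 200),   # >1/4 even here
    "Chocolate Abundance":   ("descriptive", 120, 200),
    "Party Hat":             ("arbitrary",   118, 200),
}

for term, (category, brand, total) in responses.items():
    print(f"{term:22s} ({category:11s}): {100 * brand / total:.0f}% saw a brand")
# If Abercrombie tracked consumer perception, the percentages should rise
# from generic through arbitrary; the study instead found a jump after
# generic and a flat line thereafter.
```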

Abercrombie may serve other interests, like protecting competitors—there can still be reasons to encourage parties to select more arbitrary/fanciful terms over descriptive ones. Still, it’s not good for telling us consumer mindset, and we need other measures of distinctiveness and strength. Inherent distinctiveness is a very weak proxy for what consumers are looking at.

Could test other factors like similarity of marks and proximity of goods in similar ways.  Courts’ general assumptions may be unreliable.

Mark McKenna (Notre Dame)
Topic: Qualitative Studies of Consumers Becoming Confused During Shopping, Commenting on: G. Miaoulis & N. D’Amato, Consumer Confusion and Trademark Infringement, J. Marketing 48-55 (April 1978); Vincent-Wayne Mitchell & Vassilios Papavassiliou, Exploring Consumer Confusion in the Watch Market, 15 Marketing Intelligence & Planning 164-172 (1997)

There aren’t very many good studies about confusion. One reason: marketing folks aren’t interested in confusion as defined in TM law. They’re intensely interested in how marketplace practices affect a brand, but there’s a very important distinction between brands and marks; brands are much broader concepts including associated meaning, which can be affected by a much wider range of practices that might or might not involve TMs/confusion. Marketers are much more focused on the question of harm. That might arise even in cases where consumers know accurately that a new product comes from the existing source (which might be thought to simulate 100% confusion)—is there any harm to the brand under those circumstances? TM law accepts claims of harm much more readily than empirical evidence justifies. Studies are also focused on harm to the brand, not harm to consumers except maybe incidentally. So you get statements like those in the M&D’A study—confusion in many circumstances won’t harm consumers because consumers don’t care. Thus these studies press normatively to include in the definition of confusion a broader range of effects than TM academics would; they’re trying to shape legal doctrine to test the things they care about.

First study: attempts to measure extent to which consumers exposed to new product that shares features—here, packaging of mints, features of the brand but not the TM—will generalize characteristics from the known product to the new product.  Authors count stimulus generalization as confusion! They want this to count even if normal purchasing conditions make other characteristics distinguish the products, which they call “intellectually discriminating” between the products.  “Confusion” despite the absence of source confusion on the theory that this generalization harms the brand by making it less “differentiated.” And indeed they so find—people think the new product will be refreshing and minty or will taste good. That, they think, is terrible!  Consumers porting info from Tic-Tac to other mints makes Tic-Tac no longer as distinctive.
[image: Tic Tacs, Dynamints, and Mighty Mints packaging]

Conventional TM perspective: completely irrelevant to doctrinal convention. TM is concerned with one kind of generalization: source information—who is responsible b/c of the presence of the mark. Non-source info shouldn’t matter, and porting info for reasons unrelated to the protectable mark should also be irrelevant. Normatively this is correct; the ability to generalize is often unqualifiedly good for consumers and for competition even if bad for a particular brand owner. Effective communication of “my product is minty and tastes good too” is called competition. TM’s job is not to prevent people from selling products with similar desirable characteristics. That’s why we approve comparative advertising. How do we describe a new restaurant? Almost always in relation to something you already know.

Modern TM law becomes more intelligible if you read it through the lens of these studies. Doctrines that most befuddle academics are b/c courts squeeze into TM the broader concerns in these studies—dilution, initial interest confusion, post sale confusion.

Mitchell & Papavassiliou study: followed customers around a watch store and asked them what aspects made the experience more challenging and what they did to overcome those challenges. Consumers say: fragmented nature of the market; newness of tech in watches; hidden nature of the watch movement; role of fashion; too many brands/shops; low frequency of purchase; purchase of watches as gifts. These are contextual factors that interfere with shopping decisions and raise search costs, but the word for these effects isn’t confusion. But maybe that means confusion isn’t well defined.

Very few of these interferences are things TM has anything to say about. Maybe that’s a good reason to avoid thinking of TM as a means to combat search costs generally, and instead to see it as targeting a particular sort of interference.

Info overload/overchoice. Consumers can struggle to choose when there’s too much info. The study tries to figure out consumer strategies—clarifying goals, narrowing choice, seeking additional info. This 1997 study could usefully be updated for new contexts like the internet. More interesting: what relationship TM has to the problem of overchoice. The paper simply asserts that counterfeiting is a source of confusion and law can fight that, but the study is from a store apparently lacking in counterfeits. The paper says subbranding is a contributor to confusion, undertaken by brand owners themselves as a way of differentiating products: an abundance of functionally similar products differentiated on a fashion platform. We think of TM as reducing search costs by making search less time-consuming, but TM also affects differentiation in the market, and has a lot to say about how close products may be and how they must be differentiated—product configuration like the color of a band.

To what extent is TM contributing to and reducing search costs, and how do those net out? If TM incentivizes lots of dimensional differentiation, that can increase search costs.

The studies differ but share a common pressure to define confusion more broadly. Search costs rhetoric has made it easier to redefine confusion in this way. As the Mitchell study illuminates, there are lots of other kinds of search costs, and it’s too easy to call those confusion. Academics: if you accept that TM doctrine has responded to these papers’ concerns even if the doctrine isn’t well suited to them, then we have to do more than point out the irrelevance of these types of confusion. There’s a vacuum about the types of confusion that ought to be relevant; the studies focus on the irrelevant question. Where empirical evidence is, doctrine moves. To resist, we need to do some good empirical work about when consumers are confused in the way we want to mean it.

RT: I’ve been asked to speak about the big data approach to consumer confusion. One promise, likely illusory, of big data is that we need not care any more about causality; we can merely find correlations, however unexpected, and react to them. But trademark infringement, and even dilution, requires a causal narrative as currently understood: this use created this mental state in the consumers who saw it. As such, big data may not be our savior. Instead, large datasets may be able to change the theory of what trademark infringement (and even dilution) is, the same way that previous advances in marketing and consumer psychology led some to reconceptualize what trademark infringement and dilution are, but that will be a normative choice even more than an empirical one.

We’ve seen one American case really take a stab at using “big data,” in the form of Google ad clickthrough rates.  That’s 1-800 Contacts, Inc. v. Lens.com, Inc., decided by the 10th Circuit in 2013.  The court says that the theory of IIC is that consumers seeking 1-800-Contacts (which we know, the court says, because they searched for the term) clicked on a Lens.com ad while believing it was a 1-800 site and, though no longer confused when they arrived at a Lens.com site, were nonetheless diverted.  We have no idea how many were confused when they clicked and how many were not confused but rather were seeking a possible alternative to 1-800, but the court says that we do know the upper bound of the former number: the total number of clickthroughs, which was a tiny fraction of the impressions.  For ads without the 1-800 Contacts trademark in the ad copy, Lens.com got a 1.5% clickthrough rate.  This was too tiny to count as likely confusion even if all clickthroughs were the result of IIC, so Lens.com couldn’t be liable for infringement based on those ads.  But ad clickthrough rates are always very low, so even a straight-up counterfeiter as in Rosetta Stone v. Google would seem to escape liability for infringement if you measure confusion in this manner.

This brings me to an argument made in a 2000 article by Alex Simonson, Survey Design in False Advertising Cases: he argues that we need to pay attention to attention versus comprehension. We might have a survey where a small percentage of people notice a particular product feature, say a picture of a black rooster and the label Gallo Nero on the neck seal of a bottle of wine, but those who do almost all make a connection to E&J Gallo wines. Or our survey might show that many people notice the neck seal, but only a few make the connection to E&J Gallo. Simonson argues that those are different results even if they produce the same net number of so-called confused consumers: for legal purposes, we ought to be concerned more about comprehension than attention. Attention, after all, varies in the real world, and so artificial attempts to measure it may not be very good—but if we look at the percentage of people, within the set who noticed a feature or symbol, who were confused by that feature or symbol, we have a result that may be more generalizable as well as more important.
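Simonson’s distinction reduces to simple arithmetic: net confusion is attention times comprehension, so identical net rates can hide very different facts. A sketch with hypothetical percentages:

```python
# Net "confusion" = P(notice) * P(confused | notice).
def net_confusion(p_notice: float, p_confused_given_notice: float) -> float:
    return p_notice * p_confused_given_notice

# Scenario A: few notice the Gallo Nero neck seal, but nearly all who do
# connect it to E&J Gallo.
a = net_confusion(p_notice=0.10, p_confused_given_notice=0.90)
# Scenario B: most notice the seal, but few make the connection.
b = net_confusion(p_notice=0.90, p_confused_given_notice=0.10)
print(f"A: {a:.0%} net confused   B: {b:.0%} net confused")  # both 9%
# On Simonson's view, A is the worrying case: comprehension generalizes,
# while attention varies with real-world context.
```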

I think this point applies beyond surveys: it’s a truism that most advertising is completely useless—and now we have eye tracking studies showing that most people literally do not see most ads, to which one English court has referred in its reasoning, and MRIs showing that a lot of the time our brains don’t even react to ads, not even bothering to process them.  In that sense, there is no such thing as an ad that is likely to confuse, because it’s not likely to reach us in the first place.  But the people who don’t notice the ad are arguably irrelevant to any confusion inquiry.  It’s only those who notice who might be confused. So, while I hate initial interest confusion and think Lens.com is right on the merits, I don’t think the analysis is right.  All we know is the number of people who clicked on the link—but we don’t know how big a percentage that is of the number who noticed the link.  It might be that almost everybody who looked at that link clicked on it, possibly because they were confused, and that scenario ought to concern us if it occurs, especially to the extent that there are no countervailing benefits from having the ad—and it’s hard to explain how the ad might benefit consumers who don’t notice it at all, though one might possibly construct an argument about chilling effects on truthful advertisers.
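A sketch of that denominator problem with hypothetical numbers (only the 1.5% clickthrough rate comes from the case):

```python
# The same click count implies very different confusion rates among those
# who actually noticed the ad, depending on an unobserved notice rate.
impressions = 100_000
clicks = 1_500                    # a 1.5% clickthrough rate, as in Lens.com

for notice_rate in (0.50, 0.10, 0.02):
    noticed = impressions * notice_rate
    print(f"notice rate {notice_rate:.0%}: clicks are "
          f"{clicks / noticed:.0%} of those who noticed")
# If only 2% of searchers ever noticed the ad, a 1.5% CTR means 75% of
# noticers clicked, a very different picture than the raw CTR suggests.
```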

Also, we need a concept of relevant confusion, or materiality.  Something that people mostly don’t pay attention to because they don’t care about it shouldn’t be deemed confusing.  The broader takeaway is that we still need a theory to explain what we should care about and why; big data do not remove the need for big ideas.

That leads me into the papers on which the organizers specifically asked me to comment: Stefan Bechtold & Catherine Tucker, Trademarks, Triggers and Online Search, J. Empirical Leg. Stud. (forthcoming), and Lisa Larrimore Ouellette’s The Google Shortcut to Trademark Law, 102 Calif. L. Rev. 351 (2014).

Bechtold and Tucker used a large dataset to explore the effects of Google’s policy changes allowing more competitor keyword purchases of trademarks in Germany and France. Basically, they found little net change in the number of consumers who ended up on the trademark owner’s website, but a change in composition. They divided searches into navigational searches, where the consumer is searching for the keyword because she is directly interested in using the search engine as a shortcut to find a specific webpage such as the trademark owner’s website, and non-navigational searches, where the consumer is doing something else. They estimated which were which by looking at how long the search was—so “iPhone” would be navigational but “how to restart my iPhone” would be non-navigational—and some other contextual factors.
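For concreteness, a toy version of that split (the authors’ actual classifier used query length plus other contextual factors; this single-token heuristic is only a stand-in):

```python
# Toy navigational/non-navigational classifier: a bare brand term suggests
# the searcher wants the brand's own site.
def is_navigational(query: str, brands: set[str]) -> bool:
    tokens = query.lower().split()
    return len(tokens) == 1 and tokens[0] in brands

brands = {"iphone", "telmex", "nike"}
for q in ("iphone", "how to restart my iphone", "telmex"):
    label = "navigational" if is_navigational(q, brands) else "non-navigational"
    print(f"{q!r}: {label}")
```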

After Google’s liberalization, navigational searches became less likely to lead to the trademark owner’s website, while non-navigational searches were more likely to do so. Percentagewise, they classified 20% of searches as purely navigational, while 80% were non-navigational. The policy change was associated with a 9.2% decrease in consumers visiting the trademark owners’ websites when they used a search phrase that exactly matched the trademark. But consumers who searched using the trademark alongside other words were 14.7% more likely to reach the trademark owners’ websites.

What can we glean from this?  The authors rightly recognize the limits of their results.  While they suggest that a navigational searcher’s search process might be “impeded” by unauthorized use of a mark, “as her attention is drawn to many third-party websites in which the searcher is not interested,” we don’t know whether she perceives any impediment.  A searcher committed to finding the trademark owner’s own site can usually determine without clicking which site is which, especially with the prominent brands the authors tested.  David Franklyn and David Hyman have shown that consumers are often confused about whether a search result is organic or paid, but rarely confused about the underlying source of those ads. 

What the authors can test—and thus what their data might push trademark theory to care about—is whether trademark owners suffer any loss of consumer visits from the policy change. They conclude that their findings provide an upper bound on the potential negative effects of the policy change on trademark owners: owners suffer negative consequences only within the subclass of navigational users, only 20% of searches. In European law, arguably these negative effects have something to do with the “investment function” and the “advertisement function” of trademarks, though I have to admit I don’t really know what those are, other than ways of stating ownership claims regardless of any effect on consumers. Even if the consumer began by wanting to visit the trademark owner’s site, we can’t say without knowing more that she’s worse off under Google’s liberalized policy. Maybe the new choices made her rethink her initial desires. If the ads weren’t confusing, we need some other reason to say that’s wrongful, and even the European approach doesn’t explain why diversion is a wrong to the consumer.

I’m intrigued by some subsidiary findings for the light they can throw on what we don’t know about consumers: First, “searches on Google appear to be consistently associated with fewer visits to the trademark owner’s site and more searches and activity before a visit to the trademark owner’s site even before the policy change” compared to Bing or Yahoo!  Second, “relative to France, searches originating in Germany are less likely to lead to a trademark owner’s site and also … [German] searchers are more likely to engage in multiple searches prior to a visit to a trademark owner’s site.”  National origin and Google brand loyalty appear to be independent effects on the extent to which users are precommitted to trademarks, which ought to shake our confidence that we know what matters to consumers, much less why.

More generally: the paper provides important information about consumer behavior, but not about what consumers are thinking—we can still only infer why they typed what they did.  Theory cannot be abandoned.  Moreover, the specific results cast into doubt the free riding model where it’s wrongful for a non-trademark owner to change consumer behavior regardless of what the consumer thinks—it seems that unauthorized uses may not in fact work harm in the aggregate.  However, in a big data world where we don’t have any interest in knowing why behavior changed, only that it did, a court might attempt to fine-tune the rule, and not allow keyword purchases on the naked mark or so called navigational search, in order to make things even better for trademark owners.  This is an example of a new legal theory that can emerge from new methods of measurement; it wasn’t suggested by previous doctrine.  Whether that is a good or a bad idea depends not on consumer behavior but on one’s theory of trademark rights.

Lisa Ouellette’s paper on using search engines to make trademark judgments primarily addresses a more basic question: how do we know whether a claimant has a mark at all, or a strong mark? She argues that if a mark is strong—either inherently distinctive or commercially strong—then many top search results for that mark will relate to the source it identifies, so you can use search engines to determine distinctiveness. Relatedly, she argues, the extent of results overlap between searches for two different marks can also be relevant for assessing the likelihood of confusion of those marks. For marks that don’t point uniquely to the claimant except within a particular set of goods and services, she suggests using searches that add more information, like Mission Burritos rather than Mission, which might be deemed non-navigational in the Bechtold/Tucker framework.
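A sketch of the paper’s two proposed measures over invented result lists (all domains below are hypothetical):

```python
# (1) Strength: how many top results for a mark relate to the claimant.
# (2) Overlap: shared results between two marks, a possible confusion proxy.
def strength(results: list[str], claimant_domains: set[str]) -> float:
    hits = sum(1 for url in results if any(d in url for d in claimant_domains))
    return hits / len(results)

def overlap(results_a: list[str], results_b: list[str]) -> float:
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)        # Jaccard similarity

top5 = ["kimono.example.com/microthin", "reviews.example.com/kimono",
        "kimono.example.com/shop", "forum.example.org/thin-condoms",
        "kimono.example.com/about"]
print(strength(top5, {"kimono.example.com"}))   # 0.6 on these assumptions
```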

Some thoughts:

(1) This theory may have the most potential implications for the registration system: registration occurs in the abstract, without a full marketplace inquiry, and at least for word marks does not concern itself with the current visual appearance of the mark. The paper’s argument would imply that we should take search engine results much more seriously in the registration context to determine what a word means.

(2) However, we still need a theory of trademark meaning. An example from the paper’s treatment of MICRO-THIN for condoms: all the results for “micro thin condoms” referred to the relevant plaintiff’s mark, but as the paper points out, this could mean strength for condoms, or it could just mean that the plaintiff is currently the only user of a not particularly distinctive term within that category. I think this is a more significant weakness than the paper acknowledges, because by entering “micro thin condoms” as the search term you have already taken the position of a consumer using the term to locate something, while what we should want to know is “would a consumer use this term to identify source?” The paper’s conclusion: “when consumers search for MICROTHIN condoms, they are not simply looking for condoms that are ‘extremely thin’—they are generally looking for Kimono MicroThin condoms.” But that “when” contains the assumption that drives the result. We don’t know from these search results if the term is distinctive in the sense of serving as an identifier of source—instead the paper’s inquiry is whether the term is relatively unique within its field, and that changes the basis for trademark protection into a rationale that is less consumer-focused and far more producer-oriented. The traditional question of source significance asks “if” consumers search for micro-thin condoms as an identifier of source, not what happens “when” they enter the search terms. Questions I would be much more interested in: “how often do consumers search for Kimono condoms or Kimono micro-thin condoms?” and “how often do consumers use micro-thin to modify other brand names in their searches?” The Bechtold/Tucker type of data are far more probative of that than the search outputs.

(3) IP law has a largely textual focus, which makes it much more confident handling words than other symbols. This is already reflected in the doctrine: the Abercrombie spectrum for word marks might be empirically flawed, but it seems much more manageable than the alternatives proposed to identify inherent distinctiveness in symbols or trade dress: the Seabrook test for trade dress, for example, is just another way of saying, four times, “is this distinctive?” Unfortunately, just as with current doctrine, image search is far behind word search; the Google shortcut might be most productive with respect to the least troublesome marks. And when there are both verbal and visual components, there are additional complications: there are word marks that are stronger when coupled with design features: a video “tube” site imitating YouTube’s red and white ovals.

(4) Concerns about using searches to assess likely confusion.  To take another example from the paper, TELMEX currently produces first page results all related to one company, and the paper suggests that therefore a new mark AUDITORIO TELMEX might be confusing.  But that seems to overweight the senior user’s interest: consider the old Gruner + Jahr v. Meredith case, PARENTS versus PARENT’S DIGEST.  When I type in “parents magazine” I get only results for the former on the first page, but when I type in “parent’s digest” I get no results for PARENTS magazine—one might conclude that PARENTS is a rather weak mark since it doesn’t survive even minimal alteration. 

The case law suggests that we should be interested in what consumers think when they see the junior use, and thus search on the junior mark is more probative than search on the senior mark. When I type in “Auditorio Telmex” I get no results relating to Telmex, the senior user. The results from the previous paper might also be useful here, given that consumers often refine their searches when they start with too abbreviated a term to get useful results. So, for example, if we found a certain number of people trying Telmex first and then Auditorio Telmex, that could be evidence not of confusion but of accurate understanding that these were two different entities and that the initial shorthand just wasn’t enough to identify the actual target. Note that we could call that dilution, or we could say that such results show that Auditorio Telmex wouldn’t dilute the naked Telmex mark at all, but we would need to be clear about our definition of dilution to make that call.

[did not get to say this para.] Relatedly: as price discrimination grows, different searchers may get very different results depending on their previous purchases or demographic profiles—which relates more closely to many of trademark’s concerns than current geographic personalization: a Northeastern study showed this is already happening with segregation based on devices on travel sites. This might support the paper’s theory insofar as Google starts taking better account of differences in consumers, so price points and consumer sophistication would actually divide search results. On the other hand, the paper’s argument about using Google to divine relatedness of goods or likelihood of expansion does not seem persuasive: the paper says “if a non-expanded mark is sufficiently strong that consumers might anticipate such expansion, searches with keywords for those fields would likely still have pages related to the mark,” but given search engines’ attempts to provide presently relevant results I can’t see why that would be true—at the very least we’d need a lot more information about who writes these pages saying “I can’t wait for the McDonald’s navratan korma.”

These details reinforce my concern that we should be very careful about when we change doctrine in response to what we think we can measure.  We might not even notice that we’re changing the doctrine (which might, for example, narrow the protection for visual marks if Google became a preferred source of evidence—I might be happy with that outcome even if dubious about the mechanism).  We have a historical example of this evolution with judicial treatment of survey evidence and the introduction of a requirement of control groups, which had the practical effect of changing the percentage of consumers exposed to the allegedly infringing use that courts recognized as sufficient to show likely confusion.  That’s probably the happiest story of doctrinal evolution, but the decision to shape the doctrine may be more normative than empirical, and that has to be kept in mind in all these discussions.
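The control-group adjustment itself is one line of arithmetic (hypothetical cell percentages):

```python
# Reported confusion = test cell minus control cell, filtering out noise
# and guessing.  Numbers invented.
test_confused, control_confused = 0.32, 0.12
net = test_confused - control_confused
print(f"Net confusion: {net:.0%}")   # 20%: the figure a court would weigh
```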

Beebe: two cultures—marketers on one side and lawyers on the other. Talking to marketers isn’t the same as talking to economists—who we talk to affects the law, just as talking to economists affects patent law. Marketers say: confusion as to source isn’t interesting—let’s figure out how to measure dilution. Goes to the slipperiness of models and the constant push of the law away from confusion as to source and towards confusion as to something else. Marketers: it’s not consumers’ search costs, but TM search costs: TM law is currently dedicated to minimizing the costs for TMs looking for consumers. Subjects are TMs/brands and objects are consumers; lawyers don’t yet admit that.

Sprigman: for Steckel: cognitive delay as dilution? When I get older, I get cognitively slower, have I diluted the mark?  What’s the harm?

Steckel: when people get older, all associations are delayed; that’s what a control group is for. There is research showing that even for the most important purchases—home, car, etc.—at any moment only 3-4 attributes are operative. People have limited info-processing capabilities, and as such you want to put your best pitch up front. New Coke debacle: New Coke tasted better, but people didn’t buy the product for its taste.

Sprigman: but there’s no falsity injected into associations through “dilution.” What you’ve posited is substantive weakening of meaning; one can see the effect of that, but not of new associations. The tarnishment story makes more sense, but it’s a huge First Amendment problem, since in every other context you’re allowed to say “Victoria’s Secret sucks,” even as a competitor.

Steckel: you don’t buy that measuring reaction times captures the notion of impairing distinctiveness?

Sprigman: need relevance in retail environment.

Steckel: by delaying reaction times, it bumps something from short-term memory to long-term memory, taking longer to evoke in a world of limited attention.

McKenna: this is the two cultures problem: the lawyer wants to know where the associations are, with source 1 or source 2. Steckel is asking what associations will arise—that’s a mismatch. Substantive meaning of the mark v. whether it will take longer to know whether I’m dealing with New Coke or Old Coke.

Sprigman: whose views count for the law?  Typically we don’t impinge freedom of speech without harm.

RT: I wrote a whole article about this. I don’t agree with the characterization that delays correspond to a shift from short-term memory to long-term memory; 160 milliseconds is not a long enough delay. This is an example of what happens to empirical work laundered by lawyers: it gets ramped up as more significant than it is in the translation; we regularly mistake statistical significance for practical significance.
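To illustrate the statistical-versus-practical gap, a simulation sketch (all numbers invented except the 160 ms figure; assumes numpy/scipy):

```python
# With enough respondents, a latency shift is always "significant"; the
# p-value says nothing about whether 160 ms reflects a memory-system shift.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 5_000
control = rng.normal(600, 300, n)        # ms; reaction times are noisy
exposed = rng.normal(760, 300, n)        # a 160 ms shift

t, p = ttest_ind(exposed, control)
d = (exposed.mean() - control.mean()) / np.sqrt(
    (exposed.var(ddof=1) + control.var(ddof=1)) / 2)   # Cohen's d
print(f"p = {p:.2e}, Cohen's d = {d:.2f}")
# p is minuscule; d is moderate; and neither settles the theoretical claim
# that the delay marks a shift from short-term to long-term memory.
```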

Bechtold: Use of big data and its limitations. One response: we need more data. In some areas, this is true—initial interest confusion; we tried to test this and could’ve tested it w/ the right data. In general, we are only slowly seeing usable data. Fully agree that this is not a substitute for a justified theory of TM. Studies can help us find new theories/new effects. Spillover effects from unauthorized uses can benefit the brand owner; no one has a clear theoretical view of how this works psychologically or what TM lawyers should do w/ it.

Jeremy Sheff: should ask what questions empirical methods should be used to answer and that depends on our theory of TM law. Consumer psychology: what are we trying to measure in the minds of consumers?  Marketing departments have been looking at these questions though not attuned to doctrine. Competition: if we think there are empirical questions, we could turn to competition/antitrust law.  Look at effects on entry, prices, output, elasticity of demand. These aren’t being investigated right now. That could help make a case for a particular theory or explore our theoretical presumptions.

Orly Lobel: useful to think about this panel v. patents panel.  Much has to do with whether we’re having the same conversations: does the theory map onto the work? Confusion seems like an instrumental, narrow goal—could it be consumer welfare, new entry, etc.?  Current work seems defeatist, focused on what we have now on the books.  This panel spoke to adjudicators: how to apply/interpret doctrinal mandate; previous panel was willing to talk about what the law should be.  Maybe we just don’t know how to aggregate the effects of TMs.  Or maybe we’ve seen more action on patent reform and thus had more demand for studies of bigger Qs.

McKenna: patent traditionally wanted to promote more output. TM traditionally just wanted to prohibit particular practices.  Yet TM is used as a substitute for or complement to patent law; can be measured in terms of same effects.  Antitrust people: don’t think TM has any effect on competition/any market power for brands. That’s bound up in history, but we need more study.

Chris Buccafusco: TM doesn’t start w/empirical assumptions about brains and decisions and then try to figure out harms; it starts w/assumptions about what bads are and tries to get evidence from social scientists.  Mental map of associations is starting point of marketers, but not TM.  Might be any number of conceptions of how consumers use brands; could be less cognitivist models.  Might produce different sets of harms.  His skepticism: TM owners want to find all the harms and use all possible theories of decisionmaking as long as they specify potentially compensable harms.

Gregory Mandel: move to the ought is useful; so is how the law is working on the ground, as the TM papers were focused on.

Scott Hemphill: clickstream data: you can see sometimes that people stayed only 5-7 seconds. That’s a signal they didn’t get what they wanted. Could also get that from the sequence of queries. Might be a way to detect brief confusion (though, RT notes, it also self-refutes the theory of harm to TM owners).
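A minimal sketch of that filter, with invented session data (the domains are hypothetical):

```python
# Flag visits whose dwell time suggests the searcher immediately realized
# the site wasn't what she wanted.
sessions = [("lens.example.com", 6), ("lens.example.com", 240),
            ("contacts.example.com", 185)]        # (site, seconds on site)
briefly_confused = [site for site, dwell in sessions if dwell < 8]
print(briefly_confused)   # short stays as a possible brief-confusion signal
```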

Irina Manta: not sure that association w/porn is the only way or even the most direct way to get at tarnishment. If you believe in the model, Victor’s Little Secret creates negative association, but people don’t remember why they have a negative association when they see VS again, it’s just a generalized dis-ease.  Quality perception reduced; that should be actionable every time there’s no First Amendment protection, according to TM owners.  (Of course, under Central Hudson, “every time there’s no First Amendment protection” means “every time there is a factually false statement about the TM owner in commercial speech,” which tarnishment is not.)

Steckel: hard to expose respondents to an experience that mimics what happens in the market; simply asking them about associations w/porn may not do it.

Michael Meurer: there have been some market value studies about value of TMs. Not sure whether there are studies on market effects of TM litigation, but there could be.

McKenna: there are European economists who try to study relations to innovation, but there are many problems. First, they just measure TM applications, which has correlation/causation problems.

Katherine Strandburg: The papers related to Abercrombie emphasized the importance of context in what consumers see. Google genericity case: the court distinguished verb use from TM use. People use google generically as a verb but not as a source indicator. Can we split the baby here, giving people more protection for the mark in a certain context, but less in other contexts where the use isn’t the same? Also, a tension in the Ouellette paper: if you do a search and the first page is all about a particular TM, she argues that shows distinctiveness. But wouldn’t that also suggest a lack of confusion? That seems perfectly sensible—a more distinctive mark might be less confusable.

Barton Beebe: TM says that the stronger the mark, the greater the scope of protection/the more likely confusion is. If we abandoned that principle, the edifice would crumble. Some European courts have suggested that strength decreases likely confusion, but not American courts. (RT: I would say that individual US courts have occasionally reinvented this theory in parody/First Amendment cases, but haven’t made it coherent.)

McKenna: not about empirical reality, b/c empirical evidence supports Strandburg.

Brett Frischmann: Interesting empirical questions would challenge assumptions/theory of TM.  Assumptions about stable/fixed preferences of consumers; examine way TMs enable producers to shape consumer preferences. What are the public harms from TM? Deadweight loss—how to measure in TM law? Chilling effects, speech values.
