Internet Surveys and Dilution Surveys
Moderator: Paula Guibault, Coca-Cola
Gerald L. Ford, Ford Bubala & Associates, Huntington Beach, CA
1946-1960, 18 reported surveys in Lanham Act cases; 1961-1975, 86; 1976-1990, 442 (about 29/year); 1991-Jan. 2007, 775 (about 46/year). What fueled this dramatic growth? First, the 1960 publication of a handbook of recommended procedures for trials—predecessor of the Manual for Complex Litigation and the companion Reference Manual on Scientific Evidence. Second, passage of the FRE in 1975, particularly Rule 703. Initially, surveys were not welcomed—many were challenged on hearsay grounds. Then came a near-meteoric rise of surveys in the late 1970s/early 1980s, as deficiencies went to weight and not admissibility. Surveys became more complex and more expensive, and the judiciary became more sophisticated in evaluating them.
We’re back to challenging them under Daubert: 4 surveys were excluded in the 13½ years prior to Daubert, 25 in the 13½ years after—a rise from 1% to 4%. But expense has increased as surveys have become more sophisticated in response to Daubert questions. He looked at 75 recent surveys: 36 addressed likely confusion; the rest addressed genericness, secondary meaning, and fame/dilution.
Surveys that missed the mark: Straumann v. Lifecore Biomedical, 278 F. Supp. 2d 130 (Aug. 2003)—secondary meaning for a dental implant. The survey found the dental implant had secondary meaning, but couldn’t tie that meaning to the nonfunctional portions claimed. Good lesson: focus on causality is key in surveys.
Tokidoki v. Fortune Dynamic, 2008 U.S. Dist. LEXIS 65665: heart and crossbones design; the survey found confusion, but the court accorded it no weight. It was methodologically flawed because it showed one product after another and asked respondents if they came from the same company; it did not replicate market experience. The survey questions and procedures were wrong. A good primer on what not to do for a plaintiff conducting a survey when both marks are not very well known in the market.
Trafficschool v. Edriver Inc., 633 F. Supp. 2d 1063: Question: did DMV.org create confusion about affiliation with a state agency? Many respondents thought it was operated or endorsed by a DMV. Defendants did a survey too, but their survey question did not address sponsorship/affiliation. Result: website enjoined.
McNeil v. Merisant, 2004 U.S. Dist. LEXIS 27733: shows evolving nature of survey designs—survey for secondary meaning/likelihood of confusion of yellow color of Splenda packet. Test cell respondents were shown Splenda trade dress, yellow color with some graphic elements, and asked about source. 62% indicated they believed the sweetener came from Splenda. Control cell: shown a package that didn’t include the yellow or graphics—was actually the initial Splenda package. 4% of respondents indicated that was from Splenda.
Remax v. Trendsetter Realty, LLC, 655 F. Supp. 2d 679 (S.D. Tex. 2009): defendant’s sign looked like the red, white, and blue ReMAX sign. Half the respondents saw the real sign; half saw it with the red and blue elements redacted; defendants enjoined.
LG Electronics v. Whirlpool, 661 F. Supp. 2d 940 (Sept. 2009): good example of two-cell study with control cell shown ad for same product without the allegedly false/misleading claims. Judge allowed survey. Defendants have a survey on materiality; case is ongoing.
Hot issues: Daubert. Causality: courts want to know whether the element specifically claimed to create likely confusion is doing so—can the survey show a link? Same with secondary meaning. Single-cell studies can show likely confusion, but can’t tie it to the elements claimed. Today, almost all studies are two-cell studies. This can measure bias attributable to survey questions, market share, preconceptions, or other factors.
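To make the two-cell arithmetic concrete, here is a minimal sketch in Python (the helper name is mine, not from the panel) of the standard net calculation: subtract the control-cell rate from the test-cell rate, using the McNeil v. Merisant figures reported above.

```python
# Minimal sketch of the two-cell "net" calculation described above. The test
# cell sees the accused stimulus; the control cell sees the same stimulus with
# the allegedly confusing element removed. Subtracting the control rate strips
# out bias attributable to question wording, market share, preconceptions,
# and guessing.

def net_rate(test_pct: float, control_pct: float) -> float:
    """Net confusion/secondary-meaning rate, in percentage points."""
    return test_pct - control_pct

# McNeil v. Merisant figures reported above: 62% test cell, 4% control cell.
print(net_rate(62.0, 4.0))  # 58.0 points attributable to the yellow trade dress
```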
Eveready v. Promax: Evermax batteries; tested with a control of “Powermax” with same trade dress.
Callaway v. Dunlop: the “longest ball on tour” claim—the test cell had the claim on the box and the control had the claim redacted. The name was Maxfli, and the question was whether the name itself would produce the claim, but there was a 30% difference in perception between cells.
Joyagra tea (for sexual stimulation): control cell saw “Joy tea.”
Courts are going to want empirical evidence of why a survey is flawed before rejecting it—not just an expert’s ipse dixit criticism.
Bruce R. Ewing, Dorsey & Whitney, LLP, New York, NY
Surveys are a product of choices: who to ask, where, what to ask, what to show. 15 years ago, you could choose a mall intercept or sometimes a phone survey. Now, internet surveys show up with increasing frequency. Some courts have accepted them, others have rejected them. Courts find universe criticisms easy to understand and accept; even if the judge lets the survey in, the jury will understand those criticisms too. So always ask: am I accessing the relevant consumers? Don’t expect doctors from a mall intercept.
Internet access is not universal. College graduates, persons with incomes above $75,000, and persons 18-54 are high internet users (80-94% have used it in the past 30 days), while non-college grads, people with incomes below $50,000, and those 55+ are less likely (50-52% have used it in the last 30 days). So for private-label goods you might make different decisions than for high-end goods.
Is the survey panel representative of the universe? There are relatively few vendors out there (Harris and eRewards are the most popular). There are problems obtaining respondents in cases involving goods or services restricted geographically, by age, or by income. Knowledge Networks does try to match U.S. Census data with a probability sample, but that comes at a price in expense and time. If the incidence level (the rate at which people qualify for the survey) is low, they may not be able to get you enough panelists. Other issues: a geographically limited area—any internet panel will have problems with that.
Courts are very attentive to this: can you adequately replicate the marketplace/purchase experience? This is an issue with mall intercept surveys as well. But if a consumer would typically be able to pick up and examine the product, an internet survey will be trickier: it may be attacked for not adequately replicating the context. Insurance/banking and other surveys, where the stimulus in the pre-internet era might have been a brochure, might be particularly appropriate for an internet survey. And if it’s an internet-based product, you might be asked why you didn’t do an internet survey.
Data collection is generally less expensive than mall intercepts, particularly if you’re dealing with products that many people use in daily life—if the incidence level is high, you can probably save money with an internet survey. If speed is an issue, an internet survey is also faster: the fastest mall intercept study takes weeks; we were able to do an internet survey in 10 days. Another issue: controls—showing people a physical control may require endless rounds of creating boxes, looking at the boxes, and fixing the boxes. Miracle of Photoshop: online, you can get your control stimulus very quickly.
Downsides: when you’re dealing with a panel where people get something pegged to the number of surveys they take, you want to check the data about frequent survey takers. Both companies will preclude people who’ve taken a certain number of surveys within a certain period from being surveyed further. Ask your survey expert in advance what the incidence level should be. If they say 20% and your survey comes back at 70%, that’s a red flag: why are all these people you wouldn’t expect to qualify suddenly saying that they qualify? It may be OK, but you need to investigate.
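A minimal sketch of that incidence sanity check (the 15-point tolerance and function name are hypothetical illustrations, not from the panel):

```python
# Sketch of the incidence red-flag check described above: compare the
# qualification rate the panel vendor predicted with the rate the fielded
# survey actually produced. A large gap suggests respondents may be
# misrepresenting themselves in order to qualify. The tolerance is a
# hypothetical illustration, not a figure from the talk.

def incidence_red_flag(expected_pct: float, observed_pct: float,
                       tolerance_pct: float = 15.0) -> bool:
    """Return True if the observed qualification rate deviates suspiciously."""
    return abs(observed_pct - expected_pct) > tolerance_pct

# The 20%-expected / 70%-observed scenario described above:
print(incidence_red_flag(20.0, 70.0))  # True: investigate before relying on the data
```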
Determining who respondents are: you don’t have an interviewer there looking at people; you’re relying on self-reporting. So use some sort of pre- and post-validation questions.
Exposure to stimuli: do you leave the stimulus in front of the respondent or allow them to click back? The same question arises in mall intercept surveys. You’ll be criticized one way or the other, as giving a “reading test” or a “memory test.” If it’s a high-involvement product/service bought with a lot of thought, let people click back. If it’s low-involvement, like chewing gum, maybe not.
Eliciting meaningful responses: you don’t have an interviewer there to ask clarifying questions, so make sure you get a response of at least some minimum length. Include an algorithm that pops up a prompt (“can you explain more?”) if someone types a response below a certain number of characters. But don’t do this too often, or you’ll annoy survey takers, who respond with profane rants.
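A minimal sketch of that prompt logic in Python (the 10-character minimum and the two-prompt cap are hypothetical illustrations, not figures from the talk):

```python
# Sketch of the follow-up-prompt algorithm described above: if an open-ended
# answer falls below a minimum character count, prompt for more detail, but
# cap the number of re-prompts so as not to annoy the respondent. Both
# constants are hypothetical illustrations.

MIN_CHARS = 10    # hypothetical minimum answer length
MAX_PROMPTS = 2   # hypothetical cap on re-prompting

def needs_followup(answer: str, prompts_shown: int) -> bool:
    """Should the survey display a 'Can you explain more?' prompt?"""
    return len(answer.strip()) < MIN_CHARS and prompts_shown < MAX_PROMPTS

print(needs_followup("ok", 0))  # True: too short, prompt once
print(needs_followup("ok", 2))  # False: cap reached, stop prompting
```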
Necessity of testing by counsel: before the public takes the survey, go through it yourself. We’ve presented stimuli that looked fine but then showed lines or weird colors on screen.
Validation: advisable to include some sort of question making clear that the person you’re surveying is the intended one. Can also use an educational level check at the end.
Discovery: huge changes in expert discovery are coming Dec. 1, with big limits on what you can get, such as draft expert reports and certain communications with counsel. If the expert relies on something, you can get it, but not if they considered it and did not rely on it. So you should be able to get panel data if the survey relied on it. Interesting data: how long does a person take to get through the survey? Variations run from 45 seconds to 5 hours, the latter meaning the person walked away and came back later. Also: screen shots of what people saw.
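A minimal sketch of a completion-time screen on such panel data (the cutoffs are hypothetical; only the 45-second and 5-hour endpoints come from the talk):

```python
# Sketch of a completion-time data-quality screen: flag respondents who
# finished implausibly fast (likely clicking through without reading) or who
# paused long enough to have walked away and come back. The cutoffs are
# hypothetical illustrations, not figures from the talk.

def flag_duration(seconds: float, min_s: float = 120.0,
                  max_s: float = 3600.0) -> str:
    if seconds < min_s:
        return "too fast"      # may not have read the stimulus
    if seconds > max_s:
        return "interrupted"   # e.g., walked away and returned hours later
    return "ok"

for t in (45, 900, 5 * 3600):  # durations spanning the range reported above
    print(t, flag_duration(t))
```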
How have courts and the NAD addressed this issue? The same as other surveys; generally they have not viewed internet surveys as having problems rising to the level of exclusion. But you can’t show people an index card over the internet when people are going to see the packaging; you need to survey the accurate universe, and provide the panel information as part of the expert report; deficiencies can lead to surveys being discounted. If the purchasing experience in the market doesn’t lend itself to being approximated online, or the relevant universe is not accessible online, you’re going to need another way—interviewing trade participants or something else.
Manny D. Pokotilow, Caesar, Rivise, Bernstein, Cohen & Pokotilow, Ltd., Philadelphia, PA
Dilution surveys: looking for association between the marks based on similarity. The surveyor in the Wawa/Haha case asked, in a house-to-house survey within a 2-mile radius of the Haha store:
1. Have you ever seen or heard of this store?
If yes:
2. What do you think of when you see or hear the name of this store?
If any answer:
3. Do you associate this store with anything else?
29% of respondents, after the control was subtracted, named Wawa.
Ringling Bros. case: the survey is instructive even though the actual-harm requirement is gone. There first needs to be a likelihood of association. The survey: individuals were interviewed in 7 malls, one in Utah. Shoppers were presented with a card: “GREATEST ____ ON EARTH.” Questions: With whom or what do you associate the completed statement? Can you think of any other way to complete the statement? With whom or what do you associate that completed statement?
Utah: 25% completed the blank with only SHOW; all associated it with the circus. 24% completed it only with SNOW, and associated it only with Utah. 21% completed it with both, and associated only SHOW with the circus and only SNOW with Utah. Outside Utah, 41% completed it only with SHOW and associated it with the circus; no one put in SNOW. 0.5% completed it with both SHOW and SNOW, and associated them with the circus and Utah respectively.
You might think this was a perfect way to show harm: in Utah, 16% actually thought of Utah instead of “Greatest Show on Earth.” But the court found this inadequate to show the threshold mental association of the marks: respondents would have to associate SNOW with the circus before the survey would show an association. Pokotilow thought that there was harm.
Louis Vuitton v. Dooney & Bourke: the survey involved 5 bags: a D&B bag of a different color, the bag of the challenged color, and LV bags. The surveyor would put a hand on one D&B bag and ask whether knowing these bags are being sold makes it more likely you’d want to buy the LV bag, less likely, or does it not affect your desire to buy the bags? The respondent would then be asked what it was that would make it less likely they’d want to buy. The special master said that an effect on desire to buy doesn’t show association. The survey was kicked out under Daubert for not asking the relevant question: is there a mental association with the famous mark when people see the accused mark?
Jada Toys Inc. v. Mattel: Hot Wheels trademark. Hot Rigz: 28% thought that Mattel/Hot Wheels puts out or makes a toy vehicle with that name. Court found that was enough to show dilution.
Nike and Nikepal: Question: What, if anything, came to your mind when I first said the word “Nikepal”? 87% said Nike, an incredibly high percentage of association; Nike won.
Starbucks, 588 F.3d 97 (2d Cir. 2009): Charbucks. 35% said that Starbucks was the first thing that came to mind: enough to show dilution.
TTAB finally applied dilution in National Pork Board v. Supreme Lobster & Seafood Co., Opposition No. 91166701: THE OTHER RED MEAT. National Pork Board used a telephone survey and asked: Thinking about the slogan you just heard, do any other advertising slogans or phrases come to mind? If yes: What other advertising slogan or phrase comes to mind? 35% answered “the other white meat.” Applicant argued the survey was biased because it encouraged respondents to think of ad slogans/phrases. The TTAB found that wasn’t biasing, because that was exactly what the survey was looking for.
Ewing, in response to Q: tarnishment is more “I know it when I see it”—questions the need for a survey in a tarnishment case, and hasn’t seen any in a parody case.
Pokotilow: a good parody tells you there’s an association; the question then is whether it’s a fair use.
Q: when would you advise a client there was no need for a survey?
Ewing: counterfeiting; if you’re inundated with evidence of confusion; if you’re in a PI situation and don’t think you can get a good survey together—plenty of cases say survey isn’t required; also if you’re not certain what the use will be in the marketplace: the other side is ready to launch but you don’t have a sample.
Pokotilow: in dilution, you don’t need a survey when the exact mark is being used.
Q: has variance in circuits leveled off?
Ford: the 11th Circuit in the 1980s said that survey evidence was less important to it than to other circuits, but he hasn’t found that to continue.
Ewing: certain courts don’t see much TM litigation, so you have to put yourself in the position of a judge who doesn’t see surveys much, and make a more deliberate and careful presentation of things you might otherwise skip over.
Pokotilow: you expect sophistication on surveys in the SDNY, but not necessarily elsewhere. Even in the SDNY: Judge Rakoff had been 18 years on the bench, and our case was his first survey. The litigators made a bunch of assumptions, with the expected result.
Wawa judge was very skeptical of the survey, but accepted it as confirmation of his own conclusions.
Q: which survey is more likely to result in fake answers?
Ewing: Hard to say. You don’t have any interviewers there: reduces chance of interviewer fraud/error. But you don’t have any interviewers there to control what the interviewees are doing!
Q: Will FRE changes impact ability to get discovery about the internet panel?
Ewing: If the expert relied on the panel, then courts will allow discovery into how that survey was done. He’s had this come up twice. One panel company violently resisted disclosing information about panelists such as how long they took to complete surveys; the other company handed the information over with no fuss.