Puffery & Parody
David H. Bernstein (moderator)
Richard J. Leighton, Keller and Heckman LLP
Materiality and puffing are opposites. In one sense, materiality is an element of falsity. This is judge-made law: judges add their own twists, making the Lanham Act a moving target compared to the more predictable NAD. Materiality explains why some literal falsity can be nonactionable (many puffs are literally false). Materiality is often shown by proof that the intended audience would be motivated to purchase the product or service. Consumers can also be motivated not to buy by a comparison, even if they don’t buy the advertised product. Keep negative materiality in mind.
An omission of material fact can also be deceptive. Example: KFC ad claiming that eating KFC could be “eating better” and saying that KFC has less fat than the BK Whopper. Was this a claim that eating KFC was healthier than eating a BK Whopper?
Recent trend in puffery: courts say something is an obviously false or unprovable assertion that the audience should not have relied on, even if the plaintiff has evidence that consumers actually believe and rely on it. Reasonable consumers wouldn’t rely, therefore your evidence is irrelevant. Courts just want to get rid of a complicated case/avoid the battle of the experts/avoid jury factfinding. Real question: is there extrinsic evidence that consumers/influencers think this claim is relevant to them?
If the advertiser decided to put the claim in the ad and the competitor was willing to sue over it, why did both parties think the claim was worth making/fighting?
Bad case: American Italian Pasta v. New World Pasta Co. (8th Cir. 2004): “America’s Favorite Pasta” was held to be puffery based on a 1943 dictionary definition of “favorite,” which the court found unprovable. P&G v. Kimberly-Clark (Huggies Natural Fit): the ad said that the competitor’s diaper fits a brick, not a baby. Huggies did tests to substantiate its better-fit claim but argued puffery after the fact. The judge agreed: tests done with parents don’t count because the only person who can tell whether it fits is a baby, and babies aren’t talking. Leighton suggested that parents were well aware when poor fit led to leaks.
Annie M. Ugurlayan, Senior Staff Attorney, National Advertising Division: Factors NAD considers in puffery determinations—are these general matters that can’t be proven or disproved? Are they about specific characteristics?
Copart Inc. salvage vehicle auctions: “the best way to sell low to mid-value trade-ins,” “the best place to find cars,” and “the most efficient bidding process possible.” Copart sold millions of vehicles each year. NAD determined that the best place/best way to sell claims were puffery because they were not linked to specific performance attributes. The most efficient bidding process claim was different: it was a broad superiority claim, but there was no evidence in the record that the process was more efficient. NAD recommended ending the comparison and disclosing the basis for the efficiency claim.
Kimberly-Clark (softness of tissue): “softness done right” and “new pattern for even more softness” were puffery.
Kohler Co. toilets (April 2009): “global leader in performance toilets” standing alone could be puffery, but not in context, because it was linked to specific attributes. The tagline was shown just above statements about flushing power, water conservation and noise reduction, which are objectively provable.
Vital Pharm. (Redline Princess): “world’s most effective energy drink” on the website’s front page. There were no performance claims on that page; thus, without more, it was likely puffery. But it would be problematic if linked to other elements like mood enhancement or fat loss.
Alcoa, Inc. (Reynolds Handi-Vac): claim was that leading freezer bags trap air so that freezer burn sets in the minute you close the bag, while the advertiser’s bags virtually eliminate freezer burn. This is not puffery.
Chemistry.com challenged by eHarmony: “get you out dating in the real world faster.” Not puffery: faster is a superiority claim, and getting you dating in the real world is the core of the service’s goals.
Takeaway: puffery is general; more specific claims are less likely to be puffery. NAD typically doesn’t use surveys for puffery but can find them useful. In the Gorilla Glue case, where “toughest glue on Earth” was on the package, a survey supported the challenger’s contention that the slogan wasn’t puffery. Respondents perceived a comparative strength claim. NAD noted inherent difficulties in designing a puffery survey.
Advertisers should be mindful of imagery that could communicate an attribute claim in non-puffing ways. Context can be key.
Rebecca L. Tushnet, Georgetown
Puffery and humor get to some fundamental truths about ads. We ask “why did you make the claim if you didn’t want consumers to believe it?” Often the answer is: because we wanted to be funny/likeable. But unpack that. We wanted the ad to be funny/likeable so that consumers would feel warm & fuzzy towards our brand which we think will cash out into purchase decisions even though that’s not really about product features. The law has agreed to pretend that consumers are rational, except when it doesn’t pretend that.
Puffery is a way to reconcile falsity doctrine with the intuition that things are a little more complicated than true/false. Puffery is the flip side of falsity by necessary implication, which operates by recognizing the contextual competence of the modern American consumer: when it is obvious that the ad makes claim X to get us to infer claim Y, we treat claim Y as if made. This is why there are routinely problems in new fields—new technologies or drugs—where consumers aren’t quite sure how seriously or metaphorically to take particular claims. I think this was an issue in the DirecTV case—the range of things that consumers might reasonably think could be true of a new type of TV is large.
Courts use puffery because they distrust juries and are confident that the way they see the world (as reasonable consumers) is the way the world is likely to be seen. Ziploc case, involving a portrayal of a Ziploc bag leaking while a Glad bag stayed closed: the court determined that the print ad was false because it misrepresented the rate of leakage. But a single static picture can’t misrepresent a rate; the court read all that in, even though Glad had substantiation that its bags leaked less.
This judicial confidence is tied into other developments (Iqbal and Twombly) that make explaining how a particular business works even more important early on in a false advertising case.
Separately, the puffery/materiality question highlights two different questions we might ask of consumers: what message do they perceive the ad is really sending, and what message do they believe? A consumer might receive a message that a product lasts all night long, as with Mylanta Night Time Strength, but not believe it because she doesn’t believe any ads. Should we care about reception or deception? I’d argue for reception, because we shouldn’t make the problem of distrust worse. We shouldn’t require consumers to distrust/discount even specific factual claims.
A related problem is that consumers exist on a spectrum. Some get the right/intended message from an ad, some get the wrong/misleading message, some get none. This is part of why humor is a focal point of many challenges. Example: “Where’s the Beef?” ad. Some consumers probably receive the true message of slight superiority in size for Wendy’s burgers. Some probably receive a false message of substantial superiority. Some think it’s just a joke. Question: can you communicate the truth without deceiving a substantial number? What are the alternatives to get the truth out? This isn’t asked often enough.
Penultimately, we are seeing courts distinguish between “misleading” and “misunderstood.” At first I found this distinction puzzling—it sounds like it’s about speaker intent versus audience reception. But considering intent to be key to liability contradicts decades of precedent (and is a bad idea). It would be better to see this as about materiality: I can misunderstand something irrelevant; misunderstanding is static. To be “led,” however, is to be induced to move—a misleading statement is one likely to make consumers change their positions.
Last point: we should keep in mind the interaction of puffery/parody/materiality with trademark law. Advertising law in the US favors comparative advertising: you can make specific factual claims about a competitor as long as they’re true. We should be equally careful to preserve the ability to make general emotional claims without liability for tarnishment, which some trademark owners have alleged (and even, sadly, won on in the Deere case).
Bernstein: some judges don’t like these cases; they think of the parties as kids fighting on the playground and use puffery to kick the dispute out. NAD takes it more seriously.
Q: The knowledge base of consumers is important, but so is potential sensitivity. The audience for Ziploc bags may be less sensitive than the audience for weight loss or dating.
Me: It’s about risk: a consumer who sees possible benefit for her baby may be more willing to believe/hope in a claim and pay extra for it. The law needs to guard against consumers’ lowered scrutiny.
Leighton: an intended audience inquiry can help with this.
Ugurlayan: we see cases where that has been a consideration, e.g., new mothers.
Bernstein: puffery as a matter of law is troubling because one way to determine materiality is to ask consumers. How should plaintiffs go about proving materiality?
Leighton: certain presumptions exist in the law. Courts frequently presume materiality from a direct, express comparative ad. Price claims also go to the essence of the marketing process; so do health claims. Intent to deceive can justify a presumption. He always tacks materiality questions onto his surveys, looking for evidence that a representative sample considered the claim material (would they pay more for a product with the feature v. one without it?). Discovery often yields internal admissions by the advertiser that it thought the claim would motivate sales, which is persuasive to a judge or jury. If 15-25% of the intended audience considers it material, that matters: make clear early on that materiality is a disputed fact issue, not a matter of law.
Internet Surveys
David H. Bernstein (moderator)
Sandra Edelman, Dorsey & Whitney LLP
Ask yourself: is this the right case for an internet survey?
Dr. Eugene P. Ericksen, Special Consultant, NERA Economic Consulting
Note that internet access is not universal. Usage is high among college graduates (94% used the internet in the past 30 days) v. 52% of adults who didn’t attend college; there are similar gaps between households with incomes above $75,000 and those below, and between people 18-54 (81%) and people 55 and over (50%).
Whatever the universe, how well does your sample represent it? Most internet panels are nonprobability samples. (We do those in mall surveys all the time, though.) It’s possible to do a probability sample on the internet by sampling people with phones, even providing people with computers if they didn’t have them. Comparisons of probability to nonprobability samples show clear differences in quality.
Often you want to compare test to control, and the research hasn’t been done on whether a nonprobability sample biases test v. control. But we do know that people in nonprobability samples tend to be more interested in the subject matter, and being more interested might skew the comparison between test and control.
Edelman: issues around approximating market conditions: the stimulus can’t be physically picked up and handled in a web survey. Will the method of online presentation be realistic? How long should the respondent be exposed to the stimulus? Should the respondent be able to view the stimulus while answering questions?
Ericksen: even testing a website can be an issue—people like to poke around websites; if you only have one page, you may have problems. We usually do ok with 5-10 pages.
Bernstein: an internet survey can be useful for informal discussions towards settlement: counsel can draft the questionnaire themselves and get results in a couple of days. It doesn’t replicate market conditions (people can’t pick up the bottle or look at the label), and if the case goes forward you may need a full mall intercept survey.
Edelman: if you have a low-incidence product, a mall intercept survey could be very difficult. You might be able to reach your niche market on the web.
Bernstein: we’ve done surveys of vets online that would be impossible in a mall.
Ericksen: most internet survey companies interview people they’ve recruited, so if you want to study recent car purchasers, they can line them right up, which really cuts down on incidence cost. Surveys of 400-600 typically cost less than $10,000 in data collection, a small fraction of the overall cost; the costs of designing a questionnaire, analyzing data, and writing an expert report are the same. He always used to say that his minimum amount of time for a survey, mall or telephone, was 3 weeks. Now you can collect your data in a week.
Bernstein: state of the art on controls: Ex-Lax was reformulated with senna and advertised a “new natural senna formula.” Were people deceived into thinking that the whole product, or its active ingredient, was natural? Schering-Plough sued for false advertising because much of the product was not natural. Its survey asked respondents what “natural” meant, but it had no control: 50% said the product was 100% natural and 50% said it was not. On cross-examination, the expert was asked: couldn’t you have covered up the word “natural,” asked the same set of questions, and found out the effects of preexisting beliefs/question bias?
“Lotrimin Ultra treats most athlete’s foot in just 1 week’s use. Lamisil Defense takes 4 long weeks. Why live with the itching and the burning?” Lamisil was concerned that the message was that the itching and burning would last 4 weeks, whereas in fact you just have to keep using the product for that long. With that last line removed as a control, 19% fewer people thought that symptoms would persist longer with Lamisil. NAD was impressed with this. Change only the part you think is causing the falsity.
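(An aside on the arithmetic: the 19% figure is a net difference between the test ad and the control with the last line removed. Here is a minimal sketch of how such a test/control gap is typically computed and checked against sampling noise; the cell counts and sample sizes below are hypothetical, and only the 19-point gap comes from the example above.)

```python
from math import sqrt, erf

# Hypothetical cell counts: the 19-point net gap matches the example
# above, but these sample sizes are invented for illustration.
test_yes, test_n = 106, 200   # saw full ad, took away "symptoms persist 4 weeks"
ctrl_yes, ctrl_n = 68, 200    # saw control (last line removed), same takeaway

p_test = test_yes / test_n    # 0.53
p_ctrl = ctrl_yes / ctrl_n    # 0.34
net = p_test - p_ctrl         # 0.19: deception attributable to the challenged line

# Two-proportion z-test: could a gap this size be sampling noise?
p_pool = (test_yes + ctrl_yes) / (test_n + ctrl_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
z = net / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail

print(f"net effect = {net:.0%}, z = {z:.2f}, p = {p_value:.4g}")
```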
Edelman: if the perfect control is an altered package that would look unrealistic in the real world, that might be good for an internet survey. It’s time-consuming and really expensive to create a real-looking control for a mall intercept.
Bernstein: hybrid approach: a phone survey where you ask the respondent to go to the internet while on the phone.
Ericksen: we know that shopping mall demographics are similar to US demographics, except for people over 65, who tend to go to shopping malls less. Lately we’ve taken computers to the shopping malls. He’s done a patent infringement survey that way, looking at whether a particular software feature affected consumer satisfaction with the product, and that worked just fine.
Edelman: screener issues: should the sampling plan target specific groups based on information the panel firm previously collected? The internet panel companies want to make their money: if they promised you 500 people from a particular geographic area, they want it done fast, so they have an incentive to let people take these surveys, and people want to take them. This leads to data quality issues. One example: their own data said 15% of the population used the product, but 60% of those who took the screener qualified. That seemed unlikely.
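(To make “seemed unlikely” concrete: if the panel were representative and respondents answered honestly, the qualification rate on the screener should look like a binomial draw around the true incidence. A quick sketch; the 15%/60% figures are Edelman’s, while the screener sample size is assumed.)

```python
from math import comb

p_true = 0.15                # product incidence per the party's own data
n = 500                      # hypothetical number of screener completes
k = int(0.60 * n)            # 300 qualifiers actually observed

# Exact binomial upper tail: chance of seeing >= k qualifiers if
# respondents were answering the screener honestly.
tail = sum(comb(n, i) * p_true**i * (1 - p_true)**(n - i)
           for i in range(k, n + 1))
print(f"P(>= {k} of {n} qualify at 15% incidence) = {tail:.3g}")
# Effectively zero: a 60% qualification rate isn't sampling noise.
# Either the panel is badly unrepresentative or respondents are
# gaming the screener to qualify.
```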
Ericksen: buyer beware with internet panel companies. They are also often just now learning how to deal with litigation and discovery; he wouldn’t be surprised to see some withdraw from the field rather than reveal trade secrets. Big problem: people interested in rewards wind up taking lots of surveys. So he asks for people who have taken fewer than 12 surveys in a year; this creates tension because it takes longer to fill the sample. You can also stratify: only 25% of the sample can have taken a lot of surveys, because those people are probably much more unrepresentative and more likely to notice things.
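(A sketch of the two screening rules Ericksen described: a hard cutoff of fewer than 12 surveys a year, or a stratified draw capping heavy takers at 25% of the sample. The panel records and sizes here are invented for illustration.)

```python
import random

# Hypothetical panel: (panelist_id, surveys_taken_in_past_year).
panel = [(i, random.randint(0, 60)) for i in range(10_000)]

TARGET = 400       # desired completes
CUTOFF = 12        # "fewer than 12 surveys a year" screen
HEAVY_CAP = 0.25   # alternative: cap frequent takers at 25% of sample

light = [p for p in panel if p[1] < CUTOFF]
heavy = [p for p in panel if p[1] >= CUTOFF]

# Rule 1: screen out heavy takers entirely (slower to fill the sample).
strict_sample = random.sample(light, TARGET)

# Rule 2: stratify, allowing at most 25% heavy takers in the draw.
n_heavy = min(int(TARGET * HEAVY_CAP), len(heavy))
quota_sample = random.sample(heavy, n_heavy) + random.sample(light, TARGET - n_heavy)

share_heavy = sum(1 for _, s in quota_sample if s >= CUTOFF) / len(quota_sample)
print(f"stratified sample: {share_heavy:.0%} heavy survey-takers")
```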
Validation: you can ask questions on your survey to compare with panel data collected at the time of recruitment.
Edelman: the title of the survey can be an issue. How the internet panel describes the subject matter to invitees can itself suggest what the survey is about. Even more care is required than in the mall intercept era about every part of the invitation process.
Ericksen: respondents are not robots. Where the question is vague and nonspecific, respondents look to the wording of the questions, even the preceding questions, to create a framework for answering.
Bernstein: in-room example: ask people first whether they’re generally happy, and next whether they’re happy with their relationship with their spouse/significant other. Then reverse the order with a different group. There are differences! In actual research, asking people about general happiness first makes them more likely to say they are very happy, compared to respondents asked about marital happiness first. And people asked about general happiness first were then more likely to say they were very happy in their marriage, though that difference was a bit smaller. Thinking about marriage helps you define general happiness.
Ericksen: similarly, starting with a low number can change results. Ask people how much TV they watch, with response categories starting at “2½ hours or less per day” v. “½ hour or less.” In our in-room group, only 5% given the low-starting-point scale said they watched more than 2½ hours, while substantially more said so in the high-starting-point condition. How do you define what “watching” is? People feel socially self-conscious about this. In the actual study, in the low-category condition, 84% reported watching up to 2½ hours/day and 16% reported more; in the high-category condition, which gives more permission to watch more, 62.5% reported watching up to 2½ hours/day and 37.5% reported more. This is pure self-reporting.
If people’s experience online doesn’t replicate the real world, it can distort their answers. “What did you think was the main message of the commercial” is a very broad question, and people are going to have to go through a lot of thought to deal with it.
Bernstein: controls are supposed to deal with this, but it’s tricky.
Edelman: technical problems may happen. You may not want people to take the survey on a mobile device where all they can see is a tiny little image; it’s good practice now to ask what device respondents are using.
Ericksen: the ability to remember the stimulus will fade quickly; don’t expect respondents to be able to answer 60 questions.
Bernstein: may be able to keep image on screen.
Ericksen: a human interviewer can keep the respondent motivated to answer questions and bring the respondent back on track. Web surveys create problems with open-ended questions; respondents can get a little lazy. One survey showed no confusion among respondents who answered with fewer than 10 words, but substantial confusion among respondents who did the work. Second problem: you can’t do anything when people just write gibberish. Don’t let the survey run long/tire out your respondent.
Bernstein: what can go wrong without an interviewer: advertising for Bausch & Lomb’s ReNu with MoistureLoc claimed that it lasted for 16 hours. Opti-Free, the market leader, thought that there was an implied message that the market leader didn’t last 16 hours (tagline: “contact lens comfort just took a dramatic turn for the better”). B&L did an internet survey hoping to show that no one got a main message of comparison. But the verbatim responses showed that people had missed the video or didn’t remember the name, which showed that the study was not an effective way to test people’s reactions. If you followed all the rules, though, he thinks NAD would be a willing audience.
Edelman: practice tips on discovery. One of the first times she served a subpoena, she didn’t get the data file from the expert; the internet panel had all the responses. She had to bring a motion to compel to get Harris Interactive to turn over the data, in particular how many surveys per year its respondents take. Make sure the expert takes responsibility for getting the data from the panel to turn over as part of the expert’s production.
Experts are up on the fact that they aren’t supposed to put excessive stuff in writing/email. Internet panels email back and forth all the time; the emails don’t necessarily say much, but you have to be careful. She encourages experts to keep emails to a minimum and to tell the internet people to call to talk; the panels are often surprised that they have to turn stuff over.
Q from Ugurlayan: what steps are being taken to minimize technical glitches?
Ericksen: we haven’t solved it all yet. Often we say “tell me about the video you just saw,” and if respondents didn’t see anything we delete those responses. We also try to make sure they’re not taking the survey on a cellphone. Many problems come from small screens.
Bernstein: in one survey he’s working on now, 25% of responses were thrown out for consistently poor behavior. The survey expert said he didn’t know why; the internet panel threw the data out for him. That’s an indication of a very big problem.