Wednesday, May 16, 2018

Is this "diet" soda script too close to Diet Coke's?

I have to admit, I might expect it to be a Coca-Cola product.  What's more, it's made in the US, not Brazil, and seems to be a copy of Guarana Antarctica, a Brazilian beverage.

But in that sleep what dreams of liability may come?


When you sue a competitor for false advertising, be prepared to get sued back.  In this pair of opinions, most of the parties’ claims against each other survived, paving the way for a messy trial.

GhostBed, Inc. v. Casper Sleep, Inc., 2018 WL 2213002, No. 15-cv-62571-WPD (S.D. Fla. May 3, 2018)

GhostBed and Nature’s Sleep (hereinafter GhostBed), owned by the same family, sued Casper, a competitor in the online mattress business, for various causes of action.  Nature’s Sleep alleged that it was among the first in the mattress business to deliver a “bed in a box” direct to consumers: a mattress vacuum-sealed in a box that inflates when the packaging is opened.  Casper did well after its 2014 launch, and in 2015 Nature’s Sleep launched a competing DTC company, GhostBed.

Casper argued that GhostBed copied many of its product features, website design, and marketing techniques, down to the name, GhostBed, “designed for customers to associate the ‘ghost’ name with Casper based on the popular cartoon character ‘Casper the Friendly Ghost.’” Casper thus sued for trademark infringement and false advertising under the Lanham Act, along with related state law claims.

GhostBed accused Casper of intentionally infringing Nature’s Sleep’s “BETTER SLEEP FOR BRIGHTER DAYS” mark and of false advertising; in this opinion, the court granted Casper partial summary judgment on the false advertising claims.

GhostBed registered naturessleep.com (with two ‘s’s). ICS, apparently a known cybersquatter, registered naturesleep.com (one ‘s’). In 2015, Casper allegedly arranged for users who visited the one-‘s’ site to be redirected to Casper’s website. GhostBed argued that this constituted direct or contributory infringement and violated the ACPA.  Casper argued that it didn’t register or use the domain name; AdMarketplace, “a company hired as part of an advertising campaign by Casper, had some role in the redirection” to Casper’s site.  The ACPA imposes liability for “using” a domain name only if a person “is the domain name registrant or that registrant’s authorized licensee.” Multiple factual issues, including damages, precluded summary judgment on these claims.

Likewise, alleged infringement of Nature’s Sleep’s unregistered mark, BETTER SLEEP FOR BRIGHTER DAYS, couldn’t be decided on summary judgment.  Whether Casper’s use of BETTER SLEEP in commerce preceded Nature’s Sleep’s use was disputed.

GhostBed also alleged that Casper engaged in false advertising by: (1) posting false and misleading comments about GhostBed on the internet; (2) coercing mattress reviewers into posting fake, favorable reviews of Casper mattresses on the internet; (3) utilizing search engine optimization techniques to increase visibility of favorable Casper content on the internet; and (4) entering into settlement agreements with three mattress reviewers that resulted in the elimination of negative reviews of Casper content.

These claims failed because, first, GhostBed didn’t provide evidence that Casper posted false/misleading comments about GhostBed. GhostBed argued that Casper’s use of affiliate relationships with online reviewers was “part of a concerted effort to reward reviewers to post favorable reviews and ‘strong-arm’ reviewers into posting fake positive reviews of Casper’s mattresses.” However, GhostBed didn’t prove that this conduct involved false or misleading statements that deceived consumers.  Casper also purchased the Google Ad Word “Ghostbed” and directed that an ad saying “Why Buy a Copycat?” and “Surely you Meant Casper” would appear as a sponsored link in search results when users googled “GhostBed.” “Here, the Lanham Act claim fails because these are not false or misleading statements of fact. Instead, these are advertisements suggesting Casper’s opinion that GhostBed is a copycat and that the consumer should also investigate Casper’s mattress.”

GhostBed argued that Casper manipulated search results with negative SEO techniques that caused favorable Casper mattress reviews to appear higher in search results and unfavorable Casper reviews to appear lower.  But this “common marketing strategy” wasn’t an actionable false or misleading “statement.”  So too with entering into settlement agreements with online mattress reviewers to remove negative reviews of Casper mattresses.

Ghostbed, Inc. v. Casper Sleep, Inc., 2018 WL 2213008, No. 15-cv-62571-WPD (S.D. Fla. May 3, 2018)

Here, the court denies GhostBed’s motion for summary judgment on Casper’s claims for trademark infringement/false advertising.

Casper alleged that GhostBed used Casper’s name in social media posts, creating a likelihood of customer confusion, and that a Google AdWords campaign stating “GhostBed vs. The Competition—Pick your Ghost Carefully” contributed to consumer confusion by associating Casper with “Casper the Friendly Ghost.” Use of the trademark “GhostBed” also allegedly caused consumer confusion with the trademark “Casper.” Given Casper’s numerous allegations of consumer confusion, GhostBed’s argument that the confusion was de minimis was a question for trial.

Whether GhostBed’s use of the phrase “SuperNATURAL Comfort” misled consumers into believing that GhostBed mattresses are made from all-natural fibers, or just suggested a connection with the “ghost” in “GhostBed,” was a question for the factfinder at trial.  So too with whether GhostBed’s claim of being in business for 15 years was true because it could legitimately attach its length in business to that of its related company, Nature’s Sleep. There were also factual issues about whether GhostBed falsely represented reviews as coming from “Verified Purchaser[s]” on Amazon.com when GhostBed practically gave the product to the reviewer for free (at a 99% discount), in violation of the terms of use defining a “Verified Purchaser.”

In a slightly different scenario, GhostBed’s “GhostBed vs. Casper Mattress Review” stated that Casper didn’t offer a matching mattress foundation. This statement was initially true when made, in April 2016, and was updated at some point after GhostBed became aware that the statement was no longer true, but it was unclear whether GhostBed timely corrected the statement once it became false. “While Plaintiffs do not have an obligation to monitor a competitor’s offerings minute-to-minute to correct a comparison that may later become untrue, Plaintiffs do have an obligation not to make misleading statements in advertising. A fact finder could find that a substantial delay, if there was one, in correcting a statement that became untrue, was misleading.”  This is actually more defendant-favorable than other rulings on the subject, which do find falsity the moment the claim becomes false (although of course the amount of damages from a short-term falsity may be limited).

Finally, an image on GhostBed’s website depicted the Google logo and falsely reported that GhostBed had a 4.99 rating (a non-existent rating). The creator stated that it was designed to poke fun at Casper’s purported 4.9 rating—“they have a 4.9 rating. I put ours at 4.99.” Misleadingness and damages were factual issues.

Other claims were only raised as state law (FDUTPA) claims. Casper targeted an article written by non-party Ryan Monahan of Honest Reviews, LLC, a purported affiliate of GhostBed: “Casper’s Newest Product Might Be at the Expense of Animal Cruelty.” The article could suggest that Casper sources its down feathers from suppliers who “live pluck” birds, but again this was a factual issue, as was whether GhostBed “used social media to harass Casper’s customers who posted comments about Casper’s mattresses online” in a way that was unfair or deceptive under FDUTPA.

Tuesday, May 08, 2018

TM exam question: the right of publicity v. comparative advertising

What if Coco Chanel had been the plaintiff in Smith v. Chanel?  This question made me very happy, and I got a bunch of interesting answers on my final:

Kim Kardashian is famous for being famous. She is a highly successful influencer whose Instagram endorsements cost hundreds of thousands of dollars. She has lent her name to a perfume, KARDASHIAN BY KIM. Beautified also sells perfume. Beautified begins an ad campaign that states, “If you like Kardashian by Kim, you’ll love Beautified, with the same yummy smell but a lower price!” Assume there are no choice of law or other procedural issues. Explain why Beautified is or is not liable on Kardashian’s right of publicity claim under California law.

exercise company affiliation and ad revenue don't make diet review into commercial speech


GOLO, Inc. v. HighYa, LLC, 2018 WL 2086733, No. 17-2714 (E.D. Pa. May 4, 2018)

The court here declines to apply the Lanham Act to “companies that generate income through websites that review the products of others, without selling any products of their own.” GOLO sells a weight loss dieting program that can be purchased through its website. Defendants are review websites that purportedly assist consumers; HighYa has a marketing affiliation with a limited number of suppliers (e.g., BowFlex Max Trainer), but both defendants’ principal source of revenue comes from ads.  GOLO contested the fairness and accuracy of defendants’ online reviews, leading to revision on one site and removal on the other, but GOLO wanted to recover for the initial period.

Defendants’ editorial reviews principally rely on “publicly available information,” rather than defendants’ own use or testing. GOLO’s website contained a description of its program, backed by references to research purportedly supporting the merits of the program. Defendants’ editorial reviews primarily, if not exclusively, critiqued the statements in that description. HighYa’s editorial review spurred dozens of comments from purported users, with an average customer rating of 2.8 out of 5 stars. The link “was posted” across different social media platforms, one of which contained the statement: “Weight-loss #scams are everywhere. Is GOLO one of them?”

GOLO alleged that the title, “GOLO Weight Loss Diet Reviews – Is it a Scam or Legit?” was misleading; that much of the review was based on an outdated version of the GOLO program site; and that the focus of the GOLO program was not simply combatting “insulin resistance,” as the review stated. The challenged portions were eventually removed.

The BrightReview article appeared in a similar form. The average customer rating was 2 out of 5 stars, with three purported users giving “highly negative ‘reviews.’ ” GOLO challenged statements about its study evidence and claims.

GOLO alleged that the websites were “designed to appear trustworthy, [and to] resemble internet versions of more traditional consumer review publications” but were owned by or secretly related to competitors of the makers of the products defendants review.

False advertising and false association claims only apply to commercial speech. Though there was a specific product reference, the articles still weren’t ads.  On their face, the reviews didn’t promote any competing product, and didn’t explicitly propose a commercial transaction. The court analogized to Tobinick v. Novella, 848 F.3d 935 (11th Cir. 2017). As there, the defendants “gained no direct economic benefit from readers of the reviews’ decision,” and “[t]he content of the reviews had no direct bearing on the revenue generated by traffic to the site.”  To the extent that the reviews were based only on the content of GOLO’s website, “[t]he value of such a review to consumers may be limited,” but that didn’t make it an ad.  Ad-based financial benefit was merely incidental to the content.

The Lanham Act does allow liability “if websites purporting to offer reviews are in reality stealth operations intended to disparage a competitor’s product while posing as a neutral third party.”  However, GOLO hadn’t plausibly pleaded that these review sites were shams.

Although “in the absence of discovery, a plaintiff’s ability to confirm what might be well-founded suspicion is limited,” that wasn’t enough here.  The court considered the general content of the sites, including the fact that defendants responded to GOLO’s objections by amending the reviews and specifically advising readers that changes to the reviews were based on further information provided by GOLO. “Such conduct does not plausibly support an inference that the purpose of the reviews is to create an advantage for competing products.” Defendants also disclosed the commercial relationship with BowFlex and other commercial affiliations, which made the allegedly covert competition less plausible.  And to the extent that GOLO pled that defendants’ revenues were a product of web traffic, the favorable/unfavorable nature of a review seemed irrelevant; sellers might even promote favorable reviews.

Nor did the affiliation with BowFlex render this a Lexmark situation in which “one competitor directly injures another by making false statements about his own goods or the competitor’s goods and thus inducing customers to switch.” “The review discussing GOLO’s dieting program does not at all reference, or provide a direct link to any exercise equipment, let alone to Bowflex.” Even if there were a prompt to try exercise, it doesn’t follow that diet and exercise compete; GOLO designed its program to work with exercise.  While direct commercial competition isn’t an “absolute” requirement, these observations bore on the plausibility of the conclusory allegation that defendants’ websites were covert competitors.

With Lanham Act false advertising and coordinate state claims out of the way, only a Pennsylvania trade libel claim remained.  But Pennsylvania has a one-year statute of limitations for trade libel claims, running from the date of the first publication. GOLO alleged that HighYa’s initial review was posted in “March 2016,” and GOLO didn’t file suit until June 16, 2017. GOLO argued that the revised version of the article was published within the limitations period, and that it was re-published when HighYa posted links to it through its social media accounts. But the only HighYa social media post referenced dated back more than a year before filing, and GOLO didn’t object to the revised article.

As for user comments, GOLO’s allegation that HighYa was the true source of the comments “on information and belief” was insufficient in the context of the other allegations.

As to BrightReviews, GOLO didn’t adequately plead falsity. Each challenged statement was prefaced with language indicating that it was an observation based primarily on GOLO’s website: “‘The 2010 study [was] performed with diabetics, not otherwise healthy individuals looking to optimize insulin...[T]his seems to be their target market;...None of [GOLO’s] studies appear to be peer reviewed for accuracy...;...and [W]e didn’t encounter any clinical evidence on leading medical websites...that directly linked insulin management...and weight loss.’” Though GOLO argued that these statements were inaccurate, it didn’t address whether those observations could reasonably and fairly have been made based upon the information posted on its website at the time.

GOLO also argued that the reviews created a false impression that its product was a scam, citing the low average user rating; HighYa’s Twitter post, which stated, “Weight-loss #scams are everywhere. Is GOLO one of them?”; the initial title of the article, “GOLO Weight Loss Diet Reviews – Is it a Scam or Legit?”; and the fact that the reviews would appear prominently in web searches for GOLO. But in the context of the review, the court didn’t see an accusation of “a scam in the illegal, fraudulent sense, as compared to communicating that the product might not produce its intended result.”


Monday, May 07, 2018

Content Moderation at Scale, 2/2


You Make the Call: Audience Interactive (with a trigger warning for content requiring moderation)

Emma Llanso, Center for Democracy & Technology & Mike Masnick, Techdirt

Hypo: “Grand Wizard Smith,” w/user photo of a person in a KKK hood, posts a notice for the annual adopt-a-highway cleanup project.  TOS bans organized hate groups that advocate violence.  This post is flagged for review.  What to do?  A majority wanted takedown: 12 said leave it up, 12 said flag (leave up w/a content warning), 18 said escalate, and over 40 said take down.  Take down: he’s a member of the KKK.  Keep up: he’s not a verified identity; it doesn’t say KKK and requires a cultural reference point to know what the hood means/what a grand wizard is.  Escalate: if the moderator can only ban the post, the real problem is the user/the account, so you may need to escalate to get rid of the account.

Hypo: “glassesguru123” says same sex marriage is great, love is love, but what do I know, I’m just a f----t.  Flagged for hate speech. What to do?  83 said leave it up; 5 for flag, 2 escalate, 1 take it down.  Comment: In Germany, you take down some words regardless of context, so it may depend on what law you’re applying.  Most people who would leave it up are adding context: the slur isn’t being used in a hateful manner. But strictly by the policy, it raises issues, which is why some flag it.

Hypo: “Janie, gonna get you, bitch, gun emoji, gun emoji, is that PFA thick enough to stop a bullet if you fold it up & put it in your pocket?”  What to do? 57 take it down, 27 escalate, and 1 said leave it up/flag the content.  For escalate: need a subject matter expert to figure out what a PFA is.  [Protection From Abuse order.] Language taken from the Supreme Court case about what constituted a threat.  I wondered whether there were any rap lyrics, but decided that it was worrisome enough even if those were lyrics.  Another argument for escalation: check if these are lyrics/if there’s an identifiable person “Janie.” [How you’d figure that out remains unclear to me—maybe you’ll be able to confirm that there is a Janie by looking at other posts, but if you don’t see mention of her you still don’t know she doesn’t exist.]  Q: threat of violence—should it matter whether the person is famous or just an ex?

Hypo: photo of infant nursing at human breast with invitation to join breast milk network.  Flagged for depictions of nudity. What to do? 65 said leave it up, 13 said flag the content, 5 said escalate, and 1 said take it down.  Nipple wasn’t showing (which suggests uncertainty about what should happen if the baby’s latch were different/the woman’s nipple were larger).  Free speech concerns: one speaker pointed that out and said that this was about free speech being embodied—political or artistic expression against body shame.  You have this keep-it-up sentiment now but that wasn’t true on FB in the past.  Policy v. person applying the policy.

Hypo: jenniferjames posts a site that links to Harvey Weinstein’s information: home phone, emails, everything— “you know what to do: get justice” Policy: you may not post personal information about others without their consent.  This one was the first that I found genuinely hard.  It seemed to be inciting, but not posting directly and thus not within the literal terms of the policy. I voted to escalate.  Noteworthy: fewer people voted. Plurality voted to escalate; substantial number said to take it down, and some said to leave it/flag it.  One possibility: the other site might have that info by consent!  Another response would block everything from that website (which is supposed to host personal info for lots of people).

Hypo: Verified world leader tweets: “only one way to get through to Rocket Man—with our powerful nukes. Boom boom boom. Get ready for it!”  Policy: no specific credible threats.  I think it’s a cop out to say it’s not a credible threat, though that doesn’t mean there’s a high probability he’ll follow up on it. I don’t think high probability is ordinarily part of the definition of a credible threat. But this is not an ordinary situation, so. Whatever it is, I’m sure it’s above my pay grade if I’m the initial screener: escalate. Plurality: leave it up. Significant number: escalate.  Smaller number of flag/deletes.  Another person said that this threat couldn’t be credible b/c of its source; still, he said, there shouldn’t be a presidential exception—there must be something he could say that could cross the line. Same guy: Theresa May’s threat should be treated differently.  Paul Alan Levy: read the policy narrowly: a threat directed to a country, not an individual or group.

Hypo: Global Center for Nonviolence posts a video, with a thumbnail showing a mass grave. Caption from the source: “slaughter in Duma.”  “A victorious scene today” is another caption, apparently from another source. I wasn’t sure whether “victorious” could be read as biting sarcasm. Escalate for help from an area expert. Most divided—the most popular responses were flag or escalate, but substantial #s of leave it up and take it down too. The original video maybe could be interpreted as glorifying violence, but sharing it to inform people doesn’t violate the policy, and awareness is important. The original post also needs separate review. If you take down the original video, though, then the Center’s post gets stripped of content. Another argument: don’t censor characterizations of victory v. defeat; compare to Bush’s “Mission Accomplished” when there were hundreds of thousands of Iraqis dead.

Hypo: Johnnyblingbling: ready to party—rocket ship, rocket ship, hit me up mobile phone; email from City police department: says it’s a fake profile in the name of a local drug kingpin. Only way we can get him, his drugs, and his guns off the street. Policy: no impersonation; parody is ok. Escalate because this is a policy decision: if I am supposed to apply the policy as written then it’s easy and I delete the profile (assuming this too doesn’t require escalation; if it does I escalate for that purpose). But is the policy supposed to cover official impersonation?  [My inclination would be yes, but I would think that you’d want to make that decision at the policy level.] 41 said escalate, 22 take down, 7 leave it up, 1 flag. Violate user trust by creating special exceptions.  Goldman points out that you should verify that the sender of the email was authentic: people do fake these.  Levy said there might be an implicit law enforcement exception. But that’s true of many of these rules—context might lead to implicit exceptions v. reading the rules strictly.

1:50 – 2:35 pm: Content Moderation and Law Enforcement
Clara Tsao, Chief Technology Officer, Interagency Countering Violent Extremism Task Force, Department of Homeland Security

Jacob Rogers, Wikimedia Foundation: works w/LE requests received by Foundation. We may not be representative of different companies b/c we are small & receive a small number of requests that vary in what they ask for—readership over a period of time v. individual info. Sometimes we only have IP address; sometimes we negotiate to narrow requests to avoid revealing unnecessary info.

Pablo Peláez, Europol Representative to the United States: Cybercrime unit is interested in hate speech & propaganda. 
 
Dan Sutherland, Associate General Counsel, National Protection & Programs Directorate, U.S. Department of Homeland Security: Leader of a “countering foreign influence” task force. Works closely w/FBI but not in a LE space.  Constitution/1A: protects things including simply visiting foreign websites supporting terror.  Gov’t influencing/coercing speech is something we’re not comfortable with. Privacy Act & w/in our dep’t Congress has built into the structure a Chief Privacy Officer/Privacy Office. Sutherland was formerly Chief Officer for Civil Rights/Civil Liberties.  These are resourced offices w/in the dep’t and influence issues.  DHS is all about info sharing, including sensitive security information shared by companies.

Peláez: Europol isn’t working on foreign influence. Relies on member states; referrals go through national authorities.  EU Internet Forum brings together decisionmakers from states and private industry. About 150-160 platforms that they’ve looked at; in contact w/about 80. Set up internet referral management tool to access the different companies.  Able to analyze more than 54,000 leads.  82% success rate.

Rogers: subset of easy LE requests for Wikipedia & other moderated platforms—fraudulent/deceptive, clearly threats/calls to violence. Both of those, there is general agreement that we don’t want them around. Some of this can feed back into machine learning.  Those tools are imperfect, but can help find/respond to issues. More difficult: where info is accurate, newsworthy, not a clear call to violence: e.g., writings of various clerics that are used by some to justify violence. Our model is community based and allows the community to choose to maintain lawful content.

LE identification requests fall into 2 categories: (1) people clearly engaged in wrongdoing; we help as we can given technical limits.  (2) Fishing expeditions, made b/c gov’t isn’t sure what info is there. The company’s responsibility is to educate/work w/the requester to determine what’s desired and to protect the rights of users where that’s at issue.

YT started linking to Wikipedia for controversial videos; FB has also started doing that.  That is useful; we’ll see what happens.

Sutherland: We aren’t approaching foreign influence as a LE agency like FBI does, seeking info about accounts under investigation or seeking to have sites/info taken down. Instead, we support stakeholders in understanding scope & scale & identifying actions they can take against it. Targeted Action Days: one big platform or several smaller—we focus on them and they get info on content they must remove. 

Peláez: we are producing guidelines so we understand what companies need to make requests effective.  Toolkit w/18 different open source tools that will allow OSPs and LE to identify and detect content.

What Machines Are, and Aren’t, Good At
Jesse Blumenthal, Charles Koch Institute: begins with a discussion that reminds me of this xkcd cartoon.

Frank Carey, Machine Learning Engineer, Vimeo: important to set the threshold for success up front. 80% might be ok if you know that going in.  Spam fighting: video spam, looks like a movie but then black screen + link + go to this site for full download for the rest of the 2 hours.  Very visual example; could do text recognition.  These are adversarial examples. Content moderation isn’t usually about making money (on our site)—but spam was, and we were vastly outnumbered by the spammers.  Machine learning is being used to generate the content.  It’s an arms race. The success threshold is thus important.  We had a great model with a low false positive rate, and we needed that b/c if it was even .1% that would be thousands of accounts/day. But as we’d implement these models, they’d go through QA, and within days people would change tactics and try something else. We needed to automate our automation so it could learn on the fly.
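A minimal sketch of the kind of up-front threshold-setting Carey describes—assuming a spam classifier that outputs confidence scores; the data, target rate, and names are invented, not Vimeo’s actual system:

    # Hypothetical: pick the decision threshold that catches the most spam
    # while keeping the false positive rate under a budget set up front,
    # since even a .1% FPR can mean thousands of good accounts per day.
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=10_000)  # fake labels: 0 = legit, 1 = spam
    y_score = np.clip(rng.normal(0.3 + 0.5 * y_true, 0.2), 0, 1)  # fake model scores

    TARGET_FPR = 0.001  # the "success threshold" chosen before deployment

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    within_budget = fpr <= TARGET_FPR
    cutoff = thresholds[within_budget][-1]  # lowest cutoff still within budget
    print(f"cutoff={cutoff:.3f}, FPR={fpr[within_budget][-1]:.4%}, "
          f"spam caught={tpr[within_budget][-1]:.1%}")

The point of the exercise: you decide the tolerable false-positive budget first, then read the achievable catch rate off the validation data, rather than the other way around.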

Casey Burton, Match: machines can pick up some signs like 100 posts/minute really easily but not others. Machines are good at ordering things for review—high and low priority.  Tool to assist human reviewers rather than the end of the process. [I just finished a book, Our Robots, Ourselves, drawing this same conclusion about computer-assisted piloting and driving.]

Peter Stern, Facebook: Agrees. We’re now good at spam, fake accounts, nudity and remove it quickly.  Important areas that are more complicated: terrorism.  Blog posts about how we’ve used automation in service of our efforts—a combo of automation and human review.  A lot of video/propaganda coming from official terrorist channels—removed almost 2 million instances of ISIS/Al Qaeda propaganda; 99% removed before it was seen. We want to allow counterspeech—we know terror images get shared to condemn. Where we find terror accounts we fan out for other accounts—look for shared addresses, shared devices, shared friends. Recidivism: we’ve gotten better at identifying the same bad guy with a new account. Suicide prevention has been a big focus. Now using pattern recognition to identify suicidal ideation and have humans take a look to see whether we can send resources or even contact LE.  Graphic violence: can now put up warning screens, allow people to control their experience on the platform.  More difficult: for the foreseeable future, hate speech will require human judgment. We have started to bubble up slurs for reviewers to look at w/o removing it—that has been helpful.  Getting more eyes on the right stuff. Text is typically more difficult to interpret than images.

Burton: text overlays over images challenged us. You can OCR that relatively easily, but it is an arms race. So now you get a lot of different types of text designed to fool the machine.  Machines aren’t good at nuance.  We don’t get too much political, but we see a lot of very specific requests about who they want to date—“only whites” or “only blacks.”  Where do you draw the line on deviant sexual behavior? Always a place for human review, no matter how good your algorithms.

Carey: Rule of thumb: if it’s something you can do in under a second, like nudity detection, machine learning will be good at it.  If you have to think through the context, and know a bunch about the world, like what the KKK is and how to recognize the hood, that will be hard—but maybe you can get 80% of the way.  The challenge is adversarial actors.  Laser beam: if they move a little to the left, the laser doesn’t hit them any more. So we create two nets, narrow and wide. Narrow: very low false positive rate. The wider net goes to a review queue.  You can look at confidence scores, how the model is trained, etc.
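Carey’s “two nets” amounts to two-threshold triage on a confidence score. A sketch with invented thresholds (not Vimeo’s actual values):

    # Hypothetical two-net triage: the narrow net (very high confidence) is
    # acted on automatically; the wide net goes to the human review queue.
    AUTO_NET = 0.995    # narrow: tuned for a very low false positive rate
    REVIEW_NET = 0.80   # wide: mistakes are cheap b/c a human decides

    def triage(confidence: float) -> str:
        """Map a model confidence score to a moderation action."""
        if confidence >= AUTO_NET:
            return "auto-remove"
        if confidence >= REVIEW_NET:
            return "human review queue"
        return "allow"

    for score in (0.999, 0.9, 0.5):
        print(score, "->", triage(score))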

Ryan Kennedy, Twitch?: You always need the human element.  Where are your adversaries headed?  Your reviewers are R&D.

Burton: Humans make mistakes too. There will be disagreement or just errors, clicking the wrong button, and even a very low error rate will mean a bunch of bad stuff up and good stuff down. 

Blumenthal: we tend not to forgive machines when they err, but we do forgive humans. What is an acceptable error rate?

Carey: if 1-2% of the time, you miss emails that end up in your spam folder, that can be very bad for the user, even if it’s a low error rate.  For cancer screening, you’re willing to accept a high false positive rate.  [But see mammogram recommendations.] 

Stern, in response to a Q about diversity: We are seeking to build a diverse reviewer base, whose work is used for the machine learning that builds classifications.  Also seeking diversity on the policy team, b/c that’s also an issue in linedrawing. When we are doing work to create labels, we try to be very careful about whether we’re seeing outlying results from any individual—that may be a signal that somebody needs more education. We also try to be very detailed and objective in the tasks that we set for the labelers, to avoid subjective judgments of any kind.  Not “sexually suggestive” but do you see a swimsuit + whatever else might go into the thing we’re trying to build. We are also building a classifier from user flagging.  User reports matter, and one reason is that they help us get signals we can use to build out the process.

Kennedy, in response to Q about role of tech in dealing w/ live stream & live chat: snap decisions are required; need machines to help manage it.

Carey: bias in the workforce is an issue but so is implicit bias in the data; everyone in this space should be aware of that. Training sets: there’s a lot of white American bias toward the people in photos.  Nude photos are mostly of women, not men. You have to make sure you’re thinking about those things as you put these systems in place.  Similar thing w/WordNet, a list of synonyms infected w/gender bias. English bias is also a thing.

Q: outsourced/out of the box solutions to close the resource gap b/t smaller services and FB: costs and benefits?

Burton: vendors are helpful.  Google Vision has good tools to find & take down nudity.  That said, you need to take a look and say what’s really affecting our platform.  No one else is going to care about your issues as much as you do.

Carey: team issues; need for lots of data to train on, like fraud data; for Vimeo, nudity detection was a special issue b/c we don’t have a zero nudity policy.  We needed to ID levels of nudity—pornographic v. HBO. We trained our own model that did pretty well. Then you can add human review. But off the shelf models didn’t allow that.  Twitch may have unique memes—site tastes are different.  Vendors can be great for getting off the ground, but they might not catch new things or might catch too many given the context of your site.

Kennedy: vendors can get you off the ground, but we have Twitch-specific language.  Industry standards can be helpful, raising all ships around content moderation.  [I’d love to hear from someone from reddit or the like here.]

Q re automation in communication/appeals: Stern says we’re trying to improve. It’s important for people to understand why something did/didn’t get taken down. In most instances, you get a communication from us about why there was a takedown. Appeals are really important—allow more confidence in the process b/c you know mistakes can be corrected.  Always a conundrum about enabling evasion, but we believe in transparency and want to show people how we’re interacting w/their content. If we show them where the line is, we hope they know not to cross.

Burton: There are ways to treat bots differently than humans: don’t need to give them notice & can put them in purgatory. We keep info at a high level to avoid people tracking back the person who reported them and going after them.

Transparency
David Post, Cato Institute

Kaitlin Sullivan, Facebook: we care about safety, voice, and fairness: trust in our decisionmaking process even if you don’t always agree w/it. Transparency is a way to gain your trust.  The new iteration of our Community Standards is now public w/the full definition of “nudity” that our reviewers use. We also want to explain why we’re using these standards. You may not agree that female nipples shouldn’t be allowed (subject to exceptions such as health contexts) but at least you should be able to understand the rule.  Called us “constituents,” which I found super interesting.  Users should be able to tell whether there is an enforcement error or a policy decision.  We also are investing more in appeals; we used to have appeals just for accounts, groups, and pages. We’ve been experimenting w/individual content reviews, and now we have an increased commitment to that.  We hope soon to have numbers beyond IP, gov’t requests, and terror content.

Kevin Koehler, Automattic: 30% of internet sites use WordPress, though we don’t host them all. Transparency report lists what sites we geoblock due to local law & how we respond to gov’t requests. We try to write/blog as much as we can about these issues to give context to the raw numbers. Copyright reports have doubled since 2015; gov’t info requests have tripled; gov’t takedowns have gone up 145x from what they once were. Largely driven by Russia, former Soviet republics, and Turkey; but countries that we never heard from before are also sending notices, sometimes in polite and sometimes in threatening terms.

Alex Walden, Google: values freedom of expression, opportunity, and ability to belong.  400 hours of content uploaded every minute. Doubling down on machine learning, particularly for terrorist content. Including experts as part of how we ID content is key.  Users across the board are flagging lots of content; the accuracy rates of ordinary users are relatively low, while trusted flaggers are relatively high in accuracy. 8 million videos removed for violating community guidelines, 80% flagged by machine learning. Flag → human review. Committed to 10,000 reviewers in 2018.  Spam detection has informed how we deal w/other content.  Also dealing w/scale by focusing on content we’ve already taken down, preventing its reupload.  Also important that there’s an appeals process. New user dashboard also shows users where flagged content is in the review process—was available to trusted flaggers, but is now available to others as well.

Rebecca MacKinnon, New America’s Open Technology Institute: Deletions can be confusing and disorienting. Gov’ts claim to have special channels to Twitter, FB to get things taken down; people on the ground don’t know if that’s true. Transparency reports are for official gov’t demands but it’s not clear whether gov’ts get to be trusted flaggers or why some content is going down. Civil society and human rights are under attack in many countries—lack of transparency on platforms destroys trust and adds to sense of lack of control.

Human rights aren’t measured by lack of rules; that’s the state of nature, nasty, brutish, and short. We look to see whether companies respect freedom of expression. We expect that the rules are clear, that the governed know what the rules are and have an ability to provide input into them, and that there is transparency and accountability about how the rules are enforced.  Also looking for impact assessment: looking for companies to produce data about the volume and nature of information that’s been deleted or restricted to enforce TOS and in response to external requests.  Also looking in governance for whether there’s human rights impact assessment.  More info on superusers/trusted flaggers is necessary to understand who’s doing what to whom. We’re seeing increasing disclosure about process over time.

If the quality of content moderation remains the same, then more journalists and activists will be caught in the crossfire.  More transparency for gov’ts and people could allow conversations w/stakeholders who can help w/better solutions.

Koehler: reminder that civil society groups may not be active in some countries; fan groups may value their community very strongly and so appeals are an important way of getting feedback that might not otherwise be available.  Scale is the challenge. 

Post asked about transparency v. gaming the system/machine learning. [The stated concern about disclosing detection mechanisms as part of transparency doesn’t seem very plausible for most of the stuff we’re talking about.  Last session’s point about informing bots v. informing people was a very good one, and “flagged as © infringement” is often pretty clear without disclosing how it was flagged.]

Sullivan: gaming the system is often known as “following the rules,” and we want people to follow the rules. They are allowed to get as close to the line as they can as long as they don’t go over it.  Can we give people detailed reasons with automated removal?  We have improved the information we have reviewers identify—we ask reviewers why something should be removed, for internal tracking as well as so that the user can be informed.  A machine can say it has 99% confidence that a post matches bad content, but being transparent about that would be a different matter.

Koehler: the content/context that a user needs to tell you the machine is doing it wrong is not the same content that the machine needs to identify content for removal: nudity as a protest, for example.

Content Moderation at Scale, DC Version


Foundations: The Legal and Public Policy Framework for Content

Eric Goldman gave a spirited overview of 230 and related rules, including his outrage at the canard that federal criminal law hadn’t applied to websites until recently—he pointed out that online gambling and drug ads had been enforced, and that Backpage was shut down based on conduct that had always been illegal despite section 230.  Also a FOSTA/SESTA rant, including about supplementing federal prosecutors with state prosecutors with various motivations: new enforcers, new focus on knowledge which used to be irrelevant, and new ambiguities about what’s covered.

Tiffany Li, Yale ISP Fellow: Wikimedia/YLS initiative on intermediaries: Global perspective: a few basic issues. The US is relatively unusual in having a strong intermediary liability framework. In many countries there aren’t even internet-specific laws, much less intermediary-specific ones.  Defamation, IP, speech & expression, & privacy are all regulated.  Legal issues outside content are also important: jurisdiction, competition, and trade. Extremist content, privacy, child protection, hate speech, fake news—all important around the world.

EU is a leader in creating law (descriptive, not normative claim).  There is a right to receive information, but when rights clash, free speech often loses out. (RTBF, etc.)  E-Commerce Directive: no general monitoring obligation.  Draft copyright directive requires (contradictorily) measures to prevent infringement.  GDPR (argh).  Terrorism Directive—similar to anti-material support to terror provisions in US.  Hate speech regulations.  Hate speech is understood differently in the EU. Germany criminalizes a form of speech US companies don’t understand: obviously illegal speech; high fines & short notice & takedown period.  AV Services directive—proposed changes for disability rights.  UK defamation is particularly strong compared to US.  New case: Lewis v. FB, in which someone is suing FB for false ads w/his name or image.

Latin America: human rights framework is different.  Generally, many free expression laws but also regulation requiring takedown.  Innovative as to intermediary liability but also many legislative threats to intermediaries, especially social media.

Asia: less intermediary law generally.  India has solid precedent on intermediary liability: restrictions on intermediaries and internet websites are subject to freedom of speech protections.  China: developing legal system. Draft e-commerce law tries to put in © specifically, as well as something similar to the RTBF. Singapore: proposed law to criminalize fake news.  Privacy & fake news are often wedges for govts to propose/enact greater regulation generally.

Should any one country be able to regulate the entire world?  US tech industry is exporting US values like free speech.

Under the Hood: UGC Moderation (Part 1)
Casey Burton, Match: Multiple brands/platforms: Tinder, Match, Black People Meet.  Over 300 people involved in community & content moderation issues, both in house and outsourced. 15 people do anti-fraud at match.com; 30 are engaged fulltime in content moderation in different countries.  Done by brand, each of which has written guidelines.  Special considerations: their platforms are generally where people who don’t already know each other meet. Give reporters of bad behavior the benefit of the doubt.  Zero tolerance for bad behavior.  Also not a place for political speech; not a general use site: users have only one thing on their minds. If your content is not obviously working towards that goal you & your content will be removed. Also use some automated/human review for behavior—if you try to send 100 messages in the first minute, you’re probably a bot.  And some users take the mission of the site to heart and report bad actions. Section 230 enables us to do the moderation we want.

Becky Foley, TripAdvisor: Fraud is separate from content moderation—reviews intended to boost or vandalize a ranking.  Millions of reviews and photos.  Have little to no upfront moderation; rely on users to report. Reviews go through an initial set of complex machine learning algorithms, filters, etc. to determine whether they’re safe to be posted. A small percentage are deemed unsafe and go to the team for manual review prior to publication. Less than 1% of reviews get reported after they’re posted.  Local language experts are important.  Relevance is also important to us, uniquely b/c we’re a travel site.  We need to determine how much of a review can go off the main focus.  E.g., someone reviews a local fish & chips shop & then talks about a better place down the street: we will try to decide how much additional content is relevant to the review.

Health, safety & discrimination committee which includes PR and legal as well as content: goal is to make sure that content related to these topics is available to travelers so they’re aware of issues. There’s nobody from sales on that committee. Strict separation from commerce side.

Dale Harvey, Twitter: Behavior moderation, which is different from content moderation. Given size, we know there’s stuff we don’t know. In a billion tweets, 99.99% ok is 100,000 not ok, and that’s our week. Many different teams, including information quality, IP/identity, threats, spam, fraud.  Contributors: have a voice but not a vote—may be subject matter experts, members of Trust & Safety Council—organizations/NGOs from around the world, or other external or internal experts.

Best practices: employee resilience efforts as a feature. The people we deal with are doing bad things; it’s not always pleasant. Counseling may be mandatory; you may not realize the impact or you may feel bravado.  Fully disclose to potential employees if they’re potentially going to encounter this.  Cultural context trainings: Silicon Valley is not the world.  Regular cadence of refreshers and updates so you don’t get lost.  Cross functional collaborations & partnerships, mentioned above.  Growth mindset.

Shireen Keen, Twitch: real time interactions. Live chat responds to broadcast and vice versa, increasing the moderation challenge. Core values: creators first.  Trust and safety to help creators succeed. When you have toxicity/bad behavior, you lose users and creators need users on their channels. Moderation/trust & safety as good business. Community guidelines overlay the TOS, indicating expectations.  Tools for user reporting, processing, Audible Magic filtering for music, machine learning for chat filtering. Goal: consistent enforcement.  5 minute SLA for content.  

Gaming focus allowed them to short circuit many policy issues because if it wasn’t gaming content it wasn’t welcome, but that has changed. 2015 launched category “creative,” still defining what was allowed. Over time have opened it further—“IRL” which can be almost anything.  Early guidelines used a lot of gaming language; had to change that.  All reported incidents are reviewed by human monitors—need to know gaming history and lingo, how video and chat are interacting, etc.  Moderators come from the community. Creators often monitor/appoint moderators for their own channels, which reduces what Twitch staff has to deal with. Automated detection, spam autodetection, auto-mod—creator can choose level of auto-moderation for their channel. 

Sean McGillivray, Vimeo: largest ad-free open video platform, 70 million users in 150 countries.  A place for intentional videos, not accidental (though they’ll take those too).  No porn.  [Now I really want to hear from a Tube site operator about how it does content moderation.]  Wants to avoid being blocked in any jurisdiction while respecting free speech.  5 person team (about half legal background, half community moderation background) + developer, working w/others including community support, machine learning.  We get some notices about extremist content, some demands from censorship bodies around the world. We have algorithmic detection of everything from keywords to user behavior (velocity from signup → action).  Some auto-mod for easy things like spam and rips of TV shows. Some proactive investigation, though the balance tips in favor of user flagging. We may use that as a springboard depending on the type of content. Find every account that interacted w/ a piece of content to take down networks of related accounts—for child porn, extremist content.  We can scrub through footage pretty quickly for many things. 

There are definitely edge cases/outliers/oddballs, which is usually what drives a decision to update/add new policy/tweak existing policy.  When new policy has to be made it can go to the top, including “O.G. Vimeans”—people who’ve been w/the community from the beginning.  If there’s disagreement it can escalate, but usually if you kill it, you clean it: if user appeals/complains, you explain.  If you can’t explain why you took it down, you probably shouldn’t have taken it down.  There’s remediation—if we think an account can be saved, if they show willingness to change behavior or explain how they misunderstood the guidelines, there’s no reason not to reverse a decision. We’re not parents and we don’t say “because I said so.”

Challenges: we do allow nudity and some sexual content, as long as it serves an artistic, narrative or documentary purpose. We have always been that way, and so we have to know it when we see it. He might go for something more binary, but that’s where we are. We make a lot of decisions based on internal and external guidelines that can appear subjective (our nipple appearance/timing index).  Scale is an issue; we aren’t as large as some, but we’re large and growing with a small team.

We may need help w/language & context—how do you tell if a rant to the camera is a Nazi rant if you can’t speak the language?

Bots never sleep, but we do.

Being ad free: we don’t have a path to monetization.  We comply w/DMCA. No ad-sharing agreement we can enter into w/them.  Related: we have a pro userbase.  Almost 50% of users are some form of pro filmmaker, editor, or videographer. They can be very temperamental. Their understanding of © and privacy may require a lot of handholding.  It’s more of a platform to just share work. We do have a very positive community that has always been focused on sharing and critique in a positive environment.  That has limited our commitment to free speech—we remove abusive comments/user-to-user interaction/harassing videos.  We also have an advantage of just dealing w/videos, not all the different types of speech, w/a bit of comments/discussion.  Users spend a lot of time monitoring/flagging and we listen to them.  We weight some of the more successful flaggers so their flags bubble up to review more quickly.
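What weighting the more successful flaggers might look like in code—a sketch with invented field names and smoothing, not Vimeo’s actual system:

    # Hypothetical reputation-weighted flag queue: flags from historically
    # accurate flaggers bubble up to review sooner.
    import heapq

    def flagger_weight(upheld: int, total: int) -> float:
        """Smoothed fraction of a flagger's past flags that were upheld."""
        return (upheld + 1) / (total + 2)  # Laplace smoothing for new flaggers

    queue = []  # min-heap; priorities are negated so heavier flags pop first

    def submit_flag(video_id, upheld, total):
        heapq.heappush(queue, (-flagger_weight(upheld, total), video_id))

    submit_flag("video_a", upheld=1, total=10)    # unproven flagger
    submit_flag("video_b", upheld=95, total=100)  # consistently accurate flagger
    print(heapq.heappop(queue)[1])  # video_b gets reviewed first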

Goldman: what’s not working as well as you’d like?

Foley: how much can we automate w/o risking quality? We don’t have unlimited resources so we need to figure out where we can make compromises, reduce risk in automation.

McGillivray: you’re looking to do more w/less.

Keen: Similar. Need to build things as quickly as possible.

Harvey: Transparency around actions we take, why we take those actions. Twitter has a significant amount of work planned in that space.  Relatedly, continuing to share best practices across industry & make sure that people know who to reach out to if they’re new in this space.

Burton: Keep in mind that we’re engaged in automation arms race w/spambots, fake followers, highly automated adversaries. Have to keep human/automated review balanced to be competitive.

Under the Hood: UGC Moderation (Part 2)
Tal Niv, Github: Policy depends a lot on content hosted, users, etc. Github = world’s largest software development platform. The heart of Github is source control/version system, allowing many users to coordinate on files with tracked changes. Useful for collaboration on many different types of content, though mostly software development.  27 million users worldwide, including individuals & companies, NGOs, gov’t.  85 million repositories. Natural community.

Takedowns must be narrow.  Software involves contribution of many people over time; often a full project will be identified for takedown, but when we look, we see it’s sometimes just a file, a few lines of code, or a comment.  15 people out of 800 work on relevant issues, e.g., support subteam for TOS support, made of software programmers, who receive initial intake of takedowns/complaints.  User-facing policies are all open on the site, CC-licensed, and open to comment.  Legal team is the maintainer & engages w/user contributions.  Users can open forks. Users can also open issues.  Legal team will respond/engage.  List of repositories as to which a takedown has been upheld: Constantly updated in near real time, so no waiting for a yearly transparency report.

Nora Puckett, Google
Legal removals (takedowns) v. content policies (what we don’t want): hate speech, harassment; scaled issues like spam and malware.  User flags are important signals. Where a request is sufficiently specific, we do local removals for violation of local law (general removals for © and child exploitation).  Questions we prompt takedown senders to answer in our form help you understand what our removal policies are.  YT hosts content and has trusted flaggers who can be 90% effective in flagging certain content.  In Q4 2017, removed 8.2 million videos violating community guidelines, found via automation as well as flags and trusted flags.  6.5 million were flagged by automated means; 1.1 million by trusted users; 400,000 by regular users.  We got 20 million flags during the same period  [?? Does she mean DMCA notices, or flags of content that was actually ok?].  We use these for machine learning: we have human reviewers verifying automated flags are accurate and use that to train machine learning algorithms so content can be removed as quickly as possible. 75% of automatically flagged videos are taken down before a single view; can get extremist videos down in 8 hours, half in less than 2 hours. Since 2014, 2.5 million URL requests under RTBF, with over 940,000 URLs removed. In 2018, 10,000 people working on content policies and legal removals.

Best practices: Transparency. We publish a lot of info about help center, TOS, policies w/ exemplars.

Jacob Rogers, Wikimedia Foundation: Free access to knowledge, but while preserving user privacy; self-governing community allowing users to make their own decisions as much as possible. Where there are clear rules requiring removal, we do so. Sometimes take action in particularly problematic situations, e.g. where someone is especially technically adept at disrupting the site/evading user actions. Biannual transparency report. No automated tools but tools to rate content & draw volunteers’ attention to it.  E.g., will rate quality of edits to articles.  70-90% accurate depending on the type of content. User interaction timeline: can identify users’ interactions across Wikipedia and determine if there’s harassment going on.  Relatively informal b/c of relatively small # of requests. Users handle the lion’s share of the work. Foundation gets 300-500 content requests per year.  More restrictive than many other communities—many languages don’t accept fair use images at all, though they could have them.  Some removals trigger the Streisand effect—more attention than if you’d left it alone.

Peter Stern, Facebook: Community standards are at the core of content moderation.  They cover the full range of policies, from bullying to terrorism to authentic ID and many other areas. Stakeholder engagement: reaching out to people w/an interest in policies.  Language is a big issue—looking to fill many slots w/languages.  Full-time and outsourced reviewers.  Automation deals w/spam, flags for human review, and prioritizes certain types of reports/gets them to people w/relevant language/expertise.  Humans play a special role b/c of their ability to understand context.  Training tries to get them to be as rigid as possible and not interpret as they go; try to break things down to a very detailed level tracking the substance of the guidelines, now available on the web.  It only takes one report for a policy violation to be removed; multiple reports don’t increase the likelihood of removal, and after a certain point automation shuts off the review so we don’t have 1000 people reviewing the same piece of content that’s been deemed ok. Millions of reports/week, usually reviewed w/in 24 hours. Issues of safety & terrorism are routed more quickly into the queue.

Most messaging explains the nature of the violation to users.  Appeals process is new—will discuss on Transparency panel. 

Resiliency training is also part of the intake—counseling available to all reviewers; require that for all our vendors who provide reviewers. Do audits for consistency; if reviewers are having difficulty, then we may need to rewrite the policy.

Community integrity creates tools for operations to use, e.g., spotting certain types of images.

Strategic response team. E.g., there’s an active shooter.  Would have to decide whether he’s a terrorist, which would change the way they’d have to treat speech praising him. Would scan for impersonation accounts.

Q: how is content moderation incorporated into product development pipeline?

Niv: input from content moderation team—what tools will they need?

Puckett: either how current policies apply or whether we need to revise/refine existing policies—a crucial part.

Rogers: similar, review w/legal team. Our product development is entirely public; the community is very vocal about content policy and will tell us if they worry about spam/low quality content or other impediments to moderation.

Stern: Similar: we do our best to think through how a product might be abused and to make sure we can enforce existing policies. We create new ones if needed.

CC-licensed or not? you be the judge

A knitting pattern I'm using comes with a CC license and license terms that seem distinctly un-CC.  For contracts folks out there, what license do I have?

It says CC-BY-NC-SA, but then "What does this copyright notice mean?" purports to explain "You do not have permission to make copies for anyone else (including your mother, mother-in-law, children, or friends).... [Y]ou may not publish anything based on these patterns without prior permission. And finally, it means you may not use these patterns to make any items for sale, even if you've made minor modifications of the patterns."  None of these limits are entailed by NC-SA.  (I think even the items for sale part isn't, inasmuch as you wouldn't be charging for the pattern but for the item.)

I take it that if the licensor were sophisticated, it would be uncomplicated to treat the "what does this mean?" notice as irrelevant, because it's not the actual license language, which appears above it.  Does the fact that the licensor clearly doesn't understand the CC license she's using change matters?

Showing good-looking cuts of meat is puffery for pet food

Wysong Corp. v. APN, Inc., 2018 WL 2050449, --- F.3d ---- (6th Cir. May 3, 2018)

Wysong, which sells pet food, sued six competitors for violating the Lanham Act through pictures like this one:

[image: the accused packaging, featuring a photograph of a lamb chop]

“The bag features a photograph of a delicious-looking lamb chop—but Wysong says the kibble inside is actually made from the less-than-appetizing ‘trimmings’ left over after the premium cuts of lamb are sliced away.” The district court dismissed the claims, and the court of appeals affirmed.

Wysong argued literal falsity because the photographs on the packages told consumers the kibble was made from premium cuts of meat, when it was actually made from the trimmings left over after the premium cuts are gone.  But this wasn’t unambiguously false—a reasonable consumer could understand the images as indicating the type of animal from which the food was made (e.g., chicken) but not the precise cut used (e.g., chicken breast).

Without a survey, pleading misleadingness required facts supporting “a plausible inference that the challenged advertisements in fact misled a significant number of reasonable consumers.” The complaint alleged that contemporary pet-food consumers prefer kibble made from fresh ingredients like those they would feed their own families, and that the accused packaging tricked those consumers into thinking their kibble was in fact made from such ingredients. But context matters, and “reasonable consumers know that marketing involves some level of exaggeration.”  A reasonable consumer at a fast-food drive-through doesn’t expect that his hamburger will look just like the one pictured on the menu.  Likewise, without more facts, “it is not plausible that reasonable consumers believe most of the (cheap) dog food they encounter in the pet-food aisle is in fact made of the same sumptuous (and more costly) ingredients they find a few aisles over in the people-food sections.”

Wysong responded that some pet foods, such as Wysong’s, do contain premium-quality ingredients. But Wysong failed to explain “how that fact impacts consumer expectations. Are these premium sellers even known to the Defendants’ intended audience? Do their products compete with the Defendants’, or do they cater to a niche market? Are there obvious ways consumers can distinguish between the Defendants’ products and the fancier brands?” The ingredient lists’ effect on consumers also needed to be explained: many of the packages listed animal “meal” or “by-product” as an ingredient. “And that information certainly suggests that the kibble is not made entirely from chicken breasts and lamb chops.”  “Ultimately, the relevant market and the products’ labeling are crucial in evaluating plausibility, but Wysong said next to nothing about them. And that is fatal here, since the puffery defense is such an obvious impediment to Wysong’s success.”

Thursday, May 03, 2018

ABC doesn't find getting rid of pro se (c) and TM claim so simple

Manigault v. ABC Inc., 2018 WL 2022823, No. 17-CV-7375 (S.D.N.Y. Apr. 12, 2018) (magistrate judge)

An app owner’s copyright and trademark claims against a news organization for broadcasting a news story about apps, including his, survive a motion to dismiss on the merits (though the owner has to replead copyright ownership).  Partly this terrible result is from bad Second Circuit precedent, but partly it stems from a refusal to consider the content of the accused TV segment on a motion to dismiss, although doing so is clearly acceptable because the content is integral to the complaint.  Despite the bad Second Circuit precedent, this is an easy case for dismissal.  Should I have an "argh" tag?

Manigault alleged that he “owns KeyiCam unregistered Trademark, which KeyiCam is Software that takes a picture of a Key and provides the biting code to end user.” ABC allegedly infringed his copyright and trademark rights “by showing a picture of KeyiCam website” “in connection with their News ‘Locked out? Smartphone app might be key to solving problems’ and ‘KeyMe addresses security concerns of key duplication.’” ABC allegedly “advertised on their [sic] website that... KeyMe isn’t the only game in town, though; there’s also Keys Duplicated and KeyiCam.” Manigault alleged that ABC’s use of marks similar to KeyiCam “is likely to cause consumers mistakenly to believe that the [goods identical or similar to KeyiCam] emanate from or are otherwise associated with KeyiCam,” causing KeyiCam to lose business.

ABC explained that Manigault’s claim “arises from a consumer news segment that was broadcast on [its 6abc’s] program” reported by journalist Nydia Ha, which “lasted roughly three and a half minutes.” According to ABC, the segment reported by its journalist “reviewed the phenomena of smartphone applications ... that allow consumers to create and store virtual copies of their keys - digital copies which can then be used to order physical duplicates of the keys if they are lost.” “The report focused on one service called KeyMe,” explained how it works and “mentioned the names of the other two services, a screenshot from each business’s web site was displayed on the screen for about a second each.” ABC contends that it “also posted an article on [its 6abc] website summarizing the news report,” which included “hyperlinks to the websites for each of the companies mentioned in the report.”

Since Manigault was pro se, his allegations had to be construed liberally.

ABC argued that the complaint “is premised on the assertion that the ABC news report is a commercial advertisement for key duplication services. It is not, and so the Complaint must be dismissed as a matter of law.” The magistrate deemed this argument “meritless,” because Manigault’s allegations had to be taken as true on a motion to dismiss, and he alleged that “KeyiCam appeared in ABC Inc advertisements” and “ABC Inc advertised on their [sic] website that ... KeyMe isn’t the only game in town, though; there’s also Keys Duplicated and KeyiCam”; and “KeyiCam is mixed in the broadcasted commercial in ABC News with other similar Startups such as Keyme and Keys Duplicated that offer similar goods and services.”  This was enough.

Likewise, the magistrate rejected ABC’s argument that confusion was implausible and that reference to KeyiCam was nominative fair use.  Without discussing Twiqbal, the magistrate ruled that a challenge to plausibility went to the merits of the trademark infringement claims, as did nominative fair use (citing IISSCC) and First Amendment defenses, thus requiring a summary judgment motion or trial.

So many problems.  First: “confusion over what?”  Construed as liberally as possible, the complaint still has no allegation that there’s confusion over ABC’s connection with Manigault.  Mis-reporting, even assuming that’s what happened, isn’t generally directly infringing, and at most secondary liability would seem to be an issue.  Cf. the Hangover case, in which the court points out that confusion over whether a piece of luggage shown in a movie was actually from LV is not the kind of confusion against which the Lanham Act is directed. Louis Vuitton Malletier, S.A. v. Warner Bros. Entertainment Inc., 868 F. Supp. 2d 172 (S.D.N.Y. 2012) (granting motion to dismiss).

The Second Circuit’s bizarre treatment of nominative fair use has made this harder, but invoking a trademark doesn’t inherently remove 12(b)(6) as an option.  A motion to dismiss should still be viable “where simply looking at the work itself, and the context in which it appears, demonstrates how implausible it is that a viewer will be confused into believing that the plaintiff endorsed the defendant’s work.” Elec. Arts, Inc. v. Textron Inc., No. C 12-00118 WHA, 2012 WL 3042668, at *5 (N.D. Cal. July 25, 2012); see also Hensley Mfg. v. ProPride, Inc., 579 F.3d 603, 610 (6th Cir. 2009) (affirming grant of motion to dismiss where the defendant did not use the trademark to identify the source of its products or to suggest an association between the defendant and the plaintiff); Cummings v. Soul Train Holdings LLC, 67 F. Supp. 3d 599 (S.D.N.Y. 2014) (granting motion to dismiss where confusion based on inclusion in artistic work was implausible); cf. Kelly-Brown v. Winfrey, 717 F.3d 295 (2d Cir. 2013) (recognizing that even descriptive fair use, an affirmative defense, can be resolved on a motion to dismiss where the necessary facts are evident from the complaint).  This practice is particularly important given that nominative fair use protects important First Amendment interests, and that early resolution of assaults on news reporting prevents chilling core First Amendment-protected speech.  [Also why we need a federal anti-SLAPP law.]

Similarly, on copyright, the magistrate refused to consider ABC’s fair use argument under 12(b)(6), considering it an issue only for summary judgment motion or trial (not citing any cases).  Here’s one saying that 12(b)(6) is a fine time to resolve clear-cut cases of fair use: TCA Television Corp. v. McCollum, 839 F.3d 168, 178 (2d Cir. 2016), cert. denied, 137 S. Ct. 2175 (2017).

However, Manigault failed to plead ownership of a valid/registered copyright in any specific work, so the claim was dismissed with leave to amend. 

The magistrate also held that there were no allegations that ABC engaged in deceptive acts or practices, so claims for such under New York law had to be dismissed—inconsistent with the TM claims, but it’s hard to be surprised.

Monday, April 30, 2018

Alzheimer's Association and Alzheimer's Foundation in keyword battle

Among other things, this case has some interesting things to say about IIC and proper controls in survey cases.

Alzheimer’s Disease & Related Disorders Association, Inc. v. Alzheimer’s Foundation of America, Inc., 2018 WL 1918618, No. 10-CV-3314 (S.D.N.Y. Apr. 20, 2018)

The Association (counterclaim plaintiff) sued the Foundation (counterclaim defendant), alleging that the Foundation’s purchase of Association trademarks as search engine keywords and use of the two-word name “Alzheimer’s Foundation” constituted trademark infringement and false designation of origin under the Lanham Act. The court found that confusion was unlikely.

The parties first litigated confusion in 2007, starting with fighting over checks made out to one entity but sent to the other.   The Association was formed in 1980 and is the world’s largest private non-profit funder of Alzheimer’s research. It has more than 80 local chapters across the nation providing services within each community. In fiscal year 2016, the Association raised more than $160 million in contributions and spent $133.6 million on program activities, including more than $44 million on public awareness and education. The Association had nearly 9 billion “media impressions” and more than 41 million website visits in that FY.  

In 2015, when survey respondents in the general population were prompted for the first two health charity organizations that come to mind, “respondents were vastly more likely to name the American Cancer Society (46%), St. Jude Children’s Research Hospital (28%), American Heart Association (19%), and Susan G. Komen for the Cure (10%), than they were to name the Alzheimer’s Association (3%).” When they were asked which organizations “involved in the fight against Alzheimer’s disease” they have heard of, the Association registered 8-12% awareness from 2011-2015. Among the demographic groups that the Association targets with its advertising messages, unaided awareness of the Association among organizations “involved in the fight against Alzheimer’s disease” hovered between 10% and 20% from 2009-2015. Aided awareness of “Alzheimer’s Association” was roughly 25-32% among the general population, and at 35-47% among targeted subgroups between 2011- 2015.  For the Foundation, those numbers were 15-19% and 25-32% respectively.

The Association has a standard character mark registration for ALZHEIMER’S ASSOCIATION, registered since June 8, 2004, but in use since 1988. The Association also has other registrations using “Alzheimer’s Association” along with other words or with graphical elements, as well as standard character marks for WALK TO END ALZHEIMER’S and MEMORY WALK. In 2016, nearly 500,000 participants took part in Association walks in 630 communities, raising more than $78.6 million. The Association website displays “alz.org” and “Alzheimer’s Association” at the top of its landing page. Its principal color is purple.

The Alzheimer’s Foundation of America was founded in 2002 and has more than 2,600 member organizations throughout the country that collaborate on education, resources, best practices and advocacy. AFA has awarded millions of dollars in grant funding to its member organizations for services such as respite care. In 2010, AFA’s “revenues, gains and other support” from “contributions and special events including telethons” was approximately $6.6 million. Its website is at www.alzfdn.org, and principally uses the colors teal and white. Its first registered mark, from 2006, is for “AFA Alzheimer’s Foundation of America” with the organization’s “heart in hands” logo. The Foundation also has a registration for “Alzheimer’s Foundation” plus the heart in hands logo.

The Foundation disclaimed any exclusive right to the literal elements of these marks.  In 2014, the Foundation filed for a standard character mark for “Alzheimer’s Foundation,” now registered on the Supplemental Register with a claimed first use date of 2004.  Before 2009, its use of the two-word name (without “of America”) was mostly limited to press releases and sponsored ads online, as well as on the Foundation’s Twitter account.  Since 2004, the Foundation has described itself in the header of online ads using the two-word name “Alzheimer’s Foundation,” resulting in over 30 million impressions between June 7, 2004 and the end of 2009. The Foundation also bought Association marks as keywords. From April 2012 to June 2014, the Foundation ran sponsored ads that used the word “Association” in the text, ending when the Association objected.

The Association also used keyword advertising, including buying “Alzheimer’s Foundation” as a keyword until 2010.

A Google search of “alzheimer’s association” from June 2014 showed the Foundation’s ad as the top result, with the main header “Alzheimer’s Foundation - alzfdn.org” and the tagline “An Association of Care and Support. Reach Out to Us for Help....” The second and only other ad was from the Association, with the header “alz.org - Alzheimer’s Association” and the tagline “Honor a Loved One with a Tribute Donation - Support Research & Care.”

[screenshot: the two sponsored ads described above]

The keyword “Alzheimer’s Association” sometimes generated more clicks on the Foundation’s sponsored ad than the Foundation’s own brand keywords did. Indeed, campaigns targeting the Foundation’s competitors performed the best and comprised roughly 40% of AFA’s keyword marketing budget.  During some of the relevant period, the Foundation may have been using Association-related metatags, though the court found this immaterial “as [site metatags] have likely not been used by search engines since before 2009, and so have little effect on the ordinary prudent consumer.”
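
(For reference, a keywords metatag is just a line in a page's HTML source along the lines of <meta name="keywords" content="alzheimer's association">, where the content shown is my hypothetical rather than AFA's actual markup; as the court noted, search engines have largely ignored the tag for years.)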

On the Foundation’s donation page, there are many references to “AFA” or “Alzheimer’s Foundation of America,” and no references to the Association or use of any of the Association’s marks. “At no point while on the AFA website during the donation process would a consumer see any of the Association Marks.” So too in reverse for the Association.

The Foundation was the first to complain of confusion, in 2004, when its then CEO wrote “a routine web search under ‘alzheimers foundation’ led me to [the Association’s] site. ‘Alzheimer’s Foundation of America’ is a registered service mark of our organization. It distresses me that supporters of our respective organizations may be confused when searching the internet.” This might not have related to sponsored ads, though. In 2014, a Foundation employee wrote that a survey showed many people saying they donated before or were “introduced to AFA through a fund raising event,” and speculated that “several respondents may have us confused with the Association or with ‘the cause.’ ”

Between 2007 and 2012, the Association received more than 5,700 checks made payable to “Alzheimer’s Foundation” or a variant, totaling over $1.5 million. The Foundation received more than 5,000 checks between 2006 and June 2016 made payable to “Alzheimer’s Association” or near variants. A large percentage of the Foundation’s online donors are first-time donors, and the average online donation, like the average check donation, is under $100.  The Foundation argued that the number of Foundation-labeled checks received by the Association was only 0.1% of the total number of checks received, and 0.252% of the total value of checks received.

There was evidence that a check for the Association was received alongside a printout of the Foundation’s internet donation form. When asked why she had submitted the Foundation form, the donor claimed that she had typed the Alzheimer’s Association name into her web browser and clicked on the first entry that came up to download the form for donations and that she was unaware that she was on the Foundation website.  No one testified about other donors or potential donors contacting one organization looking to donate to the other, apart from people asking about the difference between the Association and the Foundation.

Other purported instances of confusion included that NBC’s Today show ran a quotation the Association had provided alongside the Foundation’s logo, but that was the logo with the words “of America,” which wasn’t at issue in this litigation. On Celebrity Family Feud, the host once suggested the show was raising money for “Alzheimer’s Foundation,” when it was actually raising money for the Association. These instances didn’t have a clear link to the Foundation’s allegedly infringing actions.

There were also two studies from the Association and an expert critique by the Foundation.  Study 1 found 34% net confusion between the standard character marks “Alzheimer’s Association” and “Alzheimer’s Foundation.”  The court found that this was somewhat artificial but still probative of actual confusion. However, the court found that the control—“Alzheimer’s Trust”—artificially inflated the net confusion numbers.  The Association’s expert’s staff “pre-tested” two other controls, “Alzheimer’s Charity” and “National Alzheimer’s Foundation,” which, by generating more confusion themselves, would have yielded net confusion rates for “Alzheimer’s Foundation” of 12% and 11% respectively.

The expert said “Alzheimer’s Charity” wasn’t good because it “sounded like a product category rather than a single real entity,” which sounds fair to me, but the court didn’t like the expert’s explanation overall.  He testified that his staff never disclosed the results of these pre-test surveys to him, even though the results were disclosed to the Association’s attorneys. “While it is legitimate to run a pre-test or pilot study for the purposes of improving a study, no credible explanation was offered for the changes made between the pre-test survey and the reported survey, and this suggests a potentially improper purpose.” 

Setting that aside, “Alzheimer’s Trust” was a weak control term. Though testing a two-word mark made sense, “Trust” as a descriptor for a charity was both more unique than “Foundation” and more easily distinguished from “Association,” in part because of its multiple meanings. “Alzheimer’s Federation” could have more clearly controlled for confusion arising from causes other than the Foundation-specific conduct; so could “Alzheimer’s Foundation of America,” given that the Association was arguing that it was the two-word mark that was confusing.  “[W]hat better way to test that proposition than to compare its use with that of AFA’s full, and undisputedly non-infringing name?”
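
[The net-confusion arithmetic, assuming the standard test-minus-control design the opinion describes: net = (confusion in test cell) − (confusion in control cell). Whatever the raw test-cell rate T was, T − C("Trust") = 34% while T − C("Charity") = 12% and T − C("National Alzheimer's Foundation") = 11%, so the rejected controls themselves generated 22-23 percentage points more confusion than "Trust" did. The choice of control was doing nearly all the work.]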

Another study allegedly showed IIC.  Respondents were asked to type “Alzheimer’s Association” into a search box and then were either shown results including the Foundation’s disputed ads and other keyword ads (test condition), or organic search results without ads (control condition). Respondents were asked to click on the link(s) they thought would take them to the website of the organization for which they searched. If they clicked on the Foundation’s ad, they saw the Foundation’s web page and were asked if they thought it was the web page of the organization for which they had searched. If respondents didn’t select the “Alzheimer’s Foundation” or control links, they were asked to click on the link or links, if any, that they believed would take them to the website of an organization affiliated with the organization they searched for.

There was a net rate of 20% sponsorship confusion, and, of the 42% who were confused in the test condition, 74% remained confused as to source or affiliation after viewing a screenshot of the Foundation’s web page (31% of the total respondents in the test condition).
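
[Checking the arithmetic: 74% of the 42% confused in the test cell is 0.74 × 0.42 ≈ 31% of all test-cell respondents, matching the reported figure; and if the 20% net rate is test minus control (the standard design), the control cell must have produced roughly 22% confusion on its own.]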

The court found it significant that the study design assumed that participants asked to type in “Alzheimer’s Association” knew that it is a particular organization. “Consumers cannot mistakenly associate the AFA ad or website with the Association if they were not aware of the Association’s existence to begin with.”  The test stimulus was also biased—there was no sponsored ad for the Association, whereas the Association also bid on its own marks as keywords, and often would have appeared in the sponsored ads section as well.  The order of the ads would have changed based on the organizations’ respective bids, and thus order bias was also implicated—a bias that couldn’t be accounted for by the chosen control.  [citing Winnie Hung, Limiting Initial Interest Confusion Claims in Keyword Advertising, 27 Berkeley Tech. L.J. 647, 666 (2012) (stating that statistics show ad rank has a significant effect on consumers’ confusion).] 

The court, however, concluded that though the bias suggested a lower actual confusion percentage, the study did use a real web page with real ads that did actually result from typing in “Alzheimer’s Association” at one point in time. “[I]f the Court were to find a likelihood of confusion based on the one test stimulus used, it would constitute a Lanham Act violation just the same.”  [Is that really likely confusion if there weren’t evidence about how often it would have happened?]  

The study inflated net confusion because the first search result in the control was the Association’s website, while the first search result in the test condition was the allegedly infringing Foundation ad.  “A better control stimulus would have contained non-infringing sponsored ads at the top that, if clicked, counted towards the confusion rate for the control.” What was at issue was the specific ads, not sponsored ads in general, and thus “only the offending item should have been removed or replaced.”

Overall, the evidence “strongly militates against placing much weight on the study evidence.”

The Association’s evidence of intentional attempts to confuse also faltered.  Of possible interest:  “AFA’s patent counsel filed two false specimens of use”—in its filing for a standard character mark, counsel included as a specimen of use a page from its magazine in which the words “of America” had been obscured. After the Association raised the issue, counsel filed press releases as substitute specimens of use, which were accepted by the USPTO.  The Foundation also filed a section 8 declaration with respect to the composite mark and included one specimen of use (among others) in which “of America” was removed and that had not actually been used in commerce. The Foundation again filed an amendment after the Association raised the issue, and the USPTO accepted the amendment.  There was no evidence that the Foundation was aware of either of the false specimens of use.  [Still, looks bad for the lawyers. To the extent that this increases the expense/burden of litigation, it may raise questions that should be taken up with an insurer.]

Because the Association didn’t challenge “Alzheimer’s Foundation of America” in any context, the key question was the [marginal] likelihood of confusion arising from the use of the two-word name and the Foundation’s use of Association marks in search keyword ads and metatags.

[One way to read this case might be as a cautionary tale for the assumption that structures much discussion of the TM/unfair competition interface—that there is something that can really be done to limit confusion without disallowing use of generic terms, or functional features, or whatever else isn’t protectable by TM but may still be contributing to consumer confusion.  Then-Judge Ginsburg might have wanted there to be a good solution for the two Blinded Veterans associations, but we have no empirical evidence that any intervention would work.]

[Also, another interesting question here is the appropriate comparator keyword buy.  Suppose that the Foundation had merely bought “Alzheimer’s.”  Its ads would then still have displayed in response to a search for “Alzheimer’s Association,” though possibly at a different price point depending on the search engines’ practices. The consumer doesn’t know why the ad displays.  If that’s the appropriate comparator, then the net confusion from buying the Association’s trademark would be zero.  Or should we also allow the Association’s trademark rights to force the Foundation to use negative keywords, as 1-800-Contacts made its competitors do (in violation of antitrust law, as the ALJ found)?]
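
[A minimal sketch of the matching logic behind this hypothetical (invented function; real ad auctions are far more complicated): under broad match, an ad keyed to the generic "alzheimer's" serves on the query "alzheimer's association" unless the advertiser adds the trademark as a negative keyword.]

    def ad_serves(query, keywords, negative_keywords=()):
        """Hypothetical broad-match rule: serve if every term of some purchased
        keyword appears in the query, unless a negative keyword also matches."""
        q = query.lower().split()
        matched = any(all(t in q for t in kw.lower().split()) for kw in keywords)
        blocked = any(all(t in q for t in kw.lower().split()) for kw in negative_keywords)
        return matched and not blocked

    # Buying only the generic term still triggers on the trademark query:
    print(ad_serves("alzheimer's association", ["alzheimer's"]))                              # True
    # Adding the trademark as a negative keyword suppresses the ad:
    print(ad_serves("alzheimer's association", ["alzheimer's"], ["alzheimer's association"])) # False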

Anyhow, the court noted that it wasn’t evaluating the Foundation’s keyword buys in a vacuum, but rather the effect of the keyword purchases in conjunction with the Foundation’s resulting ads.  Further, the key type of confusion was IIC.  Was the Foundation engaging in a bait-and-switch likely to confuse consumers, or offering consumers a choice?

Though the Association’s mark was incontestable, it was still commercially weak, which was crucial to the court’s holding here.  As the Ninth Circuit has said, “a consumer searching for a generic term is more likely to be searching for a product category. That consumer is more likely to expect to encounter links and advertisements from a variety of sources.”  Both components of the Association’s mark were descriptive of the relevant charitable endeavor. Although the Association is the world’s largest Alzheimer’s-related non-profit and the world’s largest non-governmental funder of Alzheimer’s research, the consumer studies in evidence showed that the mark’s secondary meaning was not strong.  Unaided awareness of 10-20% among potential donors wasn’t much.

“Here, because of the weakness of the mark, it is not easy to disaggregate the consumers searching for the Association as a specific organization from those searching generically for an Alzheimer’s charity when they type ‘Alzheimer’s Association’ or some similar derivation into a web browser.” Relatedly, it was difficult to distinguish consumers confused by the Foundation’s actions from those “confused simply by the similarity of the descriptive marks.”

Mark similarity: the court reasoned that the proper comparison was not the mark to the keyword buy itself, but the mark to the resulting ads, because that’s how consumer confusion would manifest itself.  The Association argued that, in organic search results, AFA’s website appears as “Alzheimer’s Foundation of America,” and that the use of the two-word name in the ad header and sometimes “association” in the ad text combined with the keyword buys to increase the similarity between the marks/ads. 

The Foundation argued that, in context, the similarity was “mitigated by the unique information and URL of its ads,” but the URLs www.alz.org and www.alzfdn.org and ad text, e.g., “Honor a Loved One with a Tribute Donation. Support Research and Care.” and “An Organization Providing Support for Your Loved One with Alzheimer’s” weren’t particularly dissimilar. And differences in the parties’ websites came too late.  The ads were similar in appearance and meaning, favoring the Association.

The Foundation tried to argue that it was primarily focused on providing care services while the Association focuses on funding research, but that didn’t distinguish them in the market for donors.

Actual confusion was a key factor, but the evidence “must be related to the actions or behaviors at issue.”  The evidence of 11,000 mislabeled or misdirected checks was probative of confusion “generally,” but not of confusion generated by the Foundation’s challenged actions.  Nor were the limited instances of confusion in the media and anecdotal reports of consumer confusion persuasively related to the Foundation’s actions. “In light of the descriptive nature of the marks, confusion between the unchallenged ‘Alzheimer’s Foundation of America’ mark and ‘Alzheimer’s Association’ supports the notion that it is the weakness of the marks and consumers’ inattention, not AFA’s specific disputed practices, that yields confusion.”

Finally, the surveys’ flaws meant they merited little weight.  The point of the surveys ought to have been to pin down causation by focusing on the challenged behaviors, but that was exactly what they did not do.

Intent: The Association argued that the Foundation complained of the confusion created by the similarity in the organizations’ names back in 2004, but, despite its own complaints and lawsuits, then proceeded to employ Association marks in its metatags and as keywords, as well as attempting to register a two-word mark without really using it.  The Foundation argued that it didn’t intentionally use false specimens and that its internet ads had been similar since 2004.  The court didn’t find the Foundation’s trademark counsel “particularly credible in his explanation of the false filings, but circumstantial evidence about AFA’s trademark applications does not itself suggest an intention to capitalize on the Association’s goodwill or to exploit confusion.”  And the Foundation noted that the Association also bought Foundation keywords before 2010, and coexisted with the Foundation and its practices for six years before suing.  When the Association complained about the Foundation’s use of the word “association” in its ad text in 2014, the Foundation removed the offending ads.

The court found that intent favored the Foundation.  “In many respects, the lack of direct evidence of actual confusion undermines the Association’s attempts to argue that AFA acted with the intention of exploiting consumer confusion.” The evidence didn’t support the claim that the Foundation believed the specific practices at issue here were causing consumer confusion. Because this factor failed, the Association also failed to show the intentional deception vital to IIC.

Consumer sophistication: the relevant population was average consumers searching for the Association online and seeing the Foundation’s ad; they are less likely to be institutional donors, and many are first-time donors. “[I]n this day and age, the ability to complete a form on a website does not itself make consumers particularly sophisticated.” This factor favored the Association.

Weighing: the most important factors here were strength of the mark, similarity of the marks, and evidence of actual confusion, two out of three of which strongly favored the Foundation.  A final consideration didn’t neatly fit within the Polaroid factors: “the labeling and segregation of online advertising.” The “ad” label “heightens consumers’ care and attention in clicking on the links, and further diminishes the likelihood of initial interest confusion.”  [Or more likely, contributes to ad blindness if it’s noticed at all.]

Highlighting my ongoing conviction that elaborating causation stories makes actionable confusion less likely to be found, the court elaborated that it was trying to figure out how many consumers fell into the relevant subpopulations: (1) Those who weren’t confused, either because they were using “Alzheimer’s Association” in a generic sense in their search or because, on seeing the search results, they understood the difference between the two organizations. (2) Those who clicked on the Foundation’s ad by mistake and who were diverted because of the Foundation’s keyword buys and its use of the two-word name “Alzheimer’s Foundation.”  A subset of (2) could remain confused even after viewing the Foundation’s website.  (3) Those who were mistakenly diverted for other reasons, such as because of the inherently weak and descriptive nature of the parties’ marks. “These consumers are ‘confused’ in the colloquial sense, but would have been confused even if they searched the word ‘Alzheimer’s’ alone or even if AFA solely utilized its full name.”  In light of the evidence, it was difficult to identify the proportion of consumers in each group, and thus the Association didn’t show a probability, as opposed to a possibility, of confusion.

Finally, and perhaps exhausted by the effort so far, the court rejected the Association’s attempt to cancel the Foundation’s marks.  As to one mark, the court accepted the dubious rationale that filing a Section 8 declaration in 2017 showed lack of abandonment.  Even though the court didn’t see where the challenged mark, as opposed to the other mark, was included as a specimen of use in that filing, “the filing of the application alone demonstrates AFA’s intent to use the ‘014 Mark. Given the high burden placed on the Association to establish abandonment, and the limited evidence adduced, the Court cannot find abandonment.”  I can’t see how that can be right—if the challenged mark isn’t in the filing, given multiple years of alleged nonuse, that can’t be the end of the analysis, unless there’s some sort of implicit tacking analysis going on.