Thursday, March 31, 2011

American University Washington College of Law IP/Gender tomorrow

Reminder: IP/Gender is tomorrow!  Live webcast here, and then eventually it will be archived, I believe at the same link.  This year's topic is gender and traditional cultural expressions.

Wednesday, March 30, 2011

Threat letters aren't "advertising or promotion"

Luxpro Corp. v. Apple Inc., 2011 WL 1086027 (N.D. Cal.)

Luxpro and Apple compete to sell MP3 players. In previous scuffles, Luxpro was forced by German litigation to abandon the name “Super Shuffle” and use “Super Tangent” instead for one of its products, and was also subject to partially successful trade dress litigation in Taiwan. Allegedly, during and after the Taiwanese litigation, Apple pressured Luxpro’s distributors and retailers to stop doing business with Luxpro, threatening them with the same type of litigation Apple had filed against Luxpro and with withholding Apple products. Luxpro alleged various misrepresentations and disparaging statements about its products, including that Luxpro was selling cheap knock-offs and cheap and illegal copies of Apple's iPod products.

The court first held that Noerr-Pennington was not a complete shield for Apple. Petitioning the government for redress is generally immune from antitrust, statutory, or tort liability unless the litigation is a mere sham; conduct incidental to prosecuting a suit is also protected. Apple’s pursuit of injunctions in Germany and Taiwan was not a sham, and the doctrine applies to actions in foreign countries.

However, Apple’s alleged threats to Luxpro’s commercial partners were not protected. Apple argued that the letters were warnings of potential lawsuits if the companies continued to sell Luxpro products and thus incidental to the prosecution of a lawsuit. In the Ninth Circuit, Noerr-Pennington immunizes parties who send pre-litigation demand letters warning a potential adversary about a possible lawsuit, because that’s conduct incidental to the prosecution of the suit. This provides “breathing space” for the right to petition the government. However, Luxpro made some allegations that Apple generally threatened Luxpro's commercial partners to stop doing business with Luxpro without suggesting the possibility of a lawsuit or mentioning the enforcement of any intellectual property rights. The court couldn’t conclude at this stage that all of Apple's threats were incidental to the prosecution of a lawsuit.

Some of Luxpro’s claims for intentional interference with prospective economic advantage and one claim for intentional interference with contractual relations were sufficiently well-pled to survive a motion to dismiss.

The court found that Luxpro failed to state a Lanham Act false advertising claim. Apple’s statements weren’t made in “commercial advertising or promotion.” Apple’s demand letters weren’t designed to influence Luxpro’s customers to buy Apple’s products. Deterring companies from doing business with Luxpro isn’t necessarily the same as persuading them to do business with Apple. Luxpro didn’t allege any facts to show that Apple threatened Luxpro’s partners to influence them to buy Apple products, though the court granted leave to amend.

Luxpro also stated a claim for defamation, but not trade libel. Defamation is about the character of a person or business; trade libel is about the goods. Luxpro alleged that Apple told Luxpro’s partners that its products were "cheap 'knock-offs,' " "cheap copies," and "illegal copies" of Apple's iPod products. These were assertions of objective fact sufficient to ground a defamation claim. Trade libel, however, requires a plaintiff to plead special damages. At a minimum, the plaintiff must plead that it had an established business, the amount of sales for a substantial period preceding the libel, the amount of sales subsequent to the libel, and facts showing that such loss in sales was the natural and probable result of the libel. Pleading general economic loss, including loss of goodwill and lost profits, was insufficient.

Luxpro’s California Section 17200 claim survived under the “unlawful” prong, since unfair competition under the statute covers anything that can be called a business practice that is forbidden by law, thus borrowing other violations of the law—including tort law. Given the properly pled tort claims above, the Section 17200 claim also proceeded.

Monday, March 28, 2011

Mad Men part 4

Panel Four: Psychology of Online Advertising
Moderator: Christopher Wong, Yale ISP

Jeff Chester, Center for Digital Democracy
All new technologies get hailed as bringing democratization, but there are always multiple impacts. Online advertising system has been designed to discriminate: to make decisions about you (and your friends): what you spend, where you live, what’s your race, what’s your income, what kind of credit card rate you should get. Done in a completely nontransparent manner.

Interlocking components: ubiquitous data collection; ability to reflect back information that addresses that individual; purposeful use of subconscious neuromarketing techniques to influence conscious and unconscious mind. Realtime analysis: on websites, mobile devices, during gameplay—they know enough about you that they put you up for realtime auction. Google, Yahoo!, etc. do realtime auctions, now exported to Europe and China. The platforms have been designed with data collection at their core, driven by advertisers. Advertising industry also invested significant resources in neuroscience. They feared that people would not spend attention on ads with the rise of the internet. Wanted to make sure ads stayed powerful.

Health marketers: consumers and health professionals are both targeted to ensure that people get certain prescriptions. Leads for many subprime loans were sourced through online marketing. Inside these ads are cultural cues as well—targeted based on ethnicity. We need to pay attention to these changes as well as to the benefits.

Tom Collinger, Medill Northwestern University

How advertisers find or create the individuals they want to target: the behaviors of people, not just the behaviors of companies. People are funny.

Behavior trumps intention. Databases allowed for prediction and more relevance on a one-to-one basis. Digital grows those powers: addressable everything, including addressable TV (targeted to specific households), estimated at $11.6 billion by 2015. Advice from strangers is twice as trustworthy as advice from a journalist. Another commentary on the future of journalism, but the point is that consumers as a collective have tremendous influence and power, forcing companies to stop doing some things they did. If the consumer doesn’t think you’re trustworthy, you’re in deep trouble.

Targeting is one of the three things that results in a message’s staying power: who you’re talking to; but what the message is will overwhelmingly drive whether the message gets attention. Bad ads with great targeting still suck. Perfect messages with mediocre targeting, by contrast, will be shared by everyone—friends will happily pass them on to other friends.

David Ogilvy: “the customer is my wife.” (RT: Wow, that’s a loaded statement.) Don’t do anything that would upset your wife. “Just because I posted on the ‘being fat’ FB group wall doesn’t mean I need diet pill ads.” Consumers demand respect, and will deliver a ‘no’ on messages as well as products/services/how the advertiser does business. There is always a context of consent: under what circumstances will people share information, sometimes incredibly intimate?

Most grocery stores capture all kinds of info about you and don’t do anything with it. Whole Foods, which captures almost nothing, has a very successful business model. Data collection is not the foundation for all effective marketing. Financial services have perfect data and know everything, but do you feel like your bank understands you and markets to you as you wish you were marketed to? (Comment: perfect data except for where the heck the mortgage notes underlying securitized obligations are, that is.)

People are irrational. They are unable to demonstrate or articulate through behavior alone their unmet needs. Look at the last thing in the grocery basket: it was probably an unplanned and unexplained purchase. But people also reward greatness. The aim of marketing is to know and understand the customer so well the product or service fits him and sells itself. You can do that with or without individualized data: McDonald’s does quite well without too much individualized data.

Companies need to use behavioral data while understanding there are other contextual issues. Consumers will say no if the data are misused. Enable the entire enterprise so that the consumer gets a unified experience.

Aleecia McDonald, Carnegie Mellon University

Users’ views of online ads, behavioral advertising, opt-out cookies, and do not track—more detail is available in her longer piece.

Lab study with 14 participants for an “advertising” study. Asked them to define online advertising; first answer was pop-ups. Then banner ads and spam. Only one person attempted to describe behavioral ads, because that’s not how people think about online ads—described them as a way to “exploit a person’s history.”

Once trust is damaged, it takes a long time to return—pop-ups aren’t a big deal now. Users may think of online advertising circa 1999. Users are just behind current practices.

Mental models of online advertising. They make analogies to the offline world, but they don’t understand the offline world either. One woman explained that online shopping is like offline shopping: you may be shopping in a public place but there’s a privacy issue with companies knowing where you spend money and time [that is, no one would follow you around]—so shopping online is also private. Another person said that online was like talking on the phone, and that recording conversations can be illegal, and companies will also follow cultural norms and expectations. People don’t understand that even if they’re on the do not call list they can be called for certain reasons (political, existing relationship).

People expect laws, lawsuits, decency, and publicity risk will protect their privacy. They think no company would want to be known as doing stuff that’s actually quite common.

Larger online study. 69% agreed or strongly agreed that privacy is a right and it is wrong to be asked to pay to keep companies from invading my privacy; 3% strongly disagreed or disagreed. 61% said that asking me to pay for companies not to collect data was extortion. 59% said it’s not worth paying extra to avoid targeted ads (5% disagreed/strongly disagreed), but that’s because 55% said advertisers will collect data whether I pay or not, so there’s no point in paying (4% disagreed/strongly disagreed). So when we look at low click-through on privacy, that’s because (1) they think they’re already protected and (2) they don’t trust advertisers—they think they’ll be taken to see another ad and that the opt-out won’t work. Only 11% agree/strongly agree that they hate ads and would pay to avoid them, 36% disagree/strongly disagree. People understand that ads support free content, but they don’t understand what happens to their data.

People argued with her when she described current practice—“this doesn’t actually happen!” Scenario:
Imagine you visit the New York Times website. One of the ads is for Continental airlines. That ad does not come to you directly from the airline. Instead, there is an ad company that determines what ad to show to you, personally, based on the history of prior websites you have visited. Your friends might see different ads if they visited the New York Times.
86% say this happens now. 11% say it doesn’t but could. 1% said never because of law; 1% because of consumer backlash.

Next, described Gmail:
Imagine you are online and your email provider displays ads to you. The ads are based on what you write in email you send, as well as email you receive.
39% say this happens now. 16% say never because it’s illegal. 13% say it could never happen because of consumer backlash. 4% yelled at her for asking the question—horrible even to contemplate. 28% say it doesn’t happen now but could. About 43% of respondents were Gmail users. 50% of them think it happens now—meaning half of Gmail users don’t realize it’s happening to them. Ad blindness is real. People don’t understand “ads are targeted” in the way that advertisers use that term.

Proposition: no one should use data from email because it’s private like postal mail. 62% agreed. Same number: it’s creepy to have ads based on my emails. It’s creepy to have ads based on sites I’ve visited: 46%. No one should use data from internet history: under 1/3, same between web and email. Glad to have relevant ads about things I’m interested in instead of random: 18% with behavioral ads, but only 4% for email. There are people who really want the benefits of targeted ads, though they don’t understand the data flows. Advertisers aren’t making this up. Slightly larger percentage, 20%, is completely against it for privacy reasons. Folks in the middle: why would we want better ads? Ads are things we ignore. Why give out data? Might be willing to make tradeoff for a benefit, but until they see it, they’re not interested in giving up their data. Last proposition: ok to have email ads based on content as long as the service is free: only 9% agreement. No difference between Gmail and non-Gmail users—they don’t have lower preferences for privacy and they aren’t better informed.

What can users do? NAI opt-out cookies. Showed a screenshot and tested NAI website text with consumers. Opt-out varies from site to site: some companies stop collecting data if they can’t do behavioral ads. Google, on the other hand, aggregates data. They put an opt-out cookie but still collect the data as if it all came from one big superuser named opt-out; Yahoo! collects the data and just doesn’t use it. If you visited this site, what would you think it is?

Only 11% said NAI is a site that allows companies to profile you, but not show you ads based on their profile. Equal to the percentage who think it’s a scam: 6% think it’s a scam to collect your private information. 5% think it’s a scam to find out which websites you’ve visited. People who actually go to the website probably know what they’re looking for a little better, but that’s not good. (Other responses were misunderstandings of what NAI does. 25% answered “A website that lets you tell companies you do not want to see ads from them, but you will still see as many ads overall.” This is incorrect because companies continue to serve ads, just not targeted ads. 18% answered “A website that lets you see fewer online ads.” This is wrong and prominently disclaimed in the NAI text.)

Did a pilot study on what consumers thought “do not track” means. What data can a site collect before you click do not track, and what after? 10% think that websites can’t collect data at all before you click. 60% expect no data collected after they click do not track. Huge red flag! People think information is aggregated right now—almost 90% say that’s going on right now. After clicking DNT, they think information should not even be aggregated. They think DNT applies to first parties. Only 12% think that tracking ads they’ve seen would be allowed (frequency capping). Fewer people understand that browser information goes out today; if they do understand, they’re more likely to understand that browser information would continue to be protected.

Users don’t understand how the internet works or data flows. Think privacy is already protected. Current practices cause surprise. They are ok with free sites, but do not think data is part of the deal. Given choice, users prefer random ads to targeted. Current measures don’t appear to address misunderstandings.

Wong: is this purely an educational issue?

McDonald: depends on how you define education. There are ways in which user interfaces are discoverable in other contexts; techniques used there could be applied in these contexts. Janice Said (sp?) put up green boxes next to search results for how well or poorly companies handled privacy. People were willing to pay more for companies that protect privacy. But you can’t ask people to read through privacy policies.

Chester: there is no way an individual can understand, much less control, the system for marketing. Content and data and transactions are seamlessly merged. That’s why we need regulation.

Collinger: there’s always been a gap between actual and self-reported behavior. Real challenge in this space. Even the people who report maximum satisfaction with their vehicles only repurchase the same brand half the time.

Chester: ask who is setting the agenda. What are Google and Facebook saying to tech companies? Whose values are being served?

McDonald: users are frustrated with Facebook, but saying no is difficult given the powerful network effects.

Chris Hoofnagle: to Collinger: say more about succeeding without data—Whole Foods offers better food, which is why it can succeed. Also, re: Chester—everything affects our autonomy; we have fraud in the inducement all over the place and it’s not considered a consumer protection problem. Don’t we have to put up with it to some extent? How do we think about when we shouldn’t have to put up with it?

Chester: needs to be transparent: who is shaping this environment, in what ways? We don’t have a system in real life where there’s one ad that other people see on TV—an ad honed to change your behaviors, and that’s extremely powerful and undemocratic. As we allow companies and governments to have greater access to this tech without accountability we’ll see new mechanisms of control.

Collinger: his message is that it isn’t that companies that have data are handicapped or vice versa, but that there are ways of understanding/delivering on unmet needs that don’t always require understanding the customer at the granular, individual level. As long as that’s true, it reminds us that talking only about the data is leaving out the whole. There are ways to use data well and still fail if you get the brand wrong, and ways to win even if you don’t use data well.

Q: are there differences in understanding due to age? Why don’t people understand this stuff?

McDonald: it’s difficult to figure out why people don’t know these things; we could look at how people come to know the things they do know. Age: we looked at it and found comparatively few differences based on age. Subjective answer from the in-depth user studies: people in their late 30s/early 40s seem to know the most about what’s going on. The generation older manages to get computers to work and uses them as a tool; the generation after—19-year-olds—knows less; they use FB because their parents do, so it must be safe. That’s not what you hear from advertisers.

Q: front page NYT article about German politician who found out how much information the cellphone provider had about him—maybe politicians will do something once they understand this.

McDonald: do not track has potential in areas like that too—doesn’t have to be limited to OBA.

Collinger, in response to question about whether good messages would prevail even without data: it’s the content, stupid! Well-targeted bad ads don’t work, and you need to earn customer trust for targeting. That will be part of the future way in which people evaluate businesses. We’ve all agreed, consciously or otherwise, that we’re happy to trade off Amazon knowing everything to get the benefits. Those are incremental decisions; he thinks the good guys will win.

Panel Five: Regulating Online Advertising
Moderator: Jennifer Bishop, Yale ISP

Chris Hoofnagle, Berkeley School of Law

Research questions remaining in the field: how we feel as a society about price discrimination. We react badly to price discrimination in some contexts and not others. Another: how much OBA is worth over contextual ads—there’s a paper by Howard Beales that doesn’t answer that question and doesn’t define the terms. Data retention: how long do advertisers need data to target? People give different answers. Whether OBA grows, shrinks, or divides up the ad pie. The cost of self-regulation—if Evidon charges 1 cent/CPM, that has costs as well. Alternatives: local OBA, where targeting could be done on your computer without privacy issues, though cryptography would be necessary. Deep packet inspection might be the most privacy-friendly way out of this problem. Comcast already knows who you are, they have the full pipe, and they’re governed by electronic privacy laws—you have a right to sue them if they violate your privacy.

Need more time with the landscape of the industry, because right now the debate in Washington is: regulate the internet yes/no. There are more options than that! NAI and advertisers think they’re saying the right things and not getting the right outcome—we’ve lived with NAI for 10 years, and it lacks credibility.

Say you’re Williams Sonoma and you want to know someone’s home address when they buy in the store. But they don’t want to tell you, and California law says you can’t ask for personal info. What you do: you hire Acxiom, which can combine zip codes and credit card or check routing numbers to find their home address—you trick them into giving this info—marketed as a service that “lets you avoid losing customers who ‘feel’ that you’re invading their privacy.” Suppose users are deleting cookies. United Virtualities: flash cookies, which people can’t control—they say “users don’t know what they really want,” so we’ll track them that way. When a consumer deletes normal cookies and retains the flash cookie, the normal cookie can be resurrected. Finally, when you realize visitors to your site won’t accept third-party cookies because of cookie blocking in IE6, you can get around it by posting a compact privacy policy—even a blank one. A huge number of sites can therefore circumvent cookie blocking.
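The compact-policy loophole mentioned above can be sketched in a few lines. This is a hypothetical illustration only—the function name and the policy tokens are invented, not taken from any real ad server: IE6's default cookie filter looked for a P3P compact-policy header on the response and evaluated its tokens, without verifying that the tokens reflected the site's actual data practices, so shipping a boilerplate policy string was enough to get a third-party cookie accepted.

```python
# Hypothetical sketch of the IE6 compact-policy loophole described above.
# The function name and CP token string are invented for illustration.

def third_party_ad_headers(track_id, compact_policy='CP="NOI DSP COR NID"'):
    """Build response headers for a hypothetical third-party ad request.

    IE6 checked the P3P header's declared tokens, not the site's real
    practices, so a boilerplate compact policy let the cookie through.
    """
    return {
        "Set-Cookie": f"uid={track_id}; Path=/",
        "P3P": compact_policy,  # arbitrary tokens satisfied the filter
    }

headers = third_party_ad_headers("abc123")
```

The point of the sketch is how little the workaround required: no change to data practices, just one extra response header.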

Self-regulation needs to recognize that the industry has not been friendly to law, pro-privacy technology, or consumers who want pro-privacy defaults. Pineda, the Williams Sonoma case: this was a circumvention of the “no asking for the address” law.

What would credible self-regulation look like? A statement of policy purpose. Measurable standards. Operation and control separate from industry. Complaint procedures. Penalties. Whether it’s funded. Whether it updates itself. Whether it includes new actors and independents. Whether it supports competition.

Look at leading self-regulatory organizations’ statements of purpose: EASA, the European organization, doesn’t even invoke privacy as a goal. It’s used instrumentally, as in “privacy policy.” It doesn’t discuss privacy as a legitimate interest until p. 35, when it mentions deep packet inspection—an easy target, since these actors don’t engage in DPI and are therefore willing to condemn it.

Sensitive information: these standards sound great until you read the definitions. Medical information = “precise” information about past, present, or future medical conditions. What does that mean?

Operational control: NAI isn’t even its own independent nonprofit. It’s a project of the Digital Policy Forum; it has only 4 employees, with 1 on compliance. Complaint procedures are largely illusory. A lot of what self-regulators do is absorb complaints that would otherwise go to Mary Engle. TRUSTe was getting more privacy complaints than the FTC. EASA has a highly developed complaint procedure: should a company continue to breach the rules on a persistent and deliberate basis, it will be referred to the FTC-equivalent. Again, what does that mean? That doesn’t even lead to the revocation of the seal they give!

Updating is a big issue in NAI. Membership in the NAI dipped as soon as the Bush FTC said it wouldn’t do anything—down to 2 members at one point. Even created an associate membership that didn’t have to comply with all the rules! Now NAI membership is back. (World Privacy Forum 2007 report, worth checking out.)

There are a lot of companies in OBA. Evidon says over 600. NAI only has 66 members.

How to build a positive agenda for privacy advocates and consumers supporting real self-regulation? Embedded in the industry is a lack of respect for individuals. People just “feel” their privacy is violated. Thus, the norms the self-regulators come up with lack protection—NAI still allows tracking of an opt-out as long as it’s not for OBA. NAI initially farmed out complaints to TRUSTe, which quickly stopped reporting on the issue. This is why people are at the point of wanting regulation, not self-regulation. That’s why advertisers’ arguments aren’t working.

Alison Pepper, Interactive Advertising Bureau

Advertising trade groups: IAB is a trade ass’n representing about 470 different companies in the online space.

NAI is being held up as the model (failure). When organizations got together about what to do about self-regulation, it was on the heels of 2007 FTC workshops. Ad networks (NAI) aren’t the only component, just a piece of the ad ecosystem. Publishers, service providers, agencies, advertisers engaged in meetings. Privacy did come up!

What would make self-regulation effective? Enforcement/accountability. A recent study found no violations of the law in foreclosures—that’s not credible. Enforcement means you will find bad actors. If an industry finds that everyone is in compliance, it will lack credibility.

Self-regulation also requires widescale adoption and implementation. Two-pronged education: consumer point of view—consumers don’t read privacy policies, Gramm-Leach-Bliley notices, etc. With the new ad info icon, you don’t even have to leave the site to find out what info is being collected. Also have to educate the business community. IAB is working on that right now.

What makes self-regulation fail? Lack of widespread adoption; lack of enforcement/accountability; evolution of principles as tech and norms change. The industry is not going to get 3 bites at the apple from the FTC.

OBA is a part of the overall privacy issue, not the whole. Offline restrictions—at what point do we merge two things that are the same and need to be regulated the same way. 15-20 years ago you got a warranty card; now you fill it out online.

Consumers and the power to say no. Consumers can strike bargains, as Pesta said yesterday, but they have to know what they’re agreeing to.

Business model issue: FTC hosted a workshop on the future of journalism. The inventory for ads has exploded but the demand is the same. More publishers competing for fewer ads. No one knows what that will mean, though consumers have a basic concept that ads support content. Another component of educational process.

Rebecca Tushnet, Georgetown Law Center

I’m the odd woman out: although I agree that privacy is extremely important, the focus of my work is what the advertisers say to you when they find you.

In yesterday’s panels, I heard eerie echoes of what copyright owners were saying 15 years ago: the fantasy of perfect control (if we just add enough tech, we will control everything we want to control and we will receive all of the consumer surplus from any transaction) has shifted from copyright owners to advertisers. Jason Kelly quoted a FastCompany article from 2008: “Thanks to the internet and digital technology, agencies are finding that the realization of their clients’ ultimate fantasy—the ability to customize a specific message to a specific person at a specific moment—is within their grasp.” And he spoke about advertisers planning on 100% targeting. The natural slippage is to think that 100% targeting is 100% control and 100% efficient, and neither of these is true, but the attempt to achieve them has risks beyond simply failing to realize the promise.

There is a sleight of hand in many of the promises about targeting—that somehow they’ll save publishers because they’ll unlock more spending. All else being equal, if an advertiser can target more efficiently, it can spend less money—something Kate Kaye mentioned yesterday when she talked about having a natural cap on what you can spend on political ads. More than that, all else is not equal: we are in a recession. Unless people have jobs, the best ads in the world won’t help them buy stuff. More generally, demand for the stuff in ads is not purely an artifact of how good the ads are. It also depends on the stuff and the audience. That’s what I mean about the fantasy of 100% control. (This discussion about consumer education was also had with respect to copyright owners: teaching consumers that all unauthorized copying was theft didn’t work either; it’s hard to get people to know something that their self-interest depends on not knowing.)

To get more specific, move from advertising to branding: marketing doctrine: you don’t own your brand. Consumers at the very least own a share of your brand. Pabst Blue Ribbon; Burberry plaid; Timberland boots: brands that were adopted by groups they were definitely not targeting. Forgetting this for a fantasy of choosing the proper consumer leaves money on the table. Maybe the response is, ok, once a nontargeted consumer picks up on the brand we’ll start targeting that group, but at least we should recognize that this is reactive, not perfect optimization or price discrimination.

I am also reminded of the copyright owners of 15 years ago when we discuss whether the internet will survive without strip-mining consumers for their data. Back then, the argument was: no one will put cars on the information highway if we don’t have perfect digital rights management and direct liability for ISPs on whose networks copyright infringement takes place. Now, the argument is: no one will have free content if they have to use contextual marketing instead of maximum behavioral advertising. Well, maybe, but all those rickshaws and bikes and trucks, and, yes, cars out there on the information highway suggest that maybe we don’t need to allocate maximum control over any input to any one interest group in order to get the system to work.

Other than that, my message is just: don’t forget that there are still issues with the content of what gets delivered. In my thought piece: I wrote about the interaction between trademark law and submerged marketing—nondisclosed marketing. The FTC has moved very clearly to require transparency in online marketing with respect to source, that is, with respect to the information that this is advertising and not an independent opinion: consumers want to know when we’re being advertised to, and the revised endorsement guides support that desire, and expectation, that there will be a label allowing us to distinguish between advertising and editorial content or advertising and consumer reviews. The connection to the privacy issues is twofold: transparency (a lot easier to achieve in terms of the information delivered to the consumer—this is sponsored—though what that means to the consumer is of course still debatable) and consumer expectations—what does and should a reasonable consumer expect from the information out there in the marketplace?

We need to be careful with the moving parts in the advertising regulatory system: things we do in one place have effects elsewhere, and I’m not just talking about compliance costs raising prices. My example: With respect to disclosure, we sometimes hear the argument that sponsorship disclosure is unnecessary because reasonable consumers expect that content is sponsored. Trouble is, that means that, as trademark owners, advertisers have colorable arguments to claim control over anything that’s said about them online. Because, after all, reasonable consumers could believe that they sponsored it! If we want freedom to talk about trademarks online, to review them honestly, we need to hold to the normative principle that, absent disclosure of a connection, consumers need not expect a connection between a trademark owner and a discussion of that trademarked product or service.

More generally, I want to reassert the power of the normative. A lot of times we make descriptive claims about consumers that are really normative, since consumers don’t know what’s actually happening and can’t express their preferences.

Mayer: we’ve heard about the info gap between what’s going on and what users perceive. Minute user interface changes can have dramatic impacts on uptake of the tool. Recognizing how info-specific these tools are, can we expect self-regulation to provide the right tools, taking into account what’s known about user psychology?

Hoofnagle: are consumer expectations worth aligning with reality? A lot of expectations are just unreasonable—consumers may expect to have their cake and eat it too. Also, consider what would happen if the FTC’s interest changed. As soon as the FTC spotlight disappeared, NAI disappeared.

Pepper: Ad agencies did pro bono work creating the icon, holding about 10 focus groups with consumers asking them about the icon. We have discussed continuous funding for self-regulation—one way is the icon. License that to support the entity’s self-regulatory efforts.

Q: once you click on the icon—he went into Yahoo! and looked at what happened when he clicked and found a series of rabbit holes. A college education in privacy, if you have the time, but little specific information. Transparency has been thrown around, but that’s not transparent—most people would find it offputting. When you ultimately get to the NAI page, they say “why do you want to do this, when you won’t get relevant ads?” Seems to be set up to fail.

Pepper: companies have the right to argue the value proposition—you still have the right to opt out.

Q: but that’s the only factual proposition I saw beyond complicated discussions of how this works. The icon leads to a series of gates making it difficult.

Pepper: Yahoo! is longer than some she’s seen, but it’s 2 clicks.

Q: there are two columns, and if you hit the right ones you’ll hit the landing pages. Not transparent as to why you’d want to be there. Why can’t there be more factual data related to the individual clicking on the link: what we know about you, so you can decide whether you like us knowing that about you.

Pepper: implementation may be an issue—you are talking about Google’s ad preference manager?

Q: that would be a start.

Pepper: Initial criticisms: can you give consumer the ability to opt out of as much as possible in 2 clicks? This is not ideal, but it’s what we came up with.

Q: why not have a pro/con on opting out? Or offering people the opportunity to “be” the identity they chose—let them be a woman in Nebraska for a week? People would either like it or not.

Another Q, from Chester: We filed with the FTC and showed what an Evidon company (I wasn’t clear whether this was Evidon itself or an Evidon client) tells the consumer what they’re doing—it’s at odds with what they tell prospective clients they’re doing. System is not designed to disclose properly.

Industry is doing self-regulation to head off regulation; this is all about what the components of the safe harbor should be in a world of fair information practices. This system will be the equivalent of the EU regime, or at least it will be proposed to be. How to balance EU style privacy with self-regulation?

Hoofnagle: tactically, would recommend delay—let Europeans work through problems and come up with answers. And then you can do reverse preemption—EASA says you can’t use technical means to circumvent users’ choices. NAI says that in a FAQ but not in their principles. Ken Bamberger & Deirdre Mulligan have a great article on privacy on the books v. privacy on the ground—ambiguity might be better than legislation.

McDonald: the role of user expectations—there are points where it makes no sense to ask people what they expect. They have no idea what I’m talking about; it’s killed entire lines of research. We should distinguish between what consumers want normatively and what they expect. The FTC has moved away from fair information principles and more toward a model of expectations. This is narrow.

DPI? Really?

Hoofnagle: What’s the problem with DPI? Google says data collection isn’t a problem, it’s a problem how it’s used. If we only want the data for advertising and it’s ok for us to have it, then why not DPI? Smart people at Google say DPI breaks net neutrality, which is true. If the issue is collecting as much info as possible about a user and only using it for advertising, then DPI is an issue of degree and not one of kind.

Q: What parts of the law apply to Comcast? Or non-cable providers?

Hoofnagle: ECPA would apply to DSL. Circuit court has held that the Cable Act doesn’t apply, but that’s wrong. Congressional record reflects concerns about the cable company being able to look into the home. Brought under different facts, you could win: a different plaintiff who wouldn’t annihilate Comcast with a class victory.

Q: do you agree that collection doesn’t matter, only use? Sometimes original use is fine but use changes over time; government could ask for it. Policy slippage: we have this data, we may as well mine it.

Hoofnagle: He does not agree that collection doesn’t matter. Industry takes this position. Paper: “the ethical use of analytics.” If we can’t collect the data, we won’t get the untapped value. The value is innovation, but use restrictions prevent innovation too. The Fair Credit Reporting Act is based on ideas that you can’t opt out—imagine putting those values into effect for advertising.

Organization for Transformative Works fundraising drive

The Organization for Transformative Works, of which I am proud to be a member, is having its spring fundraising drive.  We’re going to have to go back to the Copyright Office next year to ask for another DMCA exemption, and we’re building more tools for fans to use every day. If you support noncommercial remix, the OTW is a great place to give.

Mad Men part 3

Panel Two: Online Advertising and Privacy
Moderator: Bryan Choi, Yale ISP

Jonathan Mayer, DoNotTrack.Us Project, Stanford University
Stateful tracking (something stored on your device—tagging) and stateless tracking (things that don’t require something stored on the device but nonetheless allow you to figure out the device—fingerprinting). Many stateful mechanisms in the browser, including http cookies, http authentication, etc. So many options that it’s hard to counteract all of them, or all potential ways to fingerprint a browser (looking at plug-ins, clock skew, etc.). Cruder way of preventing both: block the content that would tag or fingerprint the browser.
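Mayer’s stateless category can be sketched in a few lines of Python (illustrative only—the attribute names and values here are made up, and real fingerprinting collects many more signals):

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Combine observable browser attributes (user agent, plug-ins,
    screen size, timezone, etc.) into one stable identifier,
    with nothing stored on the user's device."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical browser configuration observed on two separate visits:
visit = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "plugins": "Flash 10.1;QuickTime 7.6",
    "screen": "1280x800",
    "timezone": "UTC-5",
}

# The same configuration yields the same identifier every time,
# which is why clearing cookies doesn't help against fingerprinting.
assert browser_fingerprint(visit) == browser_fingerprint(dict(visit))
```

This is also why blocklists are such a blunt instrument: the only reliable client-side defense is to keep the fingerprinting content from loading at all.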

Trouble with lists of allowed sites: they are both underinclusive and overinclusive; you must trust a third party; they require updating; and they break stuff (if you can’t use the “like” button on Facebook, no one will use the tech). TRUSTe allowed companies like Acxiom, but consumers weren’t in a good position to evaluate the merits.

Opt out: biggest is from the Network Advertising Initiative, 65 companies that you can opt out of in a click. It’s not comprehensive, a fraction of several hundred engaged in third-party tracking online. Also an issue of updating, making sure they don’t expire or clear optout cookies when you clear your cookies. Some browser extensions try to deal with that—e.g., Google’s Keep My Opt-outs.

Greatest weakness in cookie model: opt-out cookie is a promise not to target ads, not a promise not to track.
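That weakness can be made concrete with a short sketch (all names here are hypothetical; this is how a server could honor an opt-out cookie to the letter while still tracking):

```python
# A server that treats the opt-out cookie as "stop targeting,"
# not "stop tracking."
log = []  # the server's behavioral log

def serve_ad(cookies: dict, url: str) -> str:
    log.append((cookies.get("id"), url))   # the visit is still recorded
    if cookies.get("optout") == "1":
        return "generic ad"                # only the targeting stops
    return f"targeted ad based on history of {cookies.get('id')}"

ad = serve_ad({"id": "u42", "optout": "1"}, "/sports")
assert ad == "generic ad"
assert log == [("u42", "/sports")]         # opted out, yet still tracked
```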

Do not track: design objectives were: universal, no updating, one-click. You need to observe suspicious behavior and monitor ad distributions: if there’s really no tracking then browsers should not behave as if there were. Gave a demo of catching someone fingerprinting—the system they developed is enforceable.

Dan Christen, Microsoft Corporation
IE9 has various privacy features. Tracking protection lists come from multiple organizations; you can choose which third-party list to use. You can turn it on or off easily (for example if content you want to see is blocked). You can personalize and see what sites are tracking you. Does implement the header approach for do not track: an http header and DOM property, sent when a tracking protection list or personalized tracking protection is enabled. W3C has scheduled a workshop on possible further standardization of a do not track signal. Mozilla and Stanford submitted another proposal to IETF on do not track headers.
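The header approach is simple on the wire: the browser adds one header, `DNT: 1`, to every outgoing request, and a cooperating server checks for it. A minimal sketch (the ad-server hostname and path are made up):

```python
# What a request carrying the do-not-track header looks like:
request = (
    "GET /ad?slot=banner HTTP/1.1\r\n"
    "Host: ads.example.com\r\n"   # hypothetical ad server
    "DNT: 1\r\n"                  # 1 = user has enabled do not track
    "\r\n"
)

def user_opted_out(raw_request: str) -> bool:
    """How a cooperating server might check the header before
    deciding whether to track the request."""
    headers = raw_request.split("\r\n")
    return any(h.strip().lower() == "dnt: 1" for h in headers)

assert user_opted_out(request)
```

The hard part, as the panel notes, is not the signal but enforcement: nothing in the protocol makes servers obey it.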

A lot of activity on the browser side; Mozilla and IE are both pushing the header approach. Process should be allowed to continue to form a standard with wide participation.

Joseph Turow, Annenberg School for Communication, University of Pennsylvania
Words used in public discussion to describe what’s going on: queasy, icky, creepy. When lawmakers invoke the ick factor as a reason, society has a problem. We have a situation in which policymakers and industry are not converging on the fact that there is a problem. Executives in ads sense that policymakers haven’t worked through the issue well enough to present a succinct, logical argument about harm: they say, what’s the problem given that we have COPPA, HIPAA, Gramm-Leach-Bliley? This is just psychological, not real. This is a continual theme in the literature—that the public has been misled. Executives say the antidote to customer distaste is anonymity.

But look at what anonymity means in practice. We have to discuss what’s actually taking place. What is at stake even with anonymity is social discrimination via reputation silos. Issue is ability to control identity, sense of self, and notion that other people are defining them without their knowledge or control. Personalization goes beyond whether people buy. Ads and discounts are status symbols; they alert people to their social position. So Lincoln is offering an ad to heavy NYT readers—if they click on the ad, they get the rest of the year of the NYT free. How do I get that ad?

In the future, these calculations of our marketing value may become routine parts of the info exchanged about people through the media system. Whether they know your name or not will be irrelevant. Tech developed for ads allow targeting individuals with personalized news and entertainment. This is already happening—targeting TV ads to subscribers via cable—the logic is becoming more urgent to advertisers and publishers. It’s not just the impact on individuals, but the media ecosystems: what magazines, newspapers, and TV shows are. “Church/state” wall between editorial and advertising is falling apart. Ads will be packaged with news tailored both to the advertiser’s goals and the targeted individual.

So will people get angry about the segmentation/discrimination, or will they just learn to live with it (as they have largely accepted skyrocketing inequality elsewhere)? Industry’s real hope is that people will not use do not track lists. A few people who know will use them; so what. Public ignorance of potential implications will not alleviate the longterm dangers. We need information respect based on information reciprocity. Behavioral targeting is just one facet of this—3% of online activity. Need to look at what publishers do with our data generally.

If companies want to use information about individuals, let people know where those notions originated and negotiate about them. Permission may well raise the cost of using that information. But the payback will be in veering away from divisive marketing.

Julia Kernochan Tama, Venable LLP
Consumers may be unaware of data collection/third party involvement, or worried about choice. But consumers may also value free content and relevant ads.

Self-regulatory principles, from major players: education, transparency, consumer control, data security, consumer notification for change in how data is treated, special procedures for treatment of sensitive data, and accountability for implementing the other principles. Transparency requires multiple mechanisms for clearly disclosing data collection and use practices, and control provides for mechanisms to give users the ability to choose whether data is collected/used and transferred to a non-affiliate. Thus, the “advertising option icon” (I heard a lot about this at the ANA last week too).  Consumers can click through the icon to learn more about targeting and reach the opt-out page.

Accountability: BBB and Direct Marketing Association have accountability/monitoring programs. They identify noncompliant entities, follow up with them, and refer uncorrected noncompliance to the government, since being a member of these organizations while not complying with their requirements is deceptive.

Next steps: consumer and business education, promote industry participation, keep developing flexibly in response to technological change—mobile platforms, international integration, treatment of sensitive data. Flexibility is the strength of self-regulation.

Lee Tien, Electronic Frontier Foundation
We believe that most activity online is First Amendment activity: speaking, reading, association. Strongly protected against government regulation. The right to engage in these activities, including to do so anonymously, is very important. This does not depend on whether the information is “sensitive” in the sense of being medical. EFF is not focused on advertising, though we recognize it’s an incentive for collecting information—it’s not necessarily what needs to be controlled. Our concern is the surveillance. There is tremendous risk to privacy from government because of the accumulation of repositories of information either specifically or generally about preferences (to which the government may then seek access). Thus, he wants to think about when the info loses business value and should be destroyed, because it may never lose value to the government. So let’s work on preserving business value without creating a civil liberties sinkhole.

If we think we’re anonymous, we will say things we wouldn’t otherwise say. That’s a First Amendment freedom. When people are misinformed about the true anonymity of their activities, they make mistakes. It’s meaningless to talk about informed consent; people know what they do, but they don’t know what they do does. Though EFF supports Do Not Track to make clear how big the problem is, how many people will use it?

“Voluntary consent”—is this redundant? Fourth Amendment cases show that it is really hard to talk about the conditions under which consent was obtained. Who is really making the decision? A lot of times, it’s your browser.

Reidentification: 33 bits of information are sufficient to identify you uniquely. Only about 7 billion people in the world. Knowing your hometown, if it has 100,000 people, is worth 16 bits; a zip code can go way below that. It’s not the data you’re looking at that matters, it’s all the data out there—many bits of data can be put together to pinpoint you.
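Tien’s arithmetic checks out and is easy to verify (a sketch; the population figure is approximate):

```python
import math

WORLD_POPULATION = 7_000_000_000

def bits_to_identify(n_people: int) -> float:
    # Uniquely identifying one person among n requires log2(n) bits.
    return math.log2(n_people)

def bits_from_fact(group_size: int) -> float:
    # A fact that narrows you to a group of `group_size` people
    # reveals log2(world / group_size) bits about you.
    return math.log2(WORLD_POPULATION / group_size)

assert round(bits_to_identify(WORLD_POPULATION), 1) == 32.7  # the "33 bits"
assert round(bits_from_fact(100_000)) == 16  # hometown of 100,000 people
```

Because bits add up, a handful of individually innocuous facts (hometown, birthdate, browser configuration) quickly clears the 33-bit threshold.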

Mayer: there’s a void of information about behavioral ads. We don’t know how much more valuable it is. We don’t know how much more effective it is. We don’t know how widespread it is. The numbers tossed around earlier (yesterday) are from a small number of sources. What he’s seen suggests that behavioral advertising is more profitable, but not much more; this is status quo and might change.

We need the right incentives. Do not track could be a generative technology, allowing others to build on top of it—means of enriching ad value without compromising ad quality. You can do interest-based targeting without tracking. The user or the browser could list/transmit their interests, instead of a unique identifier. This could create value without privacy problems.

Turow: we’re at the beginning of the new world. Other kinds of tracking/utilities will develop. The real change will happen when TV—real TV—gets into the picture. People will be looked at individually and in household terms; tech already exists to send different ads/offers to people.

Tama: we know that behavioral advertising adds more value; the internet is largely ad-supported and companies need to experiment with other ways of supporting content. It’s all in flux. What would happen if you ended OBA tomorrow? No one knows, but there’s a concern for content drying up and innovation drying up. Senator McCaskill: we need to exercise caution so we don’t kill the goose that laid the golden egg.

Tien: we have a general problem larger than this one: our reliance on new media entities for services—ISPs see everything you do and thus threaten your privacy.

Christen: Value of each additional bit from the browser perspective will evolve over time as the tech evolves; the key is consumer choice.

Mayer: the goose is not at risk. OBA is relatively new; the goose is older. Even if you killed OBA, the goose would live (the state of advertising a few years ago).

Q: Talk about the distinction between opting out of use and opting out of collection? Those self-regulatory principles talk about opting out of collecting and using information “for behavioral advertising”—does that prohibition modify “collecting” information, or only “using”? One way to satisfy the requirement is not to collect, but another way is to collect for another purpose. Won’t at least some industry members interpret this in the most favorable way for themselves?

Tama: the principles are about use in OBA. What if you place a cookie—can the company still collect info for analytics and other ad delivery purposes? Yes, those activities are fundamental to the current model, and typically don’t raise the same privacy concerns. Frequency capping: I don’t want to get the same ad 200 times and the company doesn’t want to show that to me either. Capping doesn’t reveal anything about me, just allows efficiency.

Mayer: Many industry participants are interested in doing the minimum necessary to avoid government intervention. Back in the 90s, FTC got interested and industry formed NAI, which languished for a decade. Now the FTC is back, and so is NAI. The clickthrough rate on the advertising icon is .0035% and the overall opt-out rate is .00014%. When surveys indicate that the majority of users would like to opt out, we have a real market failure in users expressing preferences. We want to cut through the ambiguities in language and the weird ad icon interfaces and provide a mechanism so that users can easily find a clear policy statement. We want to design this with the consumer in the center; the incentives on the other side are misaligned.

Tama: self-regulatory effort is still being rolled out. She’s only seen a couple of the icons in the wild and the industry is working hard to increase awareness. Opt-out rates can mean that consumers aren’t concerned enough about it to make a change. There are ongoing developments like Google’s Chrome browser and we need to see how we can fit new tech into existing self-regulation processes.

Tien: disagrees about what can fairly be inferred from low opt-out rates. There is an attempt to frame discrepancies in behavior as lack of concern for privacy, but it’s really hard to believe that given some of the specific things research has shown consumers don’t know. One of the most important is that consumers don’t in the first place understand what a privacy policy is—they think the existence of a privacy policy means that the company is not sharing information. Given that, they may act unconcerned because of false beliefs. (It’s not as if they’re aware of the details of who’s serving the ads; they can easily believe that, as in the age of print, it’s the Washington Post that intermediates between them and the advertiser.)

Christen: it’s early to look at response rates—he hasn’t even seen one of those icons yet. It will take time to do education. (RT: Of course we know a lot about how asterisks and other disclosures work, or don’t, already; the icon is not the first teeny addition to the main text of an ad to which consumers have been exposed.)

Q: What are the international implications? So much of this is independent of boundaries.

Christen: that’s why we recommend tools for users to protect themselves.

Tama: her group wants to figure out how compliance with US self-regulation should count for European data protection rules. The ideal is for the self-regulatory standard to become the standard everywhere. (Yes, I bet it would.)

Panel Three: Youth-Oriented Online Advertising
Moderator: Seeta Peña Gangadharan, Yale ISP

Mary Engle, Federal Trade Commission
FTC is the enforcement agency for deception and unfairness in advertising. Disclaimer: these views are her own and not necessarily those of the commission or any individual commissioner. FTC has long been interested in protecting children against unfair/deceptive ads—brought a case against free distribution of razor blades with Sunday papers, as dangerous to kids/pets. In 90s, people encouraged kids to call 900 numbers which cost $2/minute and the kids didn’t know the cost; ads showed toys performing in ways they couldn’t in real life.

Now, kids have huge purchasing power, and spend 7 ½ hours/day with media. Digital natives: facile with technology, and there’s a temptation to forget that just because they have tech prowess doesn’t mean they have similar levels of emotional maturity. They are impressionable, bad at assessing risks, bad at delayed gratification, and often naïve about the intentions of others. Some practices may not cross legal lines but are still appropriate for self-regulation.

COPPA: Children’s Online Privacy Protection Act regulates collection of personal information on websites directed at children under 13 or where there is actual knowledge that children under 13 are users; actual parental consent is required. Currently looking at rules, which haven’t been amended since the rise of social media. One issue on the table is the definition of “personal information,” which right now includes name, address, city, etc. but not a lot of the kinds of info collected via behavioral advertising. Statute gives some flexibility: personal info is any info that allows online or physical contact with an individual. Geolocation data, when a young child has a smart phone.

2009 behavioral ad report: teens are most likely to be visiting general audience sites. Issues include protecting teens from making mistakes that will haunt them forever. Facebook may have some idea of the age of a visitor, but most sites don’t—what do we do then? Some sites can be expected to do vigorous age identification—buying wine online, for example—but not all. Also, teens deserve autonomy as well as protection—they have free speech rights as speakers and to access to information.

Kathryn Montgomery, American University
The marketing we acted on to get COPPA seems very rudimentary now, but advocates got regulatory action even though the industry said it would never happen. One site was advertised as a safe place online, but sold all sorts of stuff—this was the kind of thing that led to COPPA. Ads aren’t supposed to come into a child’s room and collect information from that child. We need rules of the game for marketing to kids under 13.

Digital media help children developmentally for exploring identities, self-expression, relating to peers, autonomy. This isn’t just about serving standard ads online. Food and beverage companies are in the forefront of innovative marketing techniques. Example: Doritos’ award-winning Asylum 626 campaign. “The more information you gave us at registration, the creepier the experience.” This is a quote from the ad, not from Montgomery. They used Facebook Connect to pull two of the teens’ friends and “put” them in the asylum and allowed the teen to pick which to save; then they invited the teens’ entire social network to try to “save” them. Then they “forced” the user to take the position of torturer in order to finish the experience (also they needed to buy Doritos to get codes to unlock the final level). The idea was engagement with brands (and, apparently, with the position of torturer). Connecting brand identity with teen’s own identity. Brands are measuring emotional connections with brands. Immersive, cross-platform nature of new media: virtual/gaming reality, putting the user in a subjective state, inducing “flow” and making them more inclined to accept the ad message.

Personalization: one-to-one tailoring of ads. User generated content: youth aren’t just viewing ads but creating them and distributing them—they take ownership of the ad content. Very different from traditional understanding of ads for children. Marketing fully integrated into social and personal relationships and daily lives of young people, and this is only the beginning.

What do we do? We need a regulatory framework—combination of self-regulation and government rules. Pay particular attention to adolescents, who are not invulnerable based on their cognitive abilities. Teen brains aren’t fully developed; they have emotional vulnerabilities to this kind of marketing. Particularly around health, privacy, and socialization into a new marketing system—media literacy is important, but not sufficient. Need a dialogue about fair marketing principles. We don’t just want to look at kids: privacy safeguards are for all consumers.

Wayne Keeley, Children's Advertising Review Unit
CARU is a self-regulatory unit of the BBB, formed in 1974. Addresses children under 12 and, for privacy, under 13. Guidelines operate as a safe harbor for FTC purposes. 50-100 cases/year, all transparent. Monitors thousands of commercials and hundreds of websites per year. 30-40% of cases are COPPA-related and the rest general advertising. Prescreens storyboards and rough cuts for advertisers, catching issues early.

Leslie Harris, Center for Democracy and Technology
How do we take advantage of current concern to get real results? In some ways COPPA is remarkable, since it’s never been challenged and there’s a great deal of industry alignment with the law. That’s no accident: there was very strenuous debate about whether to include teens, due to civil liberties groups, librarians, reproductive rights groups etc. (allied with industry). Taking teens out of the bill limited constitutional questions; didn’t try to boil the ocean—didn’t produce a full code of conduct, but directed only at sites directed to kids or sites with reason to know/actual knowledge that the kids were there.

COPPA has had some salutary effects, including increased parental involvement with kids online, but also some unintended consequences. Very few sites directed to kids—large brands dominate; smaller players/innovations have been pushed aside because there is in fact a cost to compliance. COPPA also led us to think that parental consent is the gold standard and that consent solves all problems. Consent is not the gold standard any more than opt-out is—it puts all the burden of figuring out what to do on the parent. People think they’re being given a safety seal, but we have not spent enough time on norm-setting. We need to ask on the other side of the transaction: what should be the norms for kids? Not what should the obligations for parents be. Should we ever use behavioral ads on kids under 13?

FTC needs a more robust idea of what counts as an unfair practice, especially as new law is likely to be unworkable. CARU is inadequate, given the interconnections in the online environment. How long should companies hold on to their data? Self-regulation should create norms, and then over time the FTC can start enforcing them. Unfairness and deception can be more powerful than currently interpreted.

People believed that age verification would develop and make things easier. The jury has now come back: we don’t have effective age verification, which is why COPA was struck down. Every entity looking into this has come to the same conclusion. (Compare what’s going on in the UK with internet content filtering implemented by the major mobile provider, O2.)

For teens: does it make sense as a matter of public policy to provide new protections for 17-year-olds that we won’t make available to 18-year-olds? Or to possibly vulnerable 80-year-olds? Some possibility of a comprehensive privacy law; the US stands almost alone in the developed world in having no ground rules generally for fair information practices. Risk that we will go back to doing what we’ve done all along—attacking the bright shiny object, so we have a privacy act specific to home video renting instead of a real comprehensive law. Her biggest nightmare: do not track for kids.

Montgomery, in response to a question about teens who say they don’t care about ads: teens don’t always know what’s affecting them. They can feel invulnerable and feel that they’re not being influenced, but ads aren’t presenting themselves as boring things you click on but as entertainment vehicles to be involved with. It’s our responsibility to ensure that young people understand more about the environment and have some safeguards against deceptive/unfair practices. Ads directed to that age group are mainly pushing unhealthy products high in fat and sugar. The food industry has not wanted to pay any attention to self-regulation to that group as opposed to kids under 12.

Harris: some people in the industry fear that self-regulation norms for teens will be enacted into law and that’s why they’re reluctant to act.

Engle: we definitely can’t rely on self-perception of whether ads work. Even if half of ad dollars are wasted, half isn’t.

Michael Rand, Baruch: where do parents and teachers come in?

Montgomery: Does not support parental consent for teens. There is a role for education, but media literacy is weak in the US; it’s not in all schools. People who do media literacy are ill-equipped to deal with the complexity of things like Asylum 626. Research hasn’t shown that media literacy training works over the long term. Need multiple strategies, including self-regulation, education, and government. Young people may otherwise have no sense of what their privacy is worth.

Keeley: Canada has media literacy as part of core curriculum; we should look at that.

Engle: FTC instituted an ad literacy program aimed at tweens: an interactive video game allowing kids to spot ads where they appear and ask questions about them. In partnership with Scholastic, they got a curriculum for teachers that any school can use. Remains to be seen how it will work, but the game is very engaging.

Q: we all tried to get comprehensive legislation in the 90s; the industry was opposed even to protecting children. Told us “we’ll never give you teens” because that’s the most lucrative market, and that’s still true. Global marketing research: children and teens are ground zero for research being done everywhere—to inculcate not just the ad modalities but the data collection, which can’t be separated out. If you want to see what online marketing to kids is really like, go here.

Harris: raising all boats and getting self-regulatory safe harbors to develop fair information practices for industry segments will get a better result than constantly segmenting populations.

Montgomery: we’d call for measures within a larger framework addressing the special vulnerabilities of teens, who may not be as familiar with privacy issues.

Tien: FTC unfairness jurisdiction: If we imagine a situation where in the next year nothing happens on the legislative level, then is there a way for the FTC to more actively use the notion of unfairness?

Engle: traditionally we’ve looked for economic injury or physical harm for harm that can’t reasonably be avoided by consumers (part of the unfairness standard)—can harm to privacy be included? We had a case against Sears when Sears offered rewards for consumers who agreed to have their online behavior tracked, but what was disclosed in small print was that it would look at online buying behavior and other sensitive information. FTC alleged deception for failure to adequately disclose what info was collected; but suppose they hadn’t—could we allege that collecting the info was unfair? Sears didn’t do anything with the info—thought it would be cool to have, but couldn’t figure out what to do with it. Hard to bring that case as an unfairness case. Active debate in the commission now about boundaries of harm.

Spyware: brought a case against a remotely installed keylogger tracking everything, including passwords. Experts could testify about potential financial harm and even risks from stalkers, but the harm should be cognizable even before those consequences materialize.

Harris: in looking at fact patterns, it is frustrating to think that the FTC can’t stop the manipulation of kids to disclose a lot of information. Goes back to kidvid days when the FTC tried to regulate sugary food ads for kids as unfair.

Engle: Congress took away our ability to regulate such ads for 14 years, and then codified the unfairness policy statement but said we couldn’t rely solely on public policy to show unfairness.

Montgomery: example of heavy constraints placed on FTC by industry.

Q: In Japan, age verification is done offline.

Keeley: there are a lot of models; in Europe they go up to 16 and the industry is looking at what can be done.

Harris: she’d oppose any parental consent for teenagers. Japan does have a new law, but she doesn’t think it’s particularly strong though it’s hard to criticize given that the US has no law. Credit card verification: anyone could be holding that credit card. It’s hard to talk about age verification without talking about identity, which is a bigger conversation.

Friday, March 25, 2011

Mad Men part 2: in which it develops that Google thinks I'm a man

Panel One: The New World of Digital Advertising: Technologies and Business Models
Part 1

Moderator: Emily Bazelon, Slate Magazine and Yale Law School
Scott Spencer, Google Inc.
Publishers who work with Google fundamentally want to provide the content free online: there is a tradeoff between ads and the price of a magazine, on or offline. What has changed: in the past, ads were bought based on the demographics of the readers. Newest change: buying process. You can now buy the audience independent of the publication, and that’s more efficient. If someone goes to a car site, I can target them with a car ad later, or if they abandon a shopping cart, I can remarket the items. Increases publisher revenue/advertiser’s effectiveness.

Drugstore purchases: items I buy may be very personal, but I let CVS scan my card and use my info any way it wants. Same with credit cards. Many offline parallels, but online is just more obvious.

Michael Blum, Quantcast
Quantcast as case study. The only way that people have traditionally tried to reach an audience beyond the existing one is by using demographics. But demographics are always a guess, wrong in both directions: targeting a demographic may be 50% right, but we don't know which 50%. So: take existing customers and find how they behave; then find people online who behave like that, and target those folks.
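To make the lookalike idea concrete, here's a toy sketch of what "find people who behave like existing customers" might look like. Everything here (the function names, the behavior vectors, the 0.9 threshold) is made up for illustration; it is not Quantcast's actual method, just the general pattern-matching flavor Blum describes.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length behavior vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalikes(seed_profiles, candidates, threshold=0.9):
    """Return candidate IDs whose behavior resembles the average seed customer.

    seed_profiles: list of behavior vectors for existing customers
    candidates: dict of {user_id: behavior vector} for the wider audience
    """
    n = len(seed_profiles)
    centroid = [sum(col) / n for col in zip(*seed_profiles)]
    return [uid for uid, vec in candidates.items()
            if cosine(centroid, vec) >= threshold]

# Toy behavior vectors: visit counts to [car sites, sports sites, cooking sites]
seeds = [[9, 1, 0], [8, 2, 1]]
audience = {"u1": [7, 1, 0], "u2": [0, 1, 9]}
print(lookalikes(seeds, audience))  # ['u1']
```

Note that nothing in this sketch knows anyone's name or what the sites are about; it only matches patterns, which is exactly Blum's point below.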

Jesse Pesta, Wall Street Journal
WSJ did a project: What They Know. This is about more than advertising. The ability to crunch numbers—mix databases including those that have existed for many years. Insurers are experimenting with the data pools and finding they can be as revelatory as a blood test.

Q: how to back up promises to consumers?

Blum: there are technological answers to problems like security breaches. There is still a distinction between personally identifiable and non-identifiable information; the scariest cases involve the former. Quantcast collects 2 quadrillion pieces of info a month, in a room of servers that no one at Quantcast could open and understand. It's a big pattern-recognition machine. It knows that Scott went online and searched for boots, but it doesn't know what boots are or who Scott is.

Spencer: There’s a difference between speeding and having a law against speeding. Going 3 mph over the speed limit is different from going 100 mph over. The key is to make sure that the things that are bad are curtailed, not the technology. Can a credit card company sell my data to an insurer? He’s not sure. It’s more transparent online.

Pesta: My name is personally identifiable information. If all you know is that I shopped for cowboy boots, that’s not personally identifiable. But these things meet in the database. If you know gender, age, zip code, shopping preferences—that probably does identify me uniquely. So is that personally identifiable information? Substitute “has diabetes” for “shops for cowboy boots.” What then?
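Pesta's point—that "anonymous" attributes meet in the database—can be made concrete with a toy uniqueness count. This sketch (my own, with invented records) measures the "anonymity set": how many records share a given combination of attributes. As more attributes are combined, the sets shrink toward 1, at which point whatever else is in the record attaches to one identifiable person.

```python
from collections import Counter

def anonymity_set_sizes(records, keys):
    """For each record, count how many records share its values on the given keys."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    return [combos[tuple(r[k] for k in keys)] for r in records]

# Toy database: no names, just "anonymous" attributes plus a sensitive field.
records = [
    {"gender": "M", "zip": "06511", "age": 44, "interest": "cowboy boots"},
    {"gender": "M", "zip": "06511", "age": 44, "interest": "golf"},
    {"gender": "F", "zip": "06511", "age": 44, "interest": "diabetes care"},
    {"gender": "M", "zip": "10027", "age": 31, "interest": "golf"},
]

print(anonymity_set_sizes(records, ["zip"]))                   # [3, 3, 3, 1]
print(anonymity_set_sizes(records, ["gender", "zip", "age"]))  # [2, 2, 1, 1]
```

On zip code alone, three records are indistinguishable; add gender and age, and the third record is already unique—so "has diabetes" now describes exactly one person, without any name ever being stored.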

Blum: personal health/other sensitive information should be treated differently, there’s a consensus on that.

Bazelon: so what happens once that information is collected to prevent its use to deny you health insurance?

Blum: maybe it shouldn’t be collected in the same way. EU has a rule: opt in before even collecting it.

Bazelon: with do not track, are we relying on companies not to collect such info? How do we ensure it’s not collected?

Spencer: many questions about what tracking means, etc. Useful to have some bright lines. We make an opt-out persistent for the user.

Bazelon: what are the unclear lines?

Spencer: resell data: if you’re comfortable with a first party seller using the data, why not allow resale?

Bazelon: is this a moment where we need government regulation?

Spencer: no official answer, but both government and self-regulation could work.

Blum: The NAI, mentioned above, represents advertisers who are very interested in showing that consumer choice is a top priority—not sitting around trying to avoid regulation, but trying to find something that is consistent, easy for consumers to understand, and could be adopted by market leaders as well as smaller companies.

Joe Turow, Annenberg School: heard rumors that Google is starting to desilo its data: Gmail v. contextual marketing etc.

Spencer: he doesn’t know. We take data privacy very seriously; consumers can look at what’s stored and can opt out, and we try to make the opt out work across browsers.

Linda Greenhouse: how concerned should she be that, after researching individual health insurance, ever since then, 5-10/day she gets invitations to buy health insurance? That, after researching odd health conditions, she gets ads related to those? Who thinks of her as an uninsured (Lucinda Williams-loving, per Amazon anecdote), multiply diseased person?

Pesta: Gets to the “creepy” place of reaction. Amazon is different because you have a relationship with Amazon, which you agreed to when you started buying from it. Amazon is paying attention within its own ecosystem, just like the guy behind the counter at the local bookstore. Other examples get closer to the creepy factor because you don’t know who surmised the existence of your status/diseases. There’s also collaborative marketing on Amazon (users like her).

Turow: but underneath the hood, Amazon is doing the same sort of thing that other sites do.

Pesta: but you did agree to the first-party relationship, like with the local donut shop.

Blum: it’s easy to ask “who’s watching” or say “it’s nobody’s business,” which is part of the creepiness factor, without understanding the technology. But in a lot of this tech there aren’t other people who know these things, and “we” aren’t being watched. No humans are involved. It’s just pattern recognition.

Bazelon suggests that this isn’t making anyone feel better.

Blum: but it doesn’t understand that boots function like shoes.

Greenhouse: but a human will make use of the knowledge.

Blum: not in many of these cases, though consumers should be given choices.

Greenhouse: but they could find out.

Blum: mostly they’re just interested in doing it by automation.

Pesta: follow the money—there might not be money in aspects of this business. Example from WSJ: man in business of online ads/tracking, and he said, in regards to whether he knew a real name—“no, and I have no interest, because it would be too costly and there’s no money in it.” But on the other hand, another company is associating people’s real names/addresses with these databases; they thought they’d found a way to make money.

Blum: the examples you gave are creepy in part because they’re health information, which should be treated differently.

Lee Tien, EFF: The right to forget: users being able to expunge data held about them. How valuable are those data as they age? Consumer advocates want to know: are there places where natural degradation of value becomes clear? A day, a week, 30 days? Mitigate potential costs to privacy from civil litigants/government access while not necessarily damaging advertising uses of those data.

Spencer: we try to limit retention, but he doesn’t know of any study of lifespan.

My thoughts, listening to this overall discussion: the fantasy of perfect control (if we just add enough tech, we will control everything we want to control) has shifted from copyright owners to advertisers, which isn’t a great surprise.

Blum: could distill data over time and keep only the most important parts for longer.

Tien: but what are the times?

Blum: 13 months—hard to see greater value.

Bazelon: 180 days for stored email is generally thought to be the line past which it should be harder to subpoena.

Amy Kapczynski, Berkeley: Aren’t we worried that Greenhouse is going to get charged more for health insurance? Either they’ll offer you a higher price, or they won’t put you in the “discount” group. Some of this tracking is about price discrimination, notoriously unpopular with consumers. Do we mean that Google shouldn’t track diabetes searches when we say sensitive information is different, or what?

Spencer: many aspects to this. I search for a computer and I’m offered 20% off if I go to a particular site. That’s great price discrimination for me. There are positive and negative use cases for the tech. Self-regulation attends separately to sensitive data, avoiding storage and use.

Kapczynski: how do we operationalize that for Google? Do you store those searches?

Spencer: we don’t store health related information, though it may work differently for things like Gmail—he only works in one area. We don’t use sensitive information in behavioral ads. (Though they do in contextual.)

Bazelon: per Greenhouse, clearly somebody is.

Pesta: nontraditional advertising: banks have started using flavors/varieties of this data to decide what sort of potential customer you might be, in realtime. Capable of offering the bank a quick general judgment, and the bank can then decide which credit card offers to show you. Is that price discrimination?

Wendy Seltzer, Princeton: we’ve heard about opacity to the end user, and that advertisers don’t want to disclose algorithms or get too close—incentives are against good notice to a user who might be creeped out in the moment. How could we shape incentives to require notice given through the tech, closer to the way consumers are using it? What might that look like?

Spencer: we believe in greatest possible transparency. You can go see what data we have about you. You can adjust them. Also, every ad, we say this is targeting and you can click on it. Almost nobody does, but they can, so it’s transparent; we can’t force them to click.

Blum: why don’t people opt out? Maybe they aren’t worried.

Bazelon: what happens if you opt out? Is it forever?

Blum: allows you to opt out once or forever.

Spencer: you can adjust your profile or you can opt out, and you can plug it into the browser so that it carries across (I thought this was just Chrome). (As it turns out, Google thinks I’m a man for advertising purposes.  Also it thinks I’m really interested in gymnastics. OK then. Given how much time I spend reading fan fiction—and not following gymnastics—I’m oddly disappointed by Google’s profiling abilities. On the other hand, were Google to de-silo its information by integrating Gmail, that bad profile would be much more accurate, and I don’t think I would like that.)

Pesta: we were aware of the ad bargain, but we weren’t aware of the personal information collection bargain we were making. Tools should make clear what bargain we are in fact striking.

Bazelon discussed having a Twitter impersonator. Even if you’re not a journalist, you’re probably participating in these social networks where you’re putting a lot of personal information out there. Maybe it just seems like a fact of life to her: you have to be careful and hope you don’t make a mistake you’ll be stuck with forever. She’s tempted by a right to forget, especially for teens/young people who aren’t thinking about the import of information they put online—they deserve more of a safe harbor than they get.

Pesta: He’s not the Jesse Pesta who hired a hit man to kill his family, if you Google him.

Hoofnagle: Costs of regulation? OBA looks a bit like a Ponzi scheme to him. Everyone is talking it up, but are the numbers revolutionary or incremental? Is there anyone doing good work to show the revenue gain over contextual models—which supported the web until this time?

Spencer: The cost of self-regulation is not necessarily in the checks we pay to the NAI. The real cost is that we look at the tenets of the initiatives when we’re developing a product, building in an opt-out. That has much bigger costs, because it takes away time, but we’re committed to notice, choice, and opt-out. As to whether OBA works: studies he’s seen indicate that it does—the ability to use audience data does make it possible to get an offer in front of a consumer that has more of an impact (he said “more impactful,” but I refuse). Consumers do like ads that are relevant. Advertisers who can target their ads are willing to spend more of their budget doing so, which increases money paid to publishers.

Q: NAI has members that include Google and Quantcast, but some members will also target people based on their “social graph,” and others will target not just based on what disease you have but what stage you’re at. Some will take “I opt out” to mean “I opt out of receiving this ad via tracking.” So what kind of self-regulation/enforcement are we talking about?

Blum: you end up with a nugget of agreement, and then individual practices. Sensitive information requires opt-in consent, so maybe the company has that, but if it doesn’t, then he wants to hear about it.

Q: so self-regulation is a floor, but will it give consumers what they’re looking for?

Blum: the aspiration is to do that, and we have an incentive for members to comply because it doesn’t work unless all members do comply.

Daniel Kreiss, ISP: Maybe consumers need to get more comfortable paying for journalistic content, per NYT, if the bargain isn’t working. Especially if publications are no longer the go-to site for selling audiences to advertisers, and the intermediaries are now in charge of matching.

Spencer: There is a tradeoff: free content for ads. The more efficient that is, the more will be able to survive. Advertisers still want a strong, safe site environment where consumers will have a good contextual experience. They also want the ability to mine the relevant data.

Pesta: WSJ has a pay wall: the key issue is striking a bargain. Paying can be one bargain. (My questions: What kind of advertising does the WSJ do behind the pay wall? Is it contextual or behavioral? That is, can I get the privacy benefit of my bargain?) Newspapers made a blunder: after training people to buy their product for 100 years, they stopped asking people to pay. People don’t want to pay for the NYT because they haven’t had to for years.

Bazelon: in fact, a few big news organizations play a huge newsgathering role—the NYT might be in a category where the model could work.

Q: privacy concerns associated with a paywall? If the internet goes to an authenticated model, isn’t that just as privacy-risky?

Pesta: you enter into a deal with WSJ; the WSJ has your credit card number and your name, so there is a database. Traditionally that’s one of the most valuable assets of a newspaper: people willing to trade money for the product. Long tradition of renting the mailing list, though he doesn’t know if the WSJ sells its list.

Part 2

Moderator: C.W. Anderson, Yale ISP
Berelson, What Missing the Newspaper Means—studied responses to a newspaper strike in NYC. People initially said they missed knowledge of the public world. What they really missed: the weather report, the radio listings, and the ads. See also this piece.

Jason Kelly, AdMeld Inc.
His company: has 500+ publisher clients, from Fox to Pandora. Now there are demand-side platforms and supply-side platforms in between the advertiser and the publisher. (The advertiser is the demand side.) So you have a four-party chain. At least. There’s a lot of fragmentation. And a lot of money out there. Advertisers want to come online in a way that’s brand-safe and contextually relevant. (I just saw a mention that no one wants to advertise on 4chan.)

Realtime bidding on ads will more than double in 2011: $1.4 billion in online ads (realtime and nonrealtime). “Thanks to the internet and digital technology, agencies are finding that the realization of their clients’ ultimate fantasy—the ability to customize a specific message to a specific person at a specific moment—is within their grasp.” – 2009 article. (Speak of the fantasy of perfect control!) Some advertisers are going to 100% targeted ads. Many major media companies are trying to cut the middleman to sell ads online—a private exchange. Google and Yahoo! have them, and AdMeld has one allowing specific publishers to curate their own supply directed to advertisers—gives the advertisers more protection.
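For readers new to realtime bidding: each time a page loads, an auction for that single ad impression runs in milliseconds. A common design is a second-price auction with a floor, which the toy sketch below illustrates. This is my own simplified illustration (made-up bidders, prices, and floor), not AdMeld's or any exchange's actual mechanics.

```python
def run_auction(bid_requests, floor=0.10):
    """Toy second-price auction for one ad impression.

    bid_requests: {bidder_name: bid in dollars for this impression}
    Returns (winner, clearing_price), or (None, None) if no bid beats the floor.
    """
    valid = {b: p for b, p in bid_requests.items() if p >= floor}
    if not valid:
        return None, None
    ranked = sorted(valid.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top = ranked[0]
    # Winner pays the second-highest valid bid (or the floor), a common design.
    price = ranked[1][1] if len(ranked) > 1 else floor
    return winner, price

bids = {"car_advertiser": 2.50, "retargeter": 1.75, "brand": 0.05}
print(run_auction(bids))  # ('car_advertiser', 1.75)
```

The targeting data discussed throughout this panel is what lets a bidder like the hypothetical "car_advertiser" decide, in realtime, that this particular impression is worth $2.50 rather than pennies.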

Privacy: opt-in would be the most difficult for consumers. Relational marketing is good. AdMeld supports self-regulation; we’re just starting out. We believe in consumer privacy and transparency. It’s important to ensure that transparency mechanisms are cognizant of industry structure, which is changing rapidly. FTC is pushing for simplified just-in-time choice, not just a website privacy policy. One example: Evidon (which I saw discussed at the ANA last week).

David Ambrose, Scoop St.
Daily deals: group buying. We do neighborhoods: Manhattan, Brooklyn, soon part of Connecticut. Realtime bidding is neat, but a small business really just wants the customer to go through the door. So a deal is a way to do that. There’s a lot of education left for small businesses—a different demographic than we’ve been talking about. It’s also an older form of marketing: through email, not Facebook or other sites. Allows 2-way communication with consumers & small businesses. Lots of businesses have no idea what AdWords is or what it can and can’t do. They might hire a consultant to put that together. AdWords is really complicated. Daily deals seem like a good way to acquire new customers.

Many businesses ask: can you help me get the customer to return? Advertising as a dialogue.

Kate Kaye, ClickZ
Political advertising: done online to connect with donors. Creating a supporter base is really creating a database. They’ll ask for money, they’ll ask for help with your friends. Facebook ads are relatively cheap and can help later on with your lists.

Special election for Ted Kennedy’s seat: Coakley wasn’t doing a heck of a lot until a few days before the election. At that point, the only thing to do is get out the vote advertising. Can’t buy ads on CNN over the weekend. So they used self-serve platforms like Google and Facebook. They couldn’t do issue-based advertising; couldn’t use ads to counteract negatives. That has to be during the campaign when people are deciding who to vote for. Coakley’s campaign: a solid lead eroded by a great campaign that built momentum in the last month. Online ads alone wouldn’t have saved her, but it helps a traditional campaign.

SEIU was running anti-Scott Brown ads on; Brown’s campaign manager called up and said that he wanted to own the website. Made a direct buy a few days before the election. Rather than doing get out the vote, they used all 6 ad spaces on the homepage for days before election day. Many were video ads. Brown targeted 10 specific districts for GOTV and asked people to volunteer with the campaign. Even Google text ads were targeted in those districts. Their MO was translating what happened online to the real world. They sent text alerts to people whenever Brown or Coakley were on the radio—told them to call and ask a question about X. Brown’s spending was 5x that of Coakley on online ads.

Signing a petition online: she’s not saying it doesn’t help with the specific issue, but what it’s really about is recruiting you and finding you later. 2012 will be a real year for video ads. Persuasion message coupled with a clickthrough where you can learn more about the campaign/donate. A call to action, which can’t work as well on TV.

Anderson: raising money is about running TV ads. So is that the point of running internet ads to raise money?

Kaye: most campaigns spend maybe 5% on online advertising; most of the budget is to run TV and print ads. Given its cost and the audience they need to reach, they can’t use the TV budget—they could make awesome videos and websites, but in terms of online ad buys there’s only so much you can spend that’s valuable. They don’t have too much money online, but down the road they might. Online ad people have been sitting at the kids’ table and are just starting to get in on the talking points/daily calls. 5% will be the norm for a while.

Q: would Obama be president without the internet?

Kaye: hard to answer, but online organizing was crucial and revered. They did tons of interesting stuff and had a lot of money to play with. They spent $20 million online, which was nothing compared to print/TV, but way more than McCain spent.

Gangadharan, ISP: Does the daily deal constrain what small businesses you engage? A piano tuning business isn’t going to benefit so much.

Ambrose: true, some businesses work really well for this model—waxing at salons/manicures and pedicures—and others less so. Could consider packaging piano teacher with romantic dinner, though. See a lot of restaurants/spas/massages. Percent-off deals lead to impulse purchases. Small businesses are becoming savvier: they want to cap the number of people, they want to do specific days; they’ve just introduced personalized targeting.

Kelly: we don’t buy/sell ads ourselves. We’re a service provider for media companies for their inventory sales.

Anderson: paper dollars to digital dimes: people in the news industry don’t understand exactly why that happened. Is it better tracking (efficiency means lower prices), greater inventory, what?

Kelly: was the predigital cost structure the right one? No doubt that the number of platforms for reaching audiences has changed. It is clearly moving from dollars, maybe to quarters. Look at TV consumption—distracted attention means people have a laptop open when they’re watching TV. Is TV advertising still effective? Spending isn’t going down. The ability to reach you while you’re watching the Super Bowl and having a conversation on Twitter or Facebook is a benefit, though. Advertisers can target all the ad inventory surrounding that event. It’s not a 1:1 tradeoff.

Anderson: a big switch in power.

Kelly: that’s what publishers have faced when they move from an environment in which a NYT salesperson says to Ford “you’ll be on the front page of the NYT”—the advertiser values knowing where that content is. Now, supply exceeds demand. NYT can still command a premium, but $1 is different from $20.

Q: publishers are then told that they’ll make more money if they supply their data; this gives up on publisher autonomy and encourages content farming. (He clarified that he meant that this drove the intrusions into privacy/massive data gathering that drive the presentations at this conference.)

Kaye: she agrees there are way too many low-level pages lowering prices. But data exchange isn’t totally a negative for publishers.

Kelly: the advertiser wants the targeting. The publisher has to decide whether to provide it. Some media companies want to have their own tech so they don’t have intermediaries in the middle, whether those are ad networks or data companies. This is an attempt to recoup some of the lost value. In response to a question, Ambrose discussed the idea of TV everywhere: if someone who pays for Time Warner can authenticate his identity, why can’t he watch TV everywhere? Another possible model.

Ambrose: if you can control the channel (be Time Warner) that’s definitely a promising model.

Q: the idea is “we represent a certain kind of subscriber, so you don’t need to know more data about them.” With Scoop St., the key information is locational—is that the key piece of data? How much additional value does additional information provide? Even if you want perfect price discrimination it might be too hard to get there.

Kelly: buyer may not know what site the ad is going to; advertisers are willing to pay to know the context. Then you get beyond that: what other info is valuable? We’ve seen everything from 30-300% return on knowing additional information about the audience. Everyone in this room is a publisher. Retailers like Amazon and Wal-Mart are also publishers—they have content, they have audiences, they have data they could potentially package. That data appended to inventory would be 3x-10x more valuable to the advertiser.

Ambrose: small business perspective is: I want to buy an ad on the NYT—cachet of the name was important to them. For our space it’s too early to tell what that key piece of info will be, though location is obviously very important.

Anderson: it’s less important that data be accurate than that everyone (ad buyers, publishers) agree that they seem accurate. Down the road: fights over accuracy.

Kaye: already happening! Publishers can’t stand Nielsen, because Nielsen doesn’t agree with data in publishers’ log files.

Yale Law School, From Mad Men to Mad Bots: Advertising in the Digital Age

Welcome Remarks
Laura DeNardis, Yale Information Society Project

Technology, advertising, and social control. Info tech has a profound effect on advertising, and businesses are redesigning advertising and how it works. Targeted ads; new tech can test and measure reactions in response. Mobile devices/GPS allow other kinds of behavioral/contextual/locational tracking and realtime analysis.

Opening Interview With Ed Felten: The Evolution of Online Advertising
Interviewer: David Robinson, Yale ISP
Ed Felten, chief technologist, Federal Trade Commission

Q: What is behavioral advertising?

A: Ad shown depends on user’s behavior in the past. Contextual = you see an ad depending on what you’re doing right now—reading about golf, for example. Or the ad might be served based on demographics of publication’s readership. Behavioral goes beyond that: If you’ve been reading a bunch of articles on golf, you get golf ads. First-party: Amazon does that based on what you’ve been looking at on Amazon. Third-party: connecting the dots between different sites. Behavioral advertising thus drives a desire to track people over time.
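Felten's first-party/third-party distinction can be sketched in a few lines. The point of the sketch (entirely my own toy model, not any real ad network's code) is that a third-party tracker embedded on many unrelated sites can join one user's visits via a single cookie identifier, which is what "connecting the dots between different sites" means in practice:

```python
class ThirdPartyTracker:
    """Toy model of cross-site tracking via a shared tracker cookie.

    Each first-party site embeds the same tracker; the tracker's cookie
    identifier lets it join one user's visits across all of those sites.
    """
    def __init__(self):
        self.profiles = {}  # cookie_id -> list of (site, page_topic)

    def log_visit(self, cookie_id, site, page_topic):
        self.profiles.setdefault(cookie_id, []).append((site, page_topic))

    def pick_ad(self, cookie_id):
        """Behavioral targeting: use the user's most frequent past topic."""
        visits = self.profiles.get(cookie_id, [])
        if not visits:
            return "generic ad"  # no history: fall back to untargeted
        topics = [topic for _site, topic in visits]
        return f"ad about {max(set(topics), key=topics.count)}"

tracker = ThirdPartyTracker()
tracker.log_visit("cookie-123", "news-site.example", "golf")
tracker.log_visit("cookie-123", "recipe-site.example", "golf")
tracker.log_visit("cookie-123", "blog.example", "travel")
print(tracker.pick_ad("cookie-123"))  # ad about golf
```

A purely contextual system would look only at the page being viewed right now; the profile dictionary here is precisely the cross-site history that drives the "desire to track people over time."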

Q: Should individuals be worried?

A: Sure. The collection of this kind of information leads to creation of files about what people have done. Following someone around and gathering info about what they do can be quite revealing about personal matters: family, health, personal relationships, work, etc. Even if you’re fine with seeing the ads, the tracking done to enable it poses potential dangers. Worst-case scenario: using this information for health insurance, employment, housing decisions. People self-censoring to avoid creating a misimpression. The internet has historically been a great place to explore, but now it’s no longer true that on the internet nobody knows you’re a dog—they know which dog you are.

Q: These are prospective harms. If they’ve happened, they’ve happened quietly. Where is our BP oil spill moment?

A: That could happen. Someone with access to a large/sensitive body of data could use it in a really dangerous way. Or there could be an intrusion that was exploited.

Q: People don’t know what to do about it—don’t know they’re being tracked. But there are opt-out extensions; few who are vaguely worried have worked that hard to avoid being tracked.

A: if you start to describe how these things work, you quickly get into complicated technical issues and complicated privacy policies. Clarity would benefit consumers; the challenge is how to make that possible. Today consumers can use various tools like cookie controls on browsers, extensions and plugins, browsers themselves; you might avoid certain kinds of sites; but none of these really provide comprehensive protection. If a consumer asked him “in 60 seconds, what should I do to be safe from tracking online,” he’d be hard pressed to give an answer because there are so many different ways to track. Simplification and broad protection are important.

Q: about a billion people use services using behavioral advertising/tracking. What if people say “I don’t mind the ads, I don’t even care about persistent tracking, but I worry about my health information”? Is there a middle ground for avoiding really bothersome possibilities, not opting all the way out?

A: not an easy way today to opt out of just non-advertising, secondary uses of data. Self-help mechanisms in existence focus on blocking collection of data in the first place. It’s hard to draw the line between personal and aggregated information because the databases are big; what people often mean when they say “anonymous” is “we haven’t connected it to a name yet.” Consumers don’t want to be engaged in careful case-based reasoning about which situations should be connected, so you get a general intuition that the less released, the better.

Q: what can we do then?

A: people are agreed on broad types. First-party uses are generally ok. Third-party tracking using invisible technologies that are difficult to turn off and that may be used for dubious purposes are not. Debate is in the middle. We can deal with consensus cases through government or the private sector. There are issues with the scope of the opt-out being proposed in the industry, but at least the debate is going on.

Q: do we need a law?

A: we can get to a result consumers will be more comfortable with even without a new law, if the industries get together. But we may also find ourselves in a situation where new laws are on the table. (Disclaimer: he is not a spokesperson for the FTC.) Attention from policymakers sometimes focuses the mind of industry on what should happen. Consensus among stakeholders, if it provides appropriate protection, is preferable—but that’s the very question: will that process give consumers the protection they need and expect?

Q: what would you tell outsiders?

A: he’s only been at FTC a little while. Characteristic mistakes in dealing with the government: assuming we know less than we do. The FTC functions well and is very competent in the area in which it works.

Q from Chris Hoofnagle: Microsoft has a paper saying how hard it is to detect behavioral ads. How would we detect compliance or noncompliance? Network Advertising Initiative: 4 employees, only 1 dedicated to compliance; all 4 employees are shared with other companies—how can that person ensure compliance for 66 member companies?

A: First, there are nontechnical measures: if they are doing behavioral advertising, presumably they have clients they’re telling about it. The technical task depends in part on what the promises are—some may be easier to monitor than others. Also, the goal is to deter rather than to prevent 100%; don’t hold ourselves to a standard that we don’t apply to other law enforcement goals. We look for deterrence to reduce the harm of the activity. (I wonder what DRM Ed Felten would have to say to FTC Ed Felten on that point. I’m guessing it would be something about the worthiness of the things suppressed by the rule as well as the magnitude of what got deterred v. what remained.) Technically, you can set up systems to see what behaviors predictably produce what results.

Q: what about opt-outs who “free ride” on advertisers’ willingness to subsidize content?

A: depends on size of opt-out cohort. We can also ask: what will the site publisher choose to do with an opt-out? Will they show ads that bring less revenue? Will they say, sorry, you can’t see our site unless you agree—FTC staff report in Dec. asks for comment about that. In that case the consumer’s not free riding. FTC hasn’t said that sites should be compelled to provide a service to opt-outs.

Q: what about access to this data for litigation purposes? A divorce, law enforcement?

A: that question can’t be answered in the abstract. Availability of accurate evidence is in the abstract a good thing, but it needs to be balanced against the ways info can be extracted and used for purposes other than helping courts discover truth.

Seeta Gangadharan, ISP: is the FTC speaking to other agencies about privacy? NTIA/Rural Utilities Service, trying to bridge the digital divide.

A: there is definite interaction.

Q: Watson—a computer that can analyze language and come up with answers—how does the deployment of tech like this change the issues?

A: computers are getting better at dealing with freeform text and can extract more from it.

Q: how do you distinguish between first and third parties if there’s a recognizable affiliation in an ad network?

A: present proposals take a reasonable consumer approach: would a reasonable consumer think these entities are affiliated? (Consider that “affiliated” is being used here much more narrowly than trademark owners usually use it. If we used their definition, then nobody would be a third party.)
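The “reasonable consumer” test is a legal standard, but tools that enforce a first/third-party line need a mechanical proxy, typically a comparison of the site the user visited against the domain making the request. A minimal sketch of that idea (my illustration, not any proposal’s actual rule; the naive last-two-labels comparison stands in for the Public Suffix List that real classifiers use):

```python
from urllib.parse import urlparse

def is_third_party(page_url: str, request_url: str) -> bool:
    """Rough check: does the request go to a different 'site'
    than the page the user is visiting?"""
    def site(url: str) -> str:
        host = urlparse(url).hostname or ""
        # Naive: keep the last two hostname labels as the "site".
        # Real tools consult the Public Suffix List instead.
        return ".".join(host.split(".")[-2:])
    return site(page_url) != site(request_url)

print(is_third_party("https://news.example.com/story",
                     "https://cdn.example.com/ad.js"))    # False: same site
print(is_third_party("https://news.example.com/story",
                     "https://tracker.adnet.net/pixel"))  # True: different site
```

Note how the mechanical rule and the legal one can diverge: two domains owned by one ad network look like third parties here, while a reasonable consumer might (or might not) see them as affiliated.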

Q: consumers think do not track means that data won’t be collected, not that data won’t be used. How to deal with that gap in expectations?

A: A bunch of proposals out there defining tracking. Some just say that when you opt out, what you’re opting out of is seeing ads, not the gathering/use of information. That’s not what consumers really want or expect. Others tend to say info won’t be collected except—now we add exceptions—for click-fraud detection, frequency capping of ads; the different lists tend to be anti-fraud and bookkeeping oriented.

Q: how to tell consumers that?

A: most consumers want the experts to figure this out. Do Not Call: Congress made the framework about what counted as a covered call and what didn’t. The actual definition is more nuanced than most consumers probably expect. Bottom line, though: fewer calls during dinner, and that’s satisfying.