Panel Four: Psychology of Online Advertising
Moderator: Christopher Wong, Yale ISP
Jeff Chester, Center for Digital Democracy
All new technologies get hailed as bringing democratization, but there are always multiple impacts. The online advertising system has been designed to discriminate: to make decisions about you (and your friends)—what you spend, where you live, your race, your income, what kind of credit card rate you should get. All done in a completely nontransparent manner.
Interlocking components: ubiquitous data collection; the ability to reflect back information that addresses that individual; purposeful use of subconscious neuromarketing techniques to influence the conscious and unconscious mind. Realtime analysis: on websites, on mobile devices, during gameplay—they know enough about you to put you up for realtime auction. Google, Yahoo!, etc. run realtime auctions, now exported to Europe and China. The platforms have been designed with data collection at their core, driven by advertisers. The advertising industry has also invested significant resources in neuroscience: it feared that, with the rise of the internet, people would not spend attention on ads, and wanted to make sure ads stayed powerful.
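To make the realtime-auction mechanics concrete, here is a minimal sketch in TypeScript. Everything in it—the profile fields, the bidder, the prices—is an illustrative assumption, not any exchange’s actual protocol: an exchange offers an impression, with the user’s profile attached, to a set of bidders, and the highest bid wins.

```typescript
// Minimal sketch of a realtime ad auction: the exchange sends the user's
// profile to each bidder; the highest CPM bid wins the impression.
// All names and numbers are illustrative.

interface UserProfile {
  id: string;
  interests: string[];
  location: string;
}

interface Bid {
  bidder: string;
  priceCpm: number; // price per thousand impressions
  adUrl: string;
}

type Bidder = (profile: UserProfile) => Bid | null; // null = no bid

function runAuction(profile: UserProfile, bidders: Bidder[]): Bid | null {
  let winner: Bid | null = null;
  for (const bidder of bidders) {
    const bid = bidder(profile);
    if (bid !== null && (winner === null || bid.priceCpm > winner.priceCpm)) {
      winner = bid;
    }
  }
  return winner;
}

// Example: a hypothetical bidder that pays a premium for a matching profile.
const shoeBidder: Bidder = (p) =>
  p.interests.includes("running")
    ? { bidder: "shoe-ads", priceCpm: 4.5, adUrl: "https://example.com/shoes" }
    : null;

console.log(runAuction({ id: "u1", interests: ["running"], location: "US" }, [shoeBidder]));
```

In real exchanges the whole round trip—profile lookup, bid requests, auction—happens in the milliseconds before the page finishes loading, which is what makes the data collection Chester describes so valuable.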
Health marketers: consumers and health professionals are both targeted to ensure that people get certain prescriptions. Leads for many subprime loans were sourced through online marketing. Inside these ads are cultural cues as well—targeted based on ethnicity. We need to pay attention to these changes as well as to the benefits.
Tom Collinger, Medill Northwestern University
How advertisers find or create the individuals they want to target: the behaviors of people, not just the behaviors of companies. People are funny.
Behavior trumps intention. Databases allowed for prediction, which allows more relevance on a one-to-one basis. Digital grows those powers: addressable everything, including addressable TV (targeted to specific households), estimated at $11.6 billion by 2015. Advice from strangers is twice as trustworthy as advice from a journalist. Another commentary on the future of journalism, but the point is that consumers as a collective have tremendous influence and power, forcing companies to stop doing some things they did. If the consumer doesn’t think you’re trustworthy, you’re in deep trouble.
Targeting—who you’re talking to—is one of the three things that determine a message’s staying power, but what the message is will overwhelmingly drive whether it gets attention. Bad ads with great targeting still suck. Perfect messages with mediocre targeting, by contrast, will be shared by everyone—friends will happily pass them on to other friends.
David Ogilvy: “the customer is my wife.” (RT: Wow, that’s a loaded statement.) Don’t do anything that would upset your wife. “Just because I posted on the ‘being fat’ FB group wall doesn’t mean I need diet pill ads.” Consumers demand respect, and will deliver a “no” on messages as well as on products, services, and how the advertiser does business. There is always a context of consent: under what circumstances will people share information, sometimes incredibly intimate information?
Most grocery stores capture all kinds of info about you and don’t do anything with it. Whole Foods, which captures almost nothing, has a very successful business model. Data collection is not the foundation for all effective marketing. Financial services have perfect data and know everything, but do you feel like your bank understands you and markets to you as you wish you were marketed to? (Comment: perfect data except for where the heck the mortgage notes underlying securitized obligations are, that is.)
People are irrational. They are unable to demonstrate or articulate through behavior alone their unmet needs. Look at the last thing in the grocery basket: it was probably an unplanned and unexplained purchase. But people also reward greatness. The aim of marketing is to know and understand the customer so well that the product or service fits him and sells itself. You can do that with or without individualized data: McDonald’s does quite well without much individualized data.
Companies need to use behavioral data while understanding there are other contextual issues. Consumers will say no if the data are misused. Enable the entire enterprise so that the consumer gets a unified experience.
Aleecia McDonald, Carnegie Mellon University
Users’ views of online ads, behavioral advertising, opt-out cookies, and do not track—more detail is available in her longer piece.
Lab study with 14 participants, recruited for an “advertising” study. Asked them to define online advertising; the first answer was pop-ups, then banner ads and spam. Only one person attempted to describe behavioral ads—because that’s not how people think about online ads—describing them as a way to “exploit a person’s history.”
Once trust is damaged, it takes a long time to return—pop-ups aren’t a big deal now. Users may think of online advertising circa 1999. Users are just behind current practices.
Mental models of online advertising. They make analogies to the offline world, but they don’t understand the offline world either. One woman explained that online shopping is like offline shopping: you may be shopping in a public place but there’s a privacy issue with companies knowing where you spend money and time [that is, no one would follow you around]—so shopping online is also private. Another person said that online was like talking on the phone, and that recording conversations can be illegal, and companies will also follow cultural norms and expectations. People don’t understand that even if they’re on the do not call list they can be called for certain reasons (political, existing relationship).
People expect laws, lawsuits, decency, and publicity risk will protect their privacy. They think no company would want to be known as doing stuff that’s actually quite common.
Larger online study. 69% agreed or strongly agreed that privacy is a right and it is wrong to be asked to pay to keep companies from invading my privacy; 3% strongly disagreed or disagreed. 61% said that asking me to pay for companies not to collect data was extortion. 59% said it’s not worth paying extra to avoid targeted ads (5% disagreed/strongly disagreed), but that’s because 55% said advertisers will collect data whether I pay or not, so there’s no point in paying (4% disagreed/strongly disagreed). So when we look at low click-through on privacy, that’s because (1) they think they’re already protected and (2) they don’t trust advertisers—they think they’ll be taken to see another ad and that the opt-out won’t work. Only 11% agree/strongly agree that they hate ads and would pay to avoid them, 36% disagree/strongly disagree. People understand that ads support free content, but they don’t understand what happens to their data.
People argued with her when she described current practice—“this doesn’t actually happen!” Scenario:
Imagine you visit the New York Times website. One of the ads is for Continental airlines. That ad does not come to you directly from the airline. Instead, there is an ad company that determines what ad to show to you, personally, based on the history of prior websites you have visited. Your friends might see different ads if they visited the New York Times.
86% say this happens now. 11% say it doesn’t but could. 1% said never because of law; 1% because of consumer backlash.
Next, described Gmail:
Imagine you are online and your email provider displays ads to you. The ads are based on what you write in email you send, as well as email you receive.
39% say this happens now. 16% say never, because it’s illegal. 13% say it could never happen because of consumer backlash. 4% yelled at her for asking the question—horrible even to contemplate. 28% say not now, but it could happen. About 43% of respondents were Gmail users; 50% of them think it happens now—meaning half of Gmail users don’t realize it’s happening to them. Ad blindness is real. People don’t understand “ads are targeted” in the way that advertisers use that term.
Proposition: no one should use data from email because it’s private like postal mail. 62% agreed. Same number: it’s creepy to have ads based on my emails. It’s creepy to have ads based on sites I’ve visited: 46%. No one should use data from internet history: under 1/3, same between web and email. Glad to have relevant ads about things I’m interested in instead of random: 18% with behavioral ads, but only 4% for email. There are people who really want the benefits of targeted ads, though they don’t understand the data flows. Advertisers aren’t making this up. Slightly larger percentage, 20%, is completely against it for privacy reasons. Folks in the middle: why would we want better ads? Ads are things we ignore. Why give out data? Might be willing to make tradeoff for a benefit, but until they see it, they’re not interested in giving up their data. Last proposition: ok to have email ads based on content as long as the service is free: only 9% agreement. No difference between Gmail and non-Gmail users—they don’t have lower preferences for privacy and they aren’t better informed.
What can users do? NAI opt-out cookies. Showed a screenshot and tested NAI website text with consumers. Opt-out varies from site to site: some companies stop collecting data if they can’t do behavioral ads. Google, on the other hand, aggregates the data: it sets an opt-out cookie but still collects the data as if it all came from one big superuser named “opt-out.” Yahoo! collects the data and just doesn’t use it. (A sketch of these three opt-out semantics follows the survey results below.) If you visited this site, what would you think it is?
Only 11% said NAI is a site that allows companies to profile you, but not show you ads based on their profile. Equal to the percentage who think it’s a scam: 6% think it’s a scam to collect your private information. 5% think it’s a scam to find out which websites you’ve visited. People who actually go to the website probably know what they’re looking for a little better, but that’s not good. (Other responses were misunderstandings of what NAI does. 25% answered “A website that lets you tell companies you do not want to see ads from them, but you will still see as many ads overall.” This is incorrect because companies continue to serve ads, just not targeted ads. 18% answered “A website that lets you see fewer online ads.” This is wrong and prominently disclaimed in the NAI text.)
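For concreteness, here is a minimal sketch of the three opt-out semantics McDonald describes—stop collecting, collect but don’t target, and pool everything under one “opt-out” superuser. This is purely illustrative TypeScript, not any company’s actual implementation:

```typescript
// Three possible meanings of an "opt-out" cookie, as described above.

type OptOutPolicy =
  | "stop-collection"     // drop opted-out users' data entirely
  | "collect-dont-target" // keep the data, but skip it at ad-serving time
  | "pool-as-superuser";  // merge all opted-out traffic into one profile

interface Hit {
  userId: string;
  url: string;
  optedOut: boolean; // did this browser carry the opt-out cookie?
}

function recordHit(hit: Hit, policy: OptOutPolicy, log: Hit[]): void {
  if (!hit.optedOut) {
    log.push(hit); // normal behavioral logging
    return;
  }
  switch (policy) {
    case "stop-collection":
      return; // nothing is stored
    case "collect-dont-target":
      log.push(hit); // retained, but excluded when building ad profiles
      return;
    case "pool-as-superuser":
      log.push({ ...hit, userId: "opt-out" }); // one big shared pseudo-user
      return;
  }
}
```

The point of the sketch is that all three behaviors look identical from the user’s side—an opt-out cookie is set—while the data practices underneath differ substantially.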
Did a pilot study on what consumers thought “do not track” means. What data can a site collect before you click do not track, and what after? 10% think that websites can’t collect data at all before you click. 60% expect no data collected after they click do not track. Huge red flag! People think information is aggregated right now—almost 90% say that’s going on. After clicking DNT, they think information should not even be aggregated. They think DNT applies to first parties. Only 12% think that tracking ads they’ve seen would be allowed (frequency capping). Fewer people understand that browser information goes out today; those who do understand are more likely to understand that browser information would continue to flow.
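To illustrate the frequency-capping carve-out: under some proposals, a server honoring the DNT header would stop building behavioral profiles but could still count how often it had shown a user a given ad. A minimal sketch, in TypeScript—illustrative only, since the DNT specification was still being debated, and the request shape and function names are hypothetical:

```typescript
// Sketch: honoring DNT while still frequency-capping.

interface AdRequest {
  headers: Record<string, string>;
  userId: string;
  adId: string;
}

const impressionCounts = new Map<string, number>(); // "userId:adId" -> count

function handleAdRequest(req: AdRequest): void {
  const dnt = req.headers["dnt"] === "1"; // the DNT header: "DNT: 1"
  if (!dnt) {
    logForBehavioralProfile(req); // full behavioral logging
    return;
  }
  // With DNT on: no profile-building, but impressions may still be
  // counted so the same ad isn't shown endlessly (frequency capping).
  const key = `${req.userId}:${req.adId}`;
  impressionCounts.set(key, (impressionCounts.get(key) ?? 0) + 1);
}

function logForBehavioralProfile(req: AdRequest): void {
  // ... store URL history keyed to userId for later targeting ...
}
```

Even that much continued collection is more than most participants expected, which is the gap McDonald flags.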
Users don’t understand how the internet works or data flows. Think privacy is already protected. Current practices cause surprise. They are ok with free sites, but do not think data is part of the deal. Given choice, users prefer random ads to targeted. Current measures don’t appear to address misunderstandings.
Wong: is this purely an educational issue?
McDonald: depends on how you define education. There are ways in which user interfaces are discoverable in other contexts; techniques used there could be applied in these contexts. Janice Said (sp?) put up green boxes next to search results for how well or poorly companies handled privacy. People were willing to pay more for companies that protect privacy. But you can’t ask people to read through privacy policies.
Chester: there is no way an individual can understand, much less control, the system for marketing. Content and data and transactions are seamlessly merged. That’s why we need regulation.
Collinger: there’s always been a gap between actual and self-reported behavior. Real challenge in this space. Even the people who report maximum satisfaction with their vehicles only repurchase the same brand half the time.
Chester: ask who is setting the agenda. What are Google and Facebook saying to tech companies? Whose values are being served?
McDonald: users are frustrated with Facebook, but saying no is difficult given the powerful network effects.
Chris Hoofnagle: to Collinger: say more about succeeding without data—Whole Foods offers better food, which is why it can succeed. Also, re: Chester—everything affects our autonomy; we have fraud in the inducement all over the place and it’s not considered a consumer protection problem. Don’t we have to put up with it to some extent? How do we think about when we shouldn’t have to put up with it?
Chester: needs to be transparent: who is shaping this environment, and in what ways? Offline we don’t have a system where the TV ad you see differs from the one everyone else sees—an ad honed to change your behaviors—and that’s extremely powerful and undemocratic. As we allow companies and governments to have greater access to this tech without accountability we’ll see new mechanisms of control.
Collinger: his message is not that companies with data are handicapped, or vice versa, but that there are ways of understanding and delivering on unmet needs that don’t always require understanding the customer at the granular, individual level. As long as that’s true, it reminds us that talking only about the data leaves out the whole. There are ways to use data well and still fail if you get the brand wrong, and ways to win even if you don’t use data well.
Q: are there differences in understanding due to age? Why don’t people understand this stuff?
McDonald: it’s difficult to figure out why people don’t know these things; we could look at how people come to know the things they do know. Age: we looked at it and found comparatively few differences based on age. Subjective answer from the in-depth user studies: people in their late 30s/early 40s seem to know the most about what’s going on. The generation older manages to get computers to work and uses them as a tool; the generation after—19-year-olds—knows less: they use FB because their parents do, so it must be safe. That’s not what you hear from advertisers.
Q: front page NYT article about German politician who found out how much information the cellphone provider had about him—maybe politicians will do something once they understand this.
McDonald: do not track has potential in areas like that too—doesn’t have to be limited to OBA.
Collinger, in response to a question about whether good messages would prevail even without data: it’s the content, stupid! Well-targeted bad ads don’t work, and you need to earn customer trust for targeting. That will be part of the future way in which people evaluate businesses. We’ve all agreed, consciously or otherwise, that we’re happy to trade off Amazon knowing everything to get the benefits. Those are incremental decisions; he thinks the good guys will win.
Panel Five: Regulating Online Advertising
Moderator: Jennifer Bishop, Yale ISP
Chris Hoofnagle, Berkeley School of Law
Research questions remaining in the field: how we feel as a society about price discrimination—we react badly to price discrimination in some contexts and not in others. Another: how much OBA is worth over contextual ads—there’s a paper by Howard Beales that doesn’t answer that question and doesn’t define the terms. Data retention: how long do advertisers need data to target? People give different answers. Whether OBA grows, shrinks, or divides up the ad pie. The cost of self-regulation—if Evidon charges 1 cent/CPM, that has costs as well. Alternatives: local OBA, where targeting could be done on your computer without the privacy issues, though cryptography would be necessary. Deep packet inspection might be the most privacy-friendly way out of this problem: Comcast already knows who you are, it has the full pipe, and it’s governed by electronic privacy laws—you have a right to sue if it violates your privacy.
Need more time with the landscape of the industry, because right now the debate in Washington is: regulate the internet yes/no. There are more options than that! NAI and advertisers think they’re saying the right things and not getting the right outcome—we’ve lived with NAI for 10 years, and it lacks credibility.
Say you’re Williams-Sonoma and you want to know someone’s home address when they buy in the store. But they don’t want to tell you, and California law says you can’t ask for personal info. What you do: you hire Acxiom, which can combine zip codes with credit card or check routing numbers to find their home address—you trick them into giving this info—marketed as a service that “lets you avoid losing customers who ‘feel’ that you’re invading their privacy.” Suppose users are deleting cookies. United Virtualities: flash cookies, which people can’t control—they say “users don’t know what they really want,” so we’ll track them that way. When a consumer deletes normal cookies but retains the flash cookie, the normal cookie can be resurrected. Finally, when you realize visitors to your site won’t accept third-party cookies because of cookie blocking in IE6, you can get around it by posting a compact privacy policy—even if it’s blank. A huge number of sites therefore circumvent cookie blocking.
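A minimal sketch of that flash-cookie “respawning” pattern, in TypeScript. The Flash-bridge function and cookie name are hypothetical—this is not any real tracker’s code, just the shape of the technique:

```typescript
// Sketch: if the user deletes the HTTP tracking cookie, restore it from
// a Flash Local Shared Object (LSO), which survives cookie deletion.

// Hypothetical bridge into the Flash LSO store (e.g., via ExternalInterface).
declare function readFlashBackup(key: string): string | null;

const TRACKING_COOKIE = "tracking_id";

function getHttpCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
  return match ? decodeURIComponent(match[1]) : null;
}

function respawnIfDeleted(): void {
  if (getHttpCookie(TRACKING_COOKIE) !== null) return; // cookie still there
  const backupId = readFlashBackup(TRACKING_COOKIE);   // survived deletion
  if (backupId !== null) {
    // Silently rewrite the deleted cookie with the old ID.
    document.cookie =
      TRACKING_COOKIE + "=" + encodeURIComponent(backupId) + "; path=/; max-age=31536000";
  }
}

respawnIfDeleted();
```

The user’s deletion is undone on the next page view, which is exactly why Hoofnagle cites this as the industry being unfriendly to pro-privacy defaults.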
Self-regulation needs to recognize that the industry has not been friendly to law, pro-privacy technology, or consumers who want pro-privacy defaults. Pineda, the Williams Sonoma case: this was a circumvention of the “no asking for the address” law.
What would credible self-regulation look like? A statement of policy purpose. Measurable standards. Operation and control separate from industry. Complaint procedures. Penalties. Whether it’s funded. Whether it updates itself. Whether it includes new actors and independents. Whether it supports competition.
Look at leading self-regulatory organizations’ statements of purpose: EASA, the European organization, doesn’t even invoke privacy as a goal. It’s used instrumentally, as in “privacy policy.” Doesn’t discuss privacy as a legitimate interest until p.35 when it mentions deep packet inspection, which is irrelevant since these actors don’t engage in DPI and are therefore willing to condemn it.
Sensitive information: these standards sound great until you read the definitions. Medical information = “precise” information about past, present, or future medical conditions. What does that mean?
Operational control: NAI isn’t even its own independent nonprofit. It’s a project of the Digital Policy Forum, with only 4 employees, 1 of them on compliance. Complaint procedures are largely illusory. A lot of what self-regulators do is absorb complaints that would otherwise go to Mary Engle. TRUSTe was getting more privacy complaints than the FTC. EASA has a highly developed complaint procedure: should a company continue to breach the rules on a persistent and deliberate basis, it will be referred to the FTC-equivalent. Again, what does that mean? That doesn’t even lead to revocation of the seal they give!
Updating is a big issue with the NAI. Membership in the NAI dipped as soon as the Bush FTC said it wouldn’t do anything—down to 2 members at one point. It even created an associate membership that didn’t have to comply with all the rules! Now NAI membership is back. (World Privacy Forum 2007 report, worth checking out.)
There are a lot of companies in OBA. Evidon says over 600. NAI only has 66 members.
How to build a positive agenda for privacy advocates and consumers supporting real self-regulation? Embedded in the industry is a lack of respect for individuals: people just “feel” their privacy is violated. Thus, the norms the self-regulators come up with lack protection—NAI still allows tracking of an opt-out as long as it’s not for OBA. NAI initially farmed out complaints to TRUSTe, which quickly stopped reporting on the issue. This is why people are at the point of wanting regulation, not self-regulation. That’s why advertisers’ arguments aren’t working.
Alison Pepper, Interactive Advertising Bureau
Advertising trade groups: IAB is a trade ass’n representing about 470 different companies in the online space.
NAI is being held up as the model (failure). When organizations got together about what to do about self-regulation, it was on the heels of 2007 FTC workshops. Ad networks (NAI) aren’t the only component, just a piece of the ad ecosystem. Publishers, service providers, agencies, advertisers engaged in meetings. Privacy did come up!
What would make self-regulation effective? Enforcement/accountability. Recent study that found no violations of the law in foreclosures—that’s not credible. Enforcement means you will find bad actors. If an industry finds that everyone is in compliance, it will lack credibility.
Self-regulation also requires widescale adoption and implementation. Two-pronged education: consumer point of view—consumers don’t read privacy policies, Gramm-Leach-Bliley notices, etc. With the new ad info icon, you don’t even have to leave the site to find out what info is being collected. Also have to educate the business community. IAB is working on that right now.
What makes self-regulation fail? Lack of widespread adoption; lack of enforcement/accountability; evolution of principles as tech and norms change. The industry is not going to get 3 bites at the apple from the FTC.
OBA is a part of the overall privacy issue, not the whole. Offline restrictions—at what point do we merge two things that are the same and need to be regulated the same way. 15-20 years ago you got a warranty card; now you fill it out online.
Consumers and the power to say no. Consumers can strike bargains, as Pesta said yesterday, but they have to know what they’re agreeing to.
Business model issue: FTC hosted a workshop on the future of journalism. The inventory for ads has exploded but the demand is the same. More publishers competing for fewer ads. No one knows what that will mean, though consumers have a basic concept that ads support content. Another component of educational process.
Rebecca Tushnet, Georgetown Law Center
I’m the odd woman out: although I agree that privacy is extremely important, the focus of my work is what the advertisers say to you when they find you.
In yesterday’s panels, I heard eerie echoes of what copyright owners were saying 15 years ago: the fantasy of perfect control (if we just add enough tech, we will control everything we want to control and we will receive all of the consumer surplus from any transaction) has shifted from copyright owners to advertisers. Jason Kelly quoted a FastCompany article from 2008: “Thanks to the internet and digital technology, agencies are finding that the realization of their clients’ ultimate fantasy—the ability to customize a specific message to a specific person at a specific moment—is within their grasp.” And he spoke about advertisers planning on 100% targeting. The natural slippage is to think that 100% targeting is 100% control and 100% efficiency—neither of which is true—but the attempt to achieve them has risks beyond simply failing to realize the promise.
There is a sleight of hand in many of the promises about targeting—that somehow they’ll save publishers because they’ll unlock more spending. All else being equal, if an advertiser can target more efficiently, it can spend less money—something Kate Kaye mentioned yesterday when she talked about having a natural cap on what you can spend on political ads. More than that, all else is not equal: we are in a recession. Unless people have jobs, the best ads in the world won’t help them buy stuff. More generally, demand for the stuff in ads is not purely an artifact of how good the ads are. It also depends on the stuff and the audience. That’s what I mean about the fantasy of 100% control. (This discussion about consumer education was also had with respect to copyright owners: teaching consumers that all unauthorized copying was theft didn’t work either; it’s hard to get people to know something when their self-interest depends on not knowing it.)
To get more specific, move from advertising to branding: marketing doctrine: you don’t own your brand. Consumers at the very least own a share of your brand. Pabst Blue Ribbon; Burberry plaid; Timberland boots: brands that were adopted by groups they were definitely not targeting. Forgetting this for a fantasy of choosing the proper consumer leaves money on the table. Maybe the response is, ok, once a nontargeted consumer picks up on the brand we’ll start targeting that group, but at least we should recognize that this is reactive, not perfect optimization or price discrimination.
I am also reminded of the copyright owners of 15 years ago when we discuss whether the internet will survive without strip-mining consumers for their data. Back then, the argument was: no one will put cars on the information highway if we don’t have perfect digital rights management and direct liability for ISPs on whose networks copyright infringement takes place. Now, the argument is: no one will have free content if they have to use contextual marketing instead of maximum behavioral advertising. Well, maybe, but all those rickshaws and bikes and trucks, and, yes, cars out there on the information highway suggest that maybe we don’t need to allocate maximum control over any input to any one interest group in order to get the system to work.
Other than that, my message is just: don’t forget that there are still issues with the content of what gets delivered. In my thought piece: I wrote about the interaction between trademark law and submerged marketing—nondisclosed marketing. The FTC has moved very clearly to require transparency in online marketing with respect to source, that is, with respect to the information that this is advertising and not an independent opinion: consumers want to know when we’re being advertised to, and the revised endorsement guides support that desire, and expectation, that there will be a label allowing us to distinguish between advertising and editorial content or advertising and consumer reviews. The connection to the privacy issues is twofold: transparency (a lot easier to achieve in terms of the information delivered to the consumer—this is sponsored—though what that means to the consumer is of course still debatable) and consumer expectations—what does and should a reasonable consumer expect from the information out there in the marketplace?
We need to be careful with the moving parts in the advertising regulatory system: things we do in one place have effects elsewhere, and I’m not just talking about compliance costs raising prices. My example: With respect to disclosure, we sometimes hear the argument that sponsorship disclosure is unnecessary because reasonable consumers expect that content is sponsored. Trouble is, that means that, as trademark owners, advertisers have colorable arguments to claim control over anything that’s said about them online. Because, after all, reasonable consumers could believe that they sponsored it! If we want freedom to talk about trademarks online, to review them honestly, we need to hold to the normative principle that, absent disclosure of a connection, consumers need not expect a connection between a trademark owner and a discussion of that trademarked product or service.
More generally, I want to reassert the power of the normative. A lot of times we make descriptive claims about consumers that are really normative, since consumers don’t know what’s actually happening and can’t express their preferences.
Mayer: we’ve heard about the info gap between what’s going on and what users perceive. Minute user interface changes can have dramatic impacts on uptake of the tool. Recognizing how info-specific these tools are, can we expect self-regulation to provide the right tools, taking into account what’s known about user psychology?
Hoofnagle: are consumer expectations worth aligning with reality? A lot of expectations are just unreasonable—consumers may expect to have the cake and eat it too. Also, consider what would happen if the FTC’s interest changed. As soon as FTC spotlight disappeared, NAI disappeared.
Pepper: Ad agencies did pro bono work creating the icon, holding about 10 focus groups with consumers asking them about the icon. We have discussed continuous funding for self-regulation—one way is the icon. License that to support the entity’s self-regulatory efforts.
Q: once you click on the icon—he went into Yahoo!, looked at what happened when he clicked, and found a series of rabbit holes. A college education in privacy, if you have the time, but little specific information. Transparency has been thrown around, but that’s not transparent—most people would find it offputting. When you ultimately get to the NAI page, they say “why do you want to do this, when you won’t get relevant ads?” It seems to be set up to fail.
Pepper: companies have the right to argue the value proposition—you still have the right to opt out.
Q: but that’s the only factual proposition I saw beyond complicated discussions of how this works. The icon leads to a series of gates making it difficult.
Pepper: Yahoo! is longer than some she’s seen, but it’s 2 clicks.
Q: there are two columns, and if you hit the right ones you’ll hit the landing pages. Not transparent as to why you’d want to be there. Why can’t there be more factual data related to the individual clicking on the link: what we know about you, so you can decide whether you like us knowing that about you.
Pepper: implementation may be an issue—you are talking about Google’s ad preference manager?
Q: that would be a start.
Pepper: initial criticisms: can you give the consumer the ability to opt out of as much as possible in 2 clicks? This is not ideal, but it’s what we came up with.
Q: why not have a pro/con on opting out? Or offering people the opportunity to “be” the identity they chose—let them be a woman in Nebraska for a week? People would either like it or not.
Another Q, from Chester: we filed with the FTC and showed that what an Evidon company (I wasn’t clear whether this was Evidon itself or an Evidon client) tells the consumer it’s doing is at odds with what it tells prospective clients it’s doing. The system is not designed to disclose properly.
Industry is doing self-regulation to head off regulation; this is all about what the components of the safe harbor should be in a world of fair information practices. This system will be the equivalent of the EU regime, or at least it will be proposed to be. How to balance EU style privacy with self-regulation?
Hoofnagle: tactically, would recommend delay—let the Europeans work through the problems and come up with answers. And then you can do reverse preemption—EASA says you can’t use technical means to circumvent users’ choices; NAI says that in a FAQ but not in its principles. Ken Bamberger & Deirdre Mulligan have a great article on privacy on the books v. privacy on the ground—ambiguity might be better than legislation.
McDonald: the role of user expectations—there are points where it makes no sense to ask people what they expect. They have no idea what I’m talking about; it’s killed entire lines of research. We should distinguish between what consumers want normatively and what they expect. The FTC has moved away from fair information principles and more toward a model of expectations. This is narrow.
DPI? Really?
Hoofnagle: What’s the problem with DPI? Google says data collection isn’t a problem, it’s a problem how it’s used. If we only want the data for advertising and it’s ok for us to have it, then why not DPI? Smart people at Google say DPI breaks net neutrality, which is true. If the issue is collecting as much info as possible about a user and only using it for advertising, then DPI is an issue of degree and not one of kind.
Q: What parts of the law apply to Comcast? Or non-cable providers?
Hoofnagle: ECPA would apply to DSL. Circuit court has held that the Cable Act doesn’t apply, but that’s wrong. Congressional record reflects concerns about the cable company being able to look into the home. Brought under different facts, you could win: a different plaintiff who wouldn’t annihilate Comcast with a class victory.
Q: do you agree that collection doesn’t matter, only use? Sometimes original use is fine but use changes over time; government could ask for it. Policy slippage: we have this data, we may as well mine it.
Hoofnagle: He does not agree that collection doesn’t matter. Industry takes this position. Paper: “the ethical use of analytics.” If we can’t collect the data, we won’t get the untapped value. The value is innovation, but use restrictions prevent innovation too. The Fair Credit Reporting Act is based on ideas that you can’t opt out—imagine putting those values into effect for advertising.