Thursday, June 18, 2009

ABA Consumer Protection Conference, part 2: privacy

The Evolution of Privacy in a Facebook Age

Changing Consumer Expectations of Privacy: Can the Law Keep Up?

Moderator:

Lynda K. Marshall, Hogan & Hartson LLP, Washington, DC

How do we balance encouraging investment in tech with consumer protection/choice?

Eileen Harrington, Bureau of Consumer Protection, FTC

Setting the stage: In practice, consumers encounter a broad spectrum of data protection practices, and the use of various services/tools may reveal more than the consumer knows or desires. Email: consumers lose control over what happens to info once it’s sent. Similarly, posting pictures and information on Facebook leads to a surrender of control to other users, and to the site (given the ToS). These concepts are common sense, but the ease with which we pick up and use services like Twitter or FB can desensitize us to the privacy ramifications.

Where data collection/use isn’t transparent, the potential for harm can increase. Audiences and services can collect published content and make inferences from it. Problems also occur when people leave the services—FB pictures can still be tagged with your name.

What happens when collected data are used to recommend other products/services? It’s intuitively transparent when it’s part of an ongoing relationship with a website: Amazon. The consumer can exercise choice options or take business elsewhere. (Assuming that data isn’t shared with third parties who aren’t necessary to make the service work.)

Gmail: Google targets ads based on the content of email, similar to search results. Less transparent and voluntary than first-party behavioral advertising. Google’s disclosure is in the middle of its multipage privacy policy, and consumers generally ignore such policies. Consumers may expect more privacy for email than for surfing.

Third-party collection practices: based on activities on unrelated websites. Increases the possibility of data loss and unanticipated uses.

Researchware: survey info provided for market-research purposes; the consumer allows software onto the computer, which monitors substantially all online activities, including searches and content viewed, as well as potentially sensitive information. Data are generally anonymized and aggregated and can be destroyed within a few days. The degree of transparency and consumer control depends on the amount of disclosure at installation. The FTC has seen improper disclosures, in which case consumers are unlikely to understand the full scope of collection and use.
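A minimal sketch of the anonymize-and-aggregate step described here, in Python. The salt, events, and hashing scheme are invented for illustration; and, as the panel notes later, this kind of pseudonymization is no guarantee against re-identification:

```python
# Sketch: replace direct identifiers with salted one-way hashes and
# report only category counts. All data here are hypothetical.
import hashlib
from collections import Counter

def pseudonymize(user_id: str, salt: str = "panel-secret") -> str:
    """Replace a direct identifier with a truncated one-way hash."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

# Hypothetical panelist events: (panelist id, observed activity)
events = [("alice", "search"), ("bob", "news"), ("alice", "news")]

# Aggregated report: category counts only, no identifiers at all.
print(Counter(activity for _, activity in events))  # Counter({'news': 2, 'search': 1})

# Pseudonymized rows: keyed by hash rather than by identity.
print([(pseudonymize(uid), activity) for uid, activity in events])
```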

Deep packet inspection: consumers’ online activities are collected at the ISP level. This allows much broader info collection, because it lets the ISP monitor all traffic, not just activity within a particular network. Less transparent and voluntary, because there’s no interface through which the ISP can explain the practice, and consumers are unlikely to look for this in the ISP’s TOS. And there’s no way to disable it. The ISP has subscriber info, so it could identify the consumer.
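To make the mechanism concrete: a minimal, hypothetical sketch of payload-level inspection using the Python scapy library. It assumes unencrypted HTTP on port 80 and a vantage point that sees all of a subscriber’s traffic, as an ISP has; it illustrates the technique, not any ISP’s actual system:

```python
# Illustrative deep packet inspection: read application payloads, not
# just routing headers. Requires root privileges to capture packets.
from scapy.all import sniff, TCP, Raw

def inspect(packet):
    if packet.haslayer(TCP) and packet.haslayer(Raw):
        payload = packet[Raw].load
        if payload.startswith(b"GET") or payload.startswith(b"POST"):
            # Extract the Host: header -- every site the subscriber
            # visits, not just activity within one ad network.
            for line in payload.split(b"\r\n"):
                if line.lower().startswith(b"host:"):
                    print(line.decode(errors="replace"))

sniff(filter="tcp port 80", prn=inspect, store=False)
```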

Marshall: what are the differences between online and offline collection? Safeway gives me a Safeway card and knows a lot about me; I get a discount in return. Is that different?

Leslie A. Harris, President & CEO, Center for Democracy & Technology

That’s transparent: you agreed to swap Safeway info for discounts. Offline, signing up is more obvious. The offline world is far from perfect—we should have a baseline consumer privacy law that covers all data not specially treated by topic-specific legislation. But the online world, with its sheer capacity for data storage, makes the problem more salient.

Julie S. Brill, Senior Deputy Attorney General and Consumer Protection Chief, North Carolina Attorney General’s Office

It’s also aggregation that matters. She was talking to retailers who want to issue automated calls to consumers for recalls based on records of their purchases using grocery cards. Do consumers really understand that this is a possibility when they sign up for a grocery card? If they don’t understand how one entity uses the info, they understand even less about sharing.

Harrington: that gets to consumer expectation—why would they scan my card if they aren’t collecting information? And it would be good to get a phone call if my peanut butter is tainted!

Brill: Clearly there are benefits, but what else could it be used for?

Wendy Seltzer, Berkman Center for Internet & Society, Harvard University

Our sense of privacy in public is what’s at issue here. What distinguishes online information gathering from offline is how much information we’re voluntarily sharing and publishing, and the lack of transparency about how it will be gathered, how long it will persist, and how it might be used. What’s new is that everything can be saved: the ISP collects our info, the sites we visit do too, and so do third-party networks; and storage and computational power have increased the ability of anybody (e.g., Google crawling public information) to create profiles of individuals or “characters” using their sites. So we have tremendous new means of self-expression and community formation, but those interactions can be surveilled and recorded, potentially for use against us.

Self-regulation by giving consumers notice of privacy practices is the first move. But do we have a functioning market for privacy? Are consumers getting/processing adequate information? Do we have competition for privacy provision? Reasons to think the market isn’t working: (1) Information costs of reading/understanding privacy policies. How much time would it take an average consumer to read the privacy policy of each site visited in average internet use? A study suggests 81-293 hours per user, per year, just skimming. (2) These are persistent problems—consumers undervalue future costs and engage in hyperbolic discounting; they agree to ToS for a short-term benefit and don’t think about, say, the long-term costs of making party photos available for anyone to see. (3) Tech is accelerating so fast that even if we considered all the potential problems today, we don’t think about how aggregation might improve in the future, or how anonymous datasets might be deanonymized in the future—research suggests that “cleaned” datasets can be used to identify individuals, as with the release of AOL’s search queries.
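The deanonymization point can be shown with a toy linkage attack in Python: a release with names stripped can still be joined to a public record on shared quasi-identifiers. All records and field names below are invented for illustration:

```python
# Toy linkage ("re-identification") attack: cross-reference an
# "anonymized" release with a public dataset on quasi-identifiers.
anonymized_release = [  # names stripped before release
    {"zip": "02139", "birthdate": "1970-07-31", "sex": "F", "searches": "..."},
]
public_records = [      # e.g., a voter roll: names plus the same fields
    {"name": "Jane Doe", "zip": "02139", "birthdate": "1970-07-31", "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")

for row in anonymized_release:
    for rec in public_records:
        if all(row[k] == rec[k] for k in QUASI_IDENTIFIERS):
            # "Cleaned" data re-identified by cross-reference.
            print(rec["name"], "->", row["searches"])
```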

As a matter of policy, we should give reality to some of the illusions of privacy we have now. Regulate use, not just disclosure, to allow people to take advantage of new contexts for communication.

Marshall: Consumers often make tradeoffs in the beginning: Facebook is really neat; they won’t read the ToS or privacy policy, they just want to get on the service to join their friends. Given how quickly they decide, and how diverse the participants on services like Facebook are and thus how diverse their privacy preferences are likely to be—can we really measure consumer expectations of privacy?

Seltzer: Leery of imposing tech mandates, but we can do a better job of signalling how far the info will go, how long it will stick around. Maybe disclosures should be repeated periodically. Maybe we should allow people to see how other people see their profiles—do you want to be presenting this face to the various people who can see it?

Harris: Isn’t persuaded that there’s a generation that doesn’t care about its privacy; there may be a generation that has not yet encountered situations in which the need for privacy is apparent. But look at Facebook’s Beacon kerfuffle: when people are faced with concrete questions of information use, they have a different reaction than to the inchoate “do you value privacy?” We see that also with health information moving online. People are much more able to grasp the risk there. Every significant event leads to a public outcry, so don’t assume that there’s a baseline of disregard for privacy.

Marshall: is there a difference in the way you communicate choices based on generational gaps, educational gaps, or other differences?

Harris: We have to move this out of the privacy policy. Whatever the notion was when the FTC insisted on privacy policies, data practices have become so complex, and the amount of time people spend on the internet has increased so much, that the FTC’s mandate that one comply with one’s own privacy policy is simply not enough. The FTC’s principles on behavioral advertising indicate that the issue is going to have to come out of the privacy policy and become disclosure that is clear, concise, prominent, and consumer-friendly.

Harrington: Vladeck referenced the fact that the FTC thinks it’s time to revisit the framework. It’s been over a decade of looking at privacy. For a time we thought that the best framework was fair information practices; we shifted to a harm-focused framework. There are circumstances where notice and choice works, and lots of others where it doesn’t work at all—notice is too burdensome, or poorly timed, or doesn’t prevent harms. Sometimes the harm is so clear that the practice needs to be regulated. But in the broad middle, neither framework seems to really get to a result that works well for consumers.

Brill: AGs have been talking about this for years. Gramm-Leach-Bliley financial privacy notices: can consumers really understand these notices, especially when they have to opt out to avoid info sharing? The pendulum seems to be swinging back because consumers are now bombarded with these complex notices.

Harris: wouldn’t want to abandon notice and choice, but we need to rethink what that means. We’re in this tech-enabled world and have to think about how tech can be built in to empower consumers, including whether there ought to be particular defaults. A new working group report in Europe says social networks may have to set defaults to the highest privacy level. We’ve crossed a line: self-regulation is not going to work on its own, though it does help sort out the truly bad actors.

Brill: danah boyd analogizes social networking sites to the mall. But you don’t have the ability to personally inspect the person you’re interacting with to make sure, for example, that they’re closer to 14 than to 41. The size of the networks also makes it harder to have real interactions to assess safety. AGs have tried to place speed bumps on the fast-moving and anonymous info superhighway that social networks represent.

First, how do we keep kids safe from accessing adult-only info, and second, how do we keep adults from lying about their ages to engage in inappropriate interactions with kids? The MySpace agreement with the NY AG focused on a better dispute resolution system for when kids complained about adults. Other AGs also agreed on principles with MySpace—Texas didn’t sign because the agreements merely aspire to achieve age verification technologies, while Texas felt the document should have required age verification. Later, a similar joint statement was signed with Facebook. W/r/t minors, these are the two major social networking sites, and the agreements are similar, though not identical, because of differences in business model and also timing (a whole 5 months apart).

Low-hanging fruit—easy changes—and the brass ring (age verification tech). Everybody (but Seltzer) thinks it would be nice to get age verification, but we aren’t there yet. Easy changes: age locking—if someone signs up as a minor, they can’t change their birthdate without contacting customer service, and they can only do that once. True, smart kids may create two profiles, one with an older birthdate, but at least this catches the ones who tell the truth. Tobacco and alcohol ads won’t appear to minors. Adult entertainment groups will be blocked for minors, as will (known) pornographic images. How do you tell what’s pornographic? Law enforcement keeps databases of hash values for known images, and the sites review pictures against those databases (see the sketch below). Mature profile categories on FB have been eliminated for minors. Improved response time to complaints about inappropriate contact/content for minors. (Hmm. I wonder what danah boyd would have to say.)
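A hedged sketch of that kind of hash-list screening in Python; the hash value is a made-up placeholder, since real lists are maintained and distributed by law enforcement:

```python
# Compare uploads against a set of hashes of known illegal images.
import hashlib

KNOWN_BAD_SHA1 = {"3f786850e387550fdab836ed7e6dc881de23001b"}  # placeholder

def is_flagged(image_bytes: bytes) -> bool:
    # Exact-match hashing catches only byte-identical copies of known
    # images; re-encoding or cropping a picture defeats it.
    return hashlib.sha1(image_bytes).hexdigest() in KNOWN_BAD_SHA1

print(is_flagged(b"uploaded image bytes"))  # False for unknown content
```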

Big focus: removal of registered sex offenders from these sites. It seems to the AGs that these social networking sites aren’t in the business of allowing sexual predators to lure kids, so they’ve been cooperating in developing different approaches to remove registered sex offenders from the sites (if they’re using their actual identities). 90,000 have been removed from MySpace, using technology that scores profile data against state registry databases (along the lines sketched below). It’s difficult, but it is continuing. FB is going through profiles manually and claims to have found 6,000.
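A minimal, hypothetical version of such a scoring system in Python; the fields, weights, and threshold are all assumptions for illustration, not the actual technology MySpace used:

```python
# Score profile data against registry records; high scorers go to
# human review rather than automatic removal.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(profile: dict, record: dict) -> float:
    # Weighted blend of name and location similarity (assumed weights).
    return (0.6 * similarity(profile["name"], record["name"])
            + 0.4 * similarity(profile.get("city", ""), record.get("city", "")))

def flag_for_review(profile: dict, registry: list, threshold: float = 0.85):
    return [r for r in registry if match_score(profile, r) >= threshold]

registry = [{"name": "John Q. Doe", "city": "Raleigh"}]
print(flag_for_review({"name": "john q doe", "city": "Raleigh"}, registry))
```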

Age verification tech: Reminds her of the initial debate over security freezes as an aspect of fair credit reporting. The major agencies said it couldn’t be done, and now they all do it, even in states that don’t have laws requiring them to do so. We can debate whether age verification is good, but feasibility ought not be the end of the discussion. Reports say online predation is a problem, but not actually as big a problem for kids as physical-world harassment. AGs felt the data was too old; social networking sites have exploded so much since then, and law enforcement reports suggest matters are more serious than that. Finding the denominator on this issue is going to be very difficult.

If there isn’t voluntary action, the states are going to act themselves. North Carolina prohibits registered sex offenders from accessing social networking sites, and sanctions sites for failing to take reasonable measures to remove sex offenders.

Harris: The internet isn’t a credit reporting agency. It’s a First Amendment forum. Start with that difference in environment. It’s also a global medium, and what we do here has implications around the world—look at Iran. Age verification works well in an adult-only environment involving a financial transaction: buying alcohol with a credit card. It works badly when you’re trying to ID a kid (kids don’t have some kinds of ID) or trying to ID across countries. When you move to an identified internet, you move to a real-names internet (you can’t just classify adult/nonadult and have that information be useless for figuring out other identifying information; the credit card number, for example, identifies a specific person), compromising the basic right of anonymity online. If the US goes to real names, Iran and China do so tomorrow.

You can look at these problems narrowly and the solution seems right, but its effects on the system are more complicated. You’ll just push people into sites in other countries—Friendster was big but disappeared; some new site can come up.

Seltzer: Echoes all that. YouTube pulled out of South Korea after SK tried to put a real-name mandate on all content. We can’t say even that it’s just these social networking sites, because while she supports keeping sex offenders from interacting with children, keeping them off LinkedIn might actually be a pretty bad idea, preventing them from reintegrating.

Harris: instead, work with sites on more robust privacy protections and controls, and especially reminders—periodic prompts/visualizations of their available data. Parental controls have a role to play as well.

Harrington: This shows the importance of context. The kinds of tech Brill is calling for could be incredibly valuable in preventing identity theft, but we’d want them applied in a defined context where the likely harm is identity theft, rather than having them bleed over into fundamental rights. We have not yet found a framework that works for all circumstances.

Questions:

Eric Goldman: Disclosure as a theme—what needs to be disclosed? Harrington suggested that it was silly to disclose that a site was going to use your info to bill you when you buy a plane ticket. But plaintiffs’ lawyers can be pretty tendentious and argue that disclosure was insufficient. Compare the FTC’s recent action against Sears: didn’t properly disclose how much information its program collected. But not everything can be in bold print and up front; only one thing can be up front. How do we decide how to prioritize information?

Harrington: FTC determined that the information being pulled off the consumer’s computer was so far beyond what the consumer could expect that it wasn’t a close call. The whole pitch was that Sears would monitor browsing and provide coupons for the stuff you’re looking at. In fact, in some instances, they were tracking banking activity. Plaintiffs’ lawyers are aggressive, but that can’t stop us. We’re thinking hard about layered notice.

Seltzer: Tools like EFF’s TOSBack service, which keeps an eye on changes to privacy policies, form the germ of harnessing the internet to parse privacy policies for the masses.
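In that spirit, a minimal sketch of a policy watcher in Python: fetch a policy page, diff it against the last saved copy, and report changes. The URL and cache path are placeholders, and this is not how TOSBack itself is implemented:

```python
# Watch one privacy policy for changes by diffing against a cached copy.
import difflib
import pathlib
import urllib.request

def check_policy(url: str, cache: pathlib.Path) -> None:
    new = urllib.request.urlopen(url).read().decode(errors="replace")
    old = cache.read_text() if cache.exists() else ""
    if new != old:
        # Print a unified diff of what changed since the last check.
        for line in difflib.unified_diff(old.splitlines(), new.splitlines(),
                                         lineterm=""):
            print(line)
        cache.write_text(new)

check_policy("https://example.com/privacy", pathlib.Path("privacy_cache.txt"))
```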

Carol Miu, economist: Start with the perspective of a naïve consumer who wants to be able to protect herself but doesn’t really know how. The naïve consumer wants to employ a heuristic, not spend hours reading incomprehensible policies. What can the government do to help me out? Color-coded privacy warnings, like terror alerts? Green means the info is just used for billing and not shared with others; etc. On FB, a popup could warn you when you’re changing your privacy settings to “red.”

Harrington: We’ve talked about that with behavioral ad guidelines—symbols or icons.
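A toy version of the color-coding idea in Python; the practice categories and rules are invented for illustration:

```python
# Map a site's declared data practices to a traffic-light rating.
def privacy_color(practices):
    if "third_party_sharing" in practices or "behavioral_ads" in practices:
        return "red"     # data leaves the first-party relationship
    if set(practices) <= {"billing", "shipping"}:
        return "green"   # used only to complete the transaction
    return "yellow"      # first-party uses beyond the transaction

print(privacy_color({"billing"}))                    # green
print(privacy_color({"billing", "behavioral_ads"}))  # red
```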

Q: Situations where there’s no consent at all: Google Earth allows people to download photographs of people’s houses. Businesses aggregate publicly available information and track your residence, credit history, etc.

Harris: Good question, but it’s unlikely we’ll ban aggregation of information from publicly available sources. Google Earth will pull your picture out based on particular circumstances, but it’s complicated. Do you have an expectation of privacy that your house is not visible to other people? Tech has amplified the number of people who can see your backyard, but this is about magnitude rather than expectation.
