Alessandro Acquisti: He’s been researching user privacy preferences on Facebook. Read his position paper – people will disclose potentially embarrassing information in fairly open spaces, including photos of themselves drunk and behaving badly. Why?
The very people who post pictures of themselves in their underwear may object to Facebook’s new contextual ads. There are contrasting, coexisting needs for privacy and for fame; even bad publicity sometimes satisfies. Also, control – bad information is not so embarrassing if we are the ones who hand it out. In economics, this is a matter of signaling. It may be reputation-enhancing in some circles but reputation-diminishing in others.
Studies on willingness to pay v. willingness to accept: people assign different values to their personal information depending on whether they’re focusing on protecting it or revealing it. You can get different decisions by offering people an anonymous $10 gift card or an identified $12 card, thanks to the endowment effect and the framing of the options.
Danielle Citron: Malevolent crowds are denying women in particular the benefits of their online reputations, and the online environment accelerates the problems by removing some of the traditional checks on harassing behavior. (Read her position paper for discussion of the social psychology literature on how anonymity in crowds liberates people to express aggression.) Individuals who write under female pseudonyms are 25 times more likely to receive sexually threatening comments than individuals who write under male pseudonyms.
How to defuse dangerous group behavior? (1) Fear of getting caught, especially when group leaders reinforce that fear; (2) perception of need for victims; (3) difficulty organizing. Online, the accelerants are present, but none of these inhibiting factors are. This dynamic is not self-correcting, as some things might be (as elaborated by Benkler).
William McGeveran: The privacy panel is often the “sky is falling” panel. We want to make sure that the internet is a space that everyone can take advantage of; if privacy concerns interfere with that, that’s a problem. Here, he wants to talk about reputational piggybacking.
Web 1.0 was devoted to profiling users to find better ways to address them directly: to sell to them. Data was being collected massively, but basically for the collector to use in its dealings with the collectee. The 2.0 shift is towards selling not just attention but endorsement. Entities want to use your reputation to promote their offering to others, not just to you. Facebook, of course. Netflix; others who try to sell either to others in your network or others who “look like you” in some way. (This is the flipside of my presentation, which is more about the perspective of the owners of the properties being promoted or disparaged in this process.)
The piggybackers have incentives to get you to transmit your preferences as widely and as often as possible. Of course there’s a tragedy of the commons problem – if everyone piggybacks, it loses effectiveness. The piggybackers also want to do this seamlessly – so they go for opt-out solutions, and create problems of consent and privacy. (And here he makes me think about manufacturing consent – when we talk privacy, we usually talk about consent as something manifested privately between contracting parties. But now our consent has been used in a new way, to create public opinion. Not just manufactured, but advertised.)
Daniel Solove: People are writing about their private lives (and others’ private lives) – the blogosphere is mostly personal, only a smaller part political. This creates persistent records, so that people can’t escape a moment that has come to live in infamy. And a lot of the people creating these potentially embarrassing records are young. It should not be impossible to escape the shackles of our past.
The law can’t do everything, but can operate as an incentive to shape norms and to push people to work problems out informally. Privacy torts exist, but are weak and insufficient incentive for online responsibility. We need a viable threat of lawsuits to create better incentives.
Traditional notions of privacy have been binary: public/private. If information is somehow revealed to others, it’s no longer private. But that’s not really true: information is usually somewhere in the middle, as are places. Buying medication in a drugstore: that’s a public place; we still don’t expect high-res photos of our names and prescriptions to be posted online. If we don’t do something to protect that middle ground, it will be lost.
In England, there is a tort of confidentiality (ours is much more limited). If an ex-spouse or ex-friend reveals your secrets, they can be liable.
Control: Dr. Laura Schlessinger’s nude photos, taken by a lover decades ago – she wanted them gone, but was unable to control their dissemination. But the voyeur website that posted them successfully asserted rights against republication – copyright was stronger than privacy! It is possible for the law to give rights to control that are stronger than privacy currently is (though he thinks copyright goes too far).
Jonathan Zittrain: Various horror stories of privacy and persistent reputation. Qatar shares one IP address, and one bad apple got that IP address banned from Wikipedia for a while! These automated/semiautomated systems for determining reputation have powerful effects; Cyworld, popular in South Korea, has nontransparent rating systems that effectively train people to behave.
Zittrain finds it fascinating that search engines generally respect robots.txt. This is a norm that addresses a potential problem, and it works pretty well through voluntary compliance. It would be great to have more advanced tagging – “I don’t want this widely distributed” or something like that. A gentle opt-out – this might be embarrassing if it got passed around widely – could be a helpful extension of the robots.txt success.
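To make the robots.txt analogy concrete: the existing convention is just a plain-text file of requests that well-behaved crawlers honor voluntarily. The extended tag Zittrain imagines might look like an extra directive alongside the real ones – the Distribution-Preference line below is invented for illustration and is not part of any actual standard:

```
# Real robots.txt syntax: compliant crawlers honor these requests
User-agent: *
Disallow: /photos/private/

# Hypothetical "gentle opt-out" in the spirit of Zittrain's suggestion
# (not part of the robots exclusion standard):
# Distribution-Preference: no-wide-redistribution
```

Like robots.txt itself, such a tag would bind no one; it would work, if at all, the way robots.txt works – through voluntary compliance by the major players.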
He also likes the idea of reputational bankruptcy, or at least something like eBay’s system in which more recent data counts for more in a reputation. IP addresses have reputations that attach for far too long, because they shift: we should be able to flush the cache.
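A minimal sketch of what recency-weighting could look like, in Python – the half-life decay here is my own stand-in, not eBay’s actual (unpublished) formula:

```python
import time

HALF_LIFE_DAYS = 90  # assumed decay rate, purely illustrative

def reputation(feedback, now=None):
    """Time-decayed score: each rating's weight halves every
    HALF_LIFE_DAYS, so old data fades instead of attaching forever."""
    now = now or time.time()
    score = 0.0
    for rating, timestamp in feedback:  # rating is +1 or -1
        age_days = (now - timestamp) / 86400
        score += rating * 0.5 ** (age_days / HALF_LIFE_DAYS)
    return score

# A five-year-old complaint barely registers next to last week's praise:
old_gripe = (-1, time.time() - 5 * 365 * 86400)
recent_praise = (+1, time.time() - 7 * 86400)
print(reputation([old_gripe, recent_praise]))  # roughly +0.95
```

Reputational bankruptcy would be the blunter version – zeroing the record outright – and the same decay logic applied to IP-address blocklists would be one way to “flush the cache.”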
Web 2.0 can mean user-generated content, or it can mean a shift of computing cycles from the desktop to the web. The aggregators who do that create new points of control that could be salutary for privacy/control: Google Maps reserves the right to stop your mashups of maps for any reason. Facebook can control how Facebook Apps use sensitive personal data – it’s extra control, but possibly useful. (As long as Google and Facebook behave as we the public want them to!) Google now allows subjects of news stories to add comments – this is a promising development.
Question: some of these things require technical ability. Does this increase the digital divide? Also (perhaps contradictorily) are there architectural techniques for improving norms, e.g. for anti-harassment?
Citron: Section 230 fosters irresponsibility and discourages the development of such tools.
McGeveran: Sometimes technical and industry customs can help, though not in the situations Citron is talking about.
(But isn’t that part of her point? That is, the differential impact of various technical innovations gets played out on real human beings. Robots.txt is awesome for Yale and the New York Times. Less so for people whose concerns are harassment. Reference to robots.txt is sort of like looking for your keys under the lamppost because that’s where the light is, unless we can brainstorm other ways to use it that aren’t obviously useless (“no-harass.txt”).)
ETA: McGeveran thinks that softer solutions, like asking nicely for material to be taken down, can work in certain circumstances, especially if you can ask an intermediary. Formalized procedures for doing so might help.
Solove: We can’t have lawsuits over everything anyone says. But law can help shape norms. Maybe a limited cutback on 230 would be to remove immunity from sites that abuse 230 by being set up precisely to take advantage of it and soliciting gossip. (Wow, do I not like that idea. “Abusing” an immunity – that would get litigated every time.)
People are sometimes nasty. The best way to protect against that is to keep your information away from them.
Zittrain: Some of the viral nature of this is people, not tech. People like watching the Star Wars Kid. Mashup culture makes these practices more acceptable (not clear to me whether he was speaking normatively or descriptively) – data is data, anything is input. An ability to remind people of the human dimension, to hear from the people who are in the photos or took the photos, would be really helpful. Maybe the viral takeup of Star Wars Kid wouldn’t have happened if the Star Wars Kid had had an ability to attach a statement about what he wanted early on in the process. (A special kind of digital attribution right?) That’s the utility of a CC license: it expresses preferences that travel with the digital content.
But such preference statements can’t stop the bad apples, like the people who harassed Kathy Sierra. (I’m interested in this statement – implicit seems to be that a reminder of her humanity wouldn’t have mattered – and I agree, though it might have mattered early on to some of the jerks who piled on her. The reason this statement troubles me, though, is that it seems at least consistent with a “men will be men” attitude – we can think about how to help Star Wars Kid, but Kathy Sierra is just out of luck? Not all problems can be solved by technology, but I don’t think we ought to accept that internet affordances can’t work even on bigots.)
Zittrain continues, in response to a question: Something just as basic as “ask me first” might have a shot at shaping norms.
3 comments:
Thank you for posting this summary of what looks like a fascinating discussion. Several of the solutions panelists proposed to what might seem to be intractable problems are indeed troubling. Solove’s suggestion to cut back § 230 immunity made me raise an eyebrow too. So did Zittrain’s proposal to put attribution rights on personal information, like a CC license, a viewpoint that seems in keeping with Lessig’s stance (as I understand it) of conceiving of privacy rights as property rights that can be negotiated in contracts.
I wish Pam Samuelson had been on this panel, as she offers a good critique of applying a property rights system to information privacy, primarily because making personal information alienable is problematic. Limiting transferability would be a central concern. A CC-like license might take care of that, at least so long as the license doesn't become detached from the information (e.g. through a jump from online to offline), but it doesn't solve the deeper concern that commodifying personal data offends the notion of information privacy as a civil liberty. Recasting privacy as property, nothing more, is a major shift and is obviously not lossless. I suppose it's analogous to reconceiving of mind/soul as just meat, which isn't a radical idea anymore, but "propertizing" privacy troubles me (and people far more thoughtful than me, such as Samuelson) all the same.
I was one of the speakers at this seminar. Not only did Rebecca's presentation itself rock, she had this very professional summary of panels 1 and 2 up within hours. Amazing!
One clarification about panel 1: one of the audience responses (in fact by me) to the question on positive impacts was that Google/Yahoo etc. are excellent examples of very useful reputation systems.
Very interesting discussion. It made me remember an article I read a while ago (http://ksgnotes1.harvard.edu/Research/wpaper.nsf/rwp/RWP07-022/$File/rwp_07_022_mayer-schoenberger.pdf).
It proposes something that might provide a (partial) solution for some of the things discussed here. Basically the proposal is to create “electronic forgetting” of data. Data will not automatically be stored forever; instead, some combination of policy and technology makes it disappear after a certain time. Users can set policy, and have a say in how long their data will be kept.
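To make the idea concrete, a toy sketch in Python (the class name and the default retention period here are invented for illustration):

```python
import time

class ForgetfulStore:
    """Each item carries a user-chosen expiration date; expired items
    become unreadable and are purged rather than persisting forever."""

    def __init__(self):
        self._items = {}  # key -> (value, expires_at)

    def put(self, key, value, keep_for_days=365):
        # The user, not the collector, decides how long the data lives.
        self._items[key] = (value, time.time() + keep_for_days * 86400)

    def get(self, key):
        value, expires_at = self._items.get(key, (None, 0.0))
        if time.time() >= expires_at:
            self._items.pop(key, None)  # forget on touch
            return None
        return value
```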