Friday, January 17, 2014

World Without Privacy 4

Session IV: Enough About Me: Why Privacy is About Power, Not Consent or Harm

Lisa M. Austin, Associate Professor, University of Toronto Faculty of Law

Consent is often seen as the heart and soul of models of privacy protection. These regimes regulate personal information (information about you), not merely private information. A very large amount of info is potentially regulated by these kinds of regimes, and the dominant idea is that it's up to you to figure out how much you want to share.  If you think privacy is complex, contextual, varying among people, this looks incredibly promising because we don't predefine what "private" is.  These are "self-determination" or self-management models.

We should be very skeptical of these models.  Standard problems: people can't read the policies, and don't read them. But her problems are deeper and more structural. Cautionary tales from Canada; pay attention, because one might think Canadian privacy law is comprehensive and strong.  Our legislation covers the public sector and increasingly the private sector as well; privacy commissioners are dedicated regulators; it's easy to make complaints; the law is not just procedural but substantive: even with consent, you can only collect info for reasonable purposes.  It's situated in a broad constitutional framework including a right to privacy, which rejects the third-party doctrine. The reasonable expectation of privacy standard is normative, not descriptive.

Problems: First, these models were designed to regulate the relationship between an individual and a service, with a certain model of what is fair and appropriate in that info flow. But internet companies are intermediaries: they mediate multiple relationships, including interpersonal ones, serving as platforms for interaction that are deeply bound up with their own business practices.  Second, they collect so much information about us that they've become treasure troves for state agencies.  Both things affect the structure of privacy.

If you see the benefit of consent-based models as individual self-management, that reflects a very individual, subjective view of privacy. But implemented as law, the model runs up against competing interests.  It can't just be up to the individual to do what's best for her: law enforcement and national security push back, as well as business interests. No matter how the model is implemented, you find various places where that balancing is in play (proportionality analysis, objective standards). The pressure is to move away from individual self-determination.

Also, the people who have to obtain consent are now under obligations: they need to figure out what form of consent is required. If the collection involves really private information, maybe it has to be opt-in; if less private, maybe implied consent works. Once that categorization is occurring, we're not dealing with individualized subjective preferences any more. There's a retreat to ideas of privacy as sensitive information, narrowing the scope of what we're protecting, and a retreat to accounts of social/reasonable expectations of privacy that are very different from the constitutional version: social norms instead of normatively reasonable expectations. Both work to undercut the promise of this type of privacy law.

A recent privacy decision about FB: should you be able to opt out of targeted ads?  Initially, targeted marketing was considered a secondary use, requiring separate consent.  If that's true in the retail context, why not in the FB context?  Answer: no!  The Commissioner accepted that FB's business model was based on ad revenue, and business interests had to be balanced with privacy.

Defaults make a huge difference. The charge made was that the defaults should be more restrictive than the ones FB sets, which would then force people to think harder about their privacy settings.  The Privacy Commissioner rejected this too.  Why? The Commissioner said people join FB to share information. The idea that the information can't be deeply sensitive because you're there to share it reflects privacy as secrecy, as sensitive information.  The decision was also influenced by the expectations of FB users that this was reasonable within the community, with no discussion of how this was a shift from general cultural expectations, formed independently of something like FB, to a descriptive account of the expectations of FB users within an architecture that was formulated based on the business model. Intermediaries participate in shaping social norms but are not noticed to be doing so.

Multiple courts have upheld warrantless access to subscriber info. These cases are usually about child porn, which makes courts reluctant to touch this. But the courts also say that the legislation creates carveouts so that ISPs don't violate the statute when sharing information with law enforcement without consent. One could say, re: the carveouts, that it's not up to the ISP to vet the credentials of law enforcement or whether they need a warrant; there could still be a separate inquiry into whether a warrant was required.  The state might still need a warrant under Canadian law because the third-party doctrine isn't part of Canadian law. But instead, courts say that because the legislation permits sharing, you have a diminished expectation of privacy. And the contracts with subscribers have a vague clause, buried deep, saying that they can share info with law enforcement, further diminishing the reasonable expectation of privacy. Terms imposed by a company in a standard form contract now alter the relationship between individual and state, and the courts keep saying this.

To think that consent mediates all this is mistaken, not just because of the familiar problems with reading the forms, but because consent as a concept facilitates broad collection of info and easy access by law enforcement in a disturbing way.

What to do? As an analytic framework, privacy is no longer that helpful; it has lost analytic rigor.  There are two lessons from privacy's roots in trespass that are underappreciated and that could be reappropriated/revised. (1) The "power to" view: trespass is not an injury-based tort.  Always looking for the harm of privacy violations leads to trouble, because the harm is so diffuse and gets balanced against very pressing concerns.  When we can't find specific harms, we discount privacy.  Property law is often not about protecting people from injury but about giving people powers to do what they couldn't otherwise do, e.g., transfer property after death.  Law is facilitative, not just a set of obligations and remedies for injuries.

What's the legal architecture we need to facilitate privacy? What do we think privacy norms allow us to do?  Consider audience norms of tact: pretending not to notice something that we do in fact notice.  That could help us think about FB's obligations beyond securing consent; it too could be required to exercise tact.

(2) Public context: early search and seizure cases are trespass cases.  Some accounts of the trajectory of privacy say that we went from protecting property to protecting more.  But the early trespass cases weren't merely upholding private property; they were about concerns over arbitrary exercise of state authority and the rule of law. An explicit focus on the rule of law can help shift attention from the individual (consenting or not) to the surveilling party: who's exercising power, and how can we constrain it so it's exercised in an accountable and transparent manner? That's the central question, not a sideshow. It helps explain what's wrong with the standard form contracts nobody reads. We each have roles in holding each other and the state accountable.

Earlier privacy debates spent a lot of time on practical obscurity: records about you held in a paper file cabinet are different because they're harder to access and to link to other information.  Paperless = loss of practical obscurity.  (Pseudonymity is one way of restoring that, at least as against other individuals!) A similar phenomenon is going on with law enforcement. Police require the cooperation of the community for so much; you need trust. We all know what happens when a community stops trusting the police.  The RCMP built trust with a community and got tips that led to the apprehension of people who had planned a terrorist attack.  That requires responsible action by law enforcement.  We all exercise judgment about when to act (call in the police) or not, and this plays a role in the practical constraints on police action. When info is no longer held in the community, but by an intermediary with different rules about sharing, we need to think about what accountability looks like.

There's growing scholarship on the rule of law: it's not just about constraining state authority, but also about constraining private actors.  Is the rule of law consistent with the administrative state? Information law is another major shift in the nature of the state, and we need more than privacy to think about it.  Consider other legal vocabularies about the nature of law.

Moderator: Tanya Cooper, Assistant Professor of Clinical Legal Instruction and Director of Domestic Violence Law Clinic, The University of Alabama

The poor have no power over their privacy. We see this in family law, disproportionately affecting racial minorities. Example: child welfare/dependency courts where children are adjudicated abused or neglected. Families are routinely invaded by state actors/agencies, lawyers, and judges. Their information is collected and used against them. There's no meaningful consent or ability to opt out, because the countervailing interest is the protection of children. The irony: these courts are closed to the public. They use privacy as an umbrella term for confidentiality, which masks a wide array of due process abuses, and families are routinely separated forever.  So she likes thinking not about consent but about power.  What to do?  How would we apply your concepts about power?

Austin: one possibility is to disclose facts without personal details. There's a program in Canada to train judges to voluntarily redact information in their judgments, because they throw in all sorts of unnecessary details.  Get them to reflect on what they can and should put in the judgment as explanation for the reasoning versus what should be left out, like a house address.

Sarat: was there a golden age for consent, when it was meaningful?  He likes the idea of obligations imposed on intermediaries; but empirically, on what basis would you believe that society as a whole would endorse this view of affirmative obligations?  The way in which US citizens think about these problems is almost entirely about consent, even when the consent wasn't meaningful.

A: there was no golden age.  She looked at Privacy Commissioner findings about consent over a number of years.  The striking result: every example was basically resolved on the logic of business interest. Reasonable purpose got interpreted as reasonable business interest, and then consent was implied; consent didn't do any work.  We need more normative discussion for the law to have any bite.

Sarat: if I were a consent advocate, I'd say we need to get away from implied consent.

A: has philosophical concerns.  What does it mean to consent to give up privacy, if privacy is about consent?  What are you consenting to give up?  There must still be some independent concept of what it is that you’re giving up when you consent.

In Canadian human rights law, we impose affirmative obligations to reasonably accommodate religious views, disabilities, etc., up to the point of undue hardship.  That's completely accepted now, with lots of debate over what reasonable accommodation and hardship mean, but the basic principle is grounded; people accept the ethical settlement.  It's unthinkable to build a new building that isn't wheelchair accessible, or to build curbs without curb cuts.  Positive obligations around privacy could also be built into infrastructure: a building code for privacy.

Sarat: draw on analogies in American law to the obligations that prisons have to prisoners: the social relationships they create, based on dependency, remove the effective capacity to opt out.
