Session 2 - Privacy and Technology
Discussion Leaders: Ryan Calo, Aaron Perzanowski, Woody Hartzog,
Danielle Citron
Matwyshyn: advertisers attempt to create commonality/familiarity w/the consumer, a good feeling. That can also be part of having ads function as artistic expression.
Ryan Calo: important role of information in figuring out how
to deceive well. Deception is a goal that someone might have, but it often
fails. The more you know about a person, the better you are able to lie to
them. That’s an important reason to
guard information about yourself. If you can monitor a person very closely,
including their “honest signals,” you can tell when a person is being deceptive
even in the absence of being able to check independently. There are increasingly many ways to detect deception, some of which feel awfully invasive. [Here his definition of deception requires belief.]
Said: differences b/t influencing, persuading, deceiving
someone. Manipulation might be in
between persuading and deceiving.
Puffery: when does info take advantage of foibles, like wanting to believe we're enjoying NY's best coffee? When is something just vague and unverifiable, and what happens when an advertiser knows we will fall for a puff?
Klass: do you mean believe or do you mean change your choice
when you say “fall for”?
McGeveran: points out the small-l liberal supposition that there is a true self making pure choices. If that's not true, then we have a problem.
Klass: if the law feels comfortable marking out statements
that cause false beliefs as illegal, that presupposes a highly cognitive
liberal subject. That’s not necessarily bad but it has a lot of assumptions
baked in.
Citron: if you don’t know the info the salesperson has about
you, you may be deceived into thinking that you’re dealing on more equal terms
w/her than you are. Adding privacy to
deception gives us greater purchase on what’s troubling/deceptive. The disguised expert who knows how to
schmooze you. [But would disclosure fix anything?]
Lipton: Someone who pretends to like the same movies &
sports you do is setting up an affinity fraud. [Hee!] At least I understand that Amazon is tracking
my search history.
McGeveran: good salespeople create affinity in the store w/o
prior information; they just chat you up. The background information doesn’t
make it any more deceptive, if it’s deceptive.
Calo: machine learning is more powerful; can leverage social
cues with bots. There are limits on what
people can do in the moment; designing the interaction is more powerful and you
can compare it to what other people are doing.
McKenna: if what you’re talking about works, then it can be
people giving you information consistent w/what your true self wants. And yet
it feels yucky: the manipulation isn’t tied to changing your decision from what
you’d otherwise make, it’s something else.
Hartzog: authenticity fraud: you don’t understand that there’s
a mechanism hidden behind the interaction by which this stuff is delivered to
you.
Calo: the Truman Show: if you didn’t know you were on a reality
show, you’d be under a deep deception about what was happening and who was
watching even if you weren’t manipulated or making decisions you wouldn’t
otherwise make.
McKenna: that’s a different understanding of deception/a different
set of harms.
Silbey: social trust: salesperson is talking to you under background
assumptions.
Citron: is this more dignitary?
McGeveran: if the interaction w/the digital all-knowing
salesperson is wrongful, is it still wrongful if you don’t ultimately buy
anything?
[from several] Depends on what the harm you fear is.
Klass: Salesperson treats us as means not ends, but we have
a background understanding about that.
Violating the background: whenever you cheat/don’t play by the rules,
there’s deception. The wrong is not the deception that comes w/cheating, but
rather that you’re not playing by the rules you’ve agreed on; the deception is
only needed so you get away with it. The wrong is that others think you’re
playing by the same rules. This is Sam
Buell’s Badges of Guilt: how can we tell whether someone’s violating social
norms? When they have to hide what they’re doing.
Silbey: but when doctors hide information from you, is that
a violation of social norms?
Klass: not all hiding is violation, but when you’re
cheating, you will have to hide. So if
they know you’re a Cubs fan and send a real Cubs fan out to work with you, that
is manipulative; the role that deception plays here is that the concealment of
why they chose her is a signal that they’re violating the rules of the game. [But is it?]
McKenna: suppose the salesperson isn’t really a Cubs fan but
pretends to be. Where’s the harm?
Matwyshyn: phatic communication—communication is substantive
info transfer + phatic communication, which is relationship-building; creates friends
and colleagues. Cubs fan matching may not be bad, just a communication-building measure. The non-apparent assistance from tech is where the privacy problem arises. When you walk in with a Cubs shirt, you're projecting your poor taste in baseball teams. But if you walk in w/your cellphone and they mined your fandom from your phone, the loss of info control going into the creation of the phatic bond is jarring.
McGeveran: Calo's article is about power imbalances; deception is subsidiary, if important at all. Whether people know that this info is held by the other party or not, and how much detail they have, are all subsidiary to the main problem of extra leverage given to people who already have too much power. Deception is far down the list of problems, and not very persuasive.
Gajda: how about dignity?
Said: Quiz Show scandal: game shows rigged by sponsors so
winners favor sponsors in particular ways. American Idol: judges drink drinks
provided by sponsors; waiting room is painted Coca-Cola red to create positive
associations. That seems to be different from Quiz Show, but it’s also phatic,
and also disturbing to many people, even though it may not be a cognizable
harm. Is that parallel to privacy, where hidden decisions are being made?
McKenna: privacy’s concern is w/interpersonal harm from the
deception/interaction, not from subsequent actions/harms. More dignitary than
consequential.
Silbey: but background assumptions about rules of game may
also break down. So that’s about consequences.
Klass: in Minority Report,
when the Gap ad scans the eyeball, there’s no deception, just information
collection and use.
RT: there is deception! Tom Cruise isn't Mr. Yakamoto! And that's not just a joke: it's important that deception here appears as a privacy strategy.
Hartzog: Deception as weapon of the weak. You have a certain amount of power over your personal info: other people want it b/c they don't have it, and you can use that as a weapon.
Klass: it’s great if you enter into an illegal contract you
don’t intend to perform—disrupting trust among thieves is a good, not a bad.
Hartzog: delay, disrupt, disperse.
Matwyshyn: Surveillance by a city used the Google Maps logo to create a false sense of security about who's doing the surveillance. Goes to the double-edged sword of deception; police can lie all the time. What's permissible deception? Does interference w/TM interests matter any more than other deception? Political activists use deception all the time to disrupt control over information.
Lipton: if the army pretends to be journalists, journalists aren't safe; the military can't use red crosses on military vehicles b/c it's destructive to the larger enterprise.
McKenna: in a perfect world your lie would go
undetected. [Though the market for
lemons means that even if undetected it might fail, along with the truth.]
Said: whose perspective are we adopting when talking about
consumer/subject interests? If we take
individual preferences, we need to know something about those, but from a more
paternalistic/value-driven view we might not.
Silbey: one of the productive comparisons b/t Anita Allen's work and others' was that Allen discussed harm to individuals v. harm to systems or organizations. Privacy harms need to be identified as structural/social v. individual.
Citron: of course it’s both.
Klass: more one than the other.
Lipton: Disclosing an invasion may mitigate the harm to the
individual but creates the harm of people feeling invaded.
McGeveran: deception as an interface presenting itself as neutral when it's really not neutral: Google search results, FB news feed, etc. etc. That sharpens the problem of
backdrop assumptions and what they communicate to you. To what extent do people
approach tech interactions differently from interpersonal, and which assumptions
are we willing to honor? Intuitions will differ in new spaces. McGeveran doesn’t mind the FB emotions
experiment b/c he has a set of assumptions about the news feed (it’s always
already curated).
Lipton: Craswell’s
cost-benefit analysis is what tells you what’s deceptive in the first
place. Show the alternative disclosure that would have made it less
deceptive.
[RT: Lipton’s point v. McGeveran & Hartzog’s: what is the
alternative to having FB control? Very hard to think through what the
difference might be if users had “more control.” Evgeny Morozov might have some
ideas.]
Hartzog: in mediated environments online, there's one entity in charge of the experience, so there's more opportunity for wrongful control. Images of little padlocks are everywhere; what do they mean? They signal and impose/relieve transaction costs, whether through symbols or signs.
Lipton: you think it’s a sign but it’s not, is the problem.
Hartzog: sometimes the lock signals safety (https) and
sometimes it’s privacy settings (notoriously bad). It’s a bait/invitation. Ambiguity in design:
designers can use that to their benefit, and they know people won’t investigate
even if there is a full explanation somewhere.
Said: could trustmarks do work online? These things do catch consumers’ eyes.
Klass: formal definitions from government, like “organic.” We do give certain signals fixed legal
meaning.
Lipton: then companies lobby to change it, and also people
evade it.
Klass: nonsophisticates don’t understand the law.
Lipton: what happens when everything is disclosed? The person being watched now wants to create
defensive deception. Teenagers and people in China use codes to talk in front
of other people. Disclosure of one inspires deception on another side.
Matwyshyn: one person’s deception is another person’s
safety.
McGeveran: privacy as set of norms eventually legally
enforced. You have to have a policy; the next step is to hold you to the
statements you make in your privacy policy and then say any departure from the policy
is deceptive. Yet we know that end users do not read such policies.
McKenna: just a baseline-setting exercise. The FTC becoming the regulator in privacy,
using deceptiveness to do it, was our starting point. But is there any real
deceptiveness there?
Hartzog: FTC is trying to have it both ways. Fissure that must ultimately come out. A line
of FTC cases say that consumer expectations are the key; it doesn’t matter what
you disclose in the fine print. Sears
case: can’t disclose spyware in fine print. On the other hand, FTC says that if
you lie in the privacy policy you are deceiving people.
Silbey: two different values: protecting consumer expectations, and then the benchmark value, which is different: we care about you standing by your words.
RT: and if the FTC had statutory authority to set benchmarks
that would be ok.
Hartzog: the fine print stuff is also unfairness, for which they do have statutory authority.
Klass: two audiences: many people don’t read, but a few
people do and will be norm entrepreneurs.
RT: but then that argument should apply to all ToS/fine
print issues; those silly FB memes about “giving up your ©” show that.
Hartzog: David Hoffman just published a paper w/empirical work on what people think about enforceability. Generational divide: older people assume ToS don't apply, but younger people think ToS are enforceable but will never be applied to them.
Klass: 10 years ago the shrinkwrap cases, pay now/terms
later, were very offensive to my students and now they’re totally ok.
McGeveran: regulatory shift to looking at interface issues
that deal w/implications about security, e.g., Snapchat. FTC is consciously
picking cases and moving internal jurisprudence away from boilerplate.
Citron: Google spoofed browsers to turn off no-tracking settings. FTC brought a case against Google w/a thin theory: you promised not to track users who had tracking turned off. State AGs said this was inherently deceptive even w/o a promise to respect people's privacy.
McGeveran: that’s unfair not deceptive.
Eric Goldman: Audience heterogeneity means a lot. The privacy discussion turns on consumer expectations, but those don't mean a single thing to the wide range of consumers, based on their particular community/background. Information truthful to some may not be heard by some and may be deceptive as to others. When we shift from face-to-face to mass audiences, we have to account for heterogeneity.
Klass: but law is not exogenous. False advertising law: FTC’s reasonable
basis/substantiation rule. Per se implied representation that you have a
reasonable basis for your factual claims. That’s not based on empirical
evidence of how consumers read ads, but saying we want a marketplace where, if
you make those claims, you have evidence for them. Maybe state AGs are making the same move
w/r/t certain kinds of privacy activities.
Advertisers impliedly represent that they aren’t changing your privacy
settings unless they say explicitly that they are.
Matwyshyn: Design flaw that happens all the time in products:
Sears: though there was language about spyware buried in the terms, there wasn’t
even an opportunity to read the terms until the end of signup.
Goldman: probably some consumers did read to the end and
weren’t deceived. [In this particular
case, I’m not sure that’s true.] You have to decide whether you’re going to
protect the subset of deceived consumers.
McKenna: how is the default rule about deception? It doesn’t have a meaningful existence
outside the rule.
Klass: Disagree. First question is: what was said. Second:
is it true or false? We typically answer the first question by asking what a
reasonable person would have understood.
We could say, as a matter of law, that the default representation is X,
allowing people to opt out if it’s not true.
McKenna: if not based on what people actually receive, you
can’t show reliance and harm. The Sears
case proceeded from the assumption that people don’t know or read the privacy
policies.
Perzanowski: but we might be so confident about the answer
to the empirical Q through repeated experience that we can define the default,
just like some matters are per se material.
McKenna: that’s different from what Klass was saying. That’s a good justification for the
reasonable basis substantiation requirement.
Klass was saying something different.
RT: quite often consumers may not have formed any assumption at all about the privacy policy. One way to cash that out is that they haven't thought about it because they presume that the policy is acceptable. And if they knew that the program would turn their camera on surreptitiously, they'd definitely care, so there is a material omission.
Lipton: if you spoof the computer (as Google did), have you deceived? Have I engaged in insider trading if I broke
in? If I fooled the computer into giving
access, then yes, the Second Circuit said, there’s deception. But if I just broke it open w/a hammer, no.
McGeveran: you can only find omission by having an understanding about what information you were owed. Are these empirical definitions or information-forcing legal rules, as Klass would say? Moreover, the situation changes dynamically: it's extremely difficult to have a stable understanding of what the assumptions are; dangerous to use deception reasoning to get at them.
Said: zoom out to larger q: aims of deception law. Market governance, shaping corporate
behavior?
McKenna: you could start from consumer protection and seek information-forcing measures out of a broader consumer protection goal to improve the environment in the long run.
Klass: formal rules in securities law: structured to create a certain info environment; individual harms are much less important, unlike the corrective justice/common law tradition.
RT: you can’t really tell the difference b/t empirical
definitions and information forcing legal rules, in part b/c of the issue
w/things like “organic.” I don’t fully understand the definition, but I know
there is one, so I can act w/relative confidence in the market and deception is
possible w/r/t “organic.”
Said: we ought to disaggregate b/c of the heterogeneity problem, which we take more seriously if we start w/the consumer. Sophisticated investors v. nonsophisticated investors.