Discussant: Felix Wu
Bomb-making and other informational speech: disclosure of security vulnerabilities. Mixed in with security vulnerabilities is the issue of code as speech. Courts don’t have a great answer on when code is speech; muddied areas put together produce more mud.
Literature on how security works: an iterative process of discovering vulnerabilities in order to fix them.
Notice practices are important.
This info is particularly important when only a few people are capable
of engaging in this kind of work, and their specialized knowledge leads to real
research results.
Proposes weighing four factors: what are the speaker’s own goals (improving security v. enabling criminals to exploit the vulnerability); what are the circumstances of the disclosure (to whom it is being conveyed and under what circumstances: security conferences v. selling on the market to a nonowner of the system); scarcity (is this specialized expert knowledge whose dissemination is beneficial, or known/knowable information that script kiddies could use); and what steps did the speaker take to minimize affirmative harms as opposed to potential benefits (particularly contacting the owner ahead of time or disclosing the vulnerability in a form that is easier to understand than to use as an actual exploit).
Where would this test be applied? To the constitutionality of laws restricting disclosure, and in criminal prosecutions for aiding and abetting criminal activity.
Wu’s thoughts: Not clear operationally how this works in those situations. A narrowly crafted prohibition on sales to foreign gov’ts would likely survive 1A scrutiny, but what if there’s a mix of characteristics? Which factors are more important/crucial? He has a hard time coming up with any example of a sale to a nonowner entity that should ever be permitted.
Is balancing the right way to go? The suggested balancing is very fact-intensive. It reminds him most of the multifactor test for likely confusion (and that’s not a compliment): no one factor is dispositive and it depends on individual facts. Odd to use that as a model for running a 1A analysis. What would bad policy but constitutional law look like: what meets the constitutional floor but would be a bad idea anyway? The reasons for security testing were good reasons, but it wasn’t clear they were 1A reasons (to choose openness over security by obscurity, for example).
Matwyshyn: these are ways to think about intent. Not a balancing test, but not a tally either. It wasn’t clear to me which factor, if any, should be dispositive. Trying to respond to caselaw and the line between speech and commodity.
Sale to a nonowner: she wasn’t willing to take a firm stand on this because of the hot debate in the info security community. Not convinced it should be dispositive. The argument is, for example: say the owner refuses to take an interest in fixing the system, and someone who has written the code and is interested in selling it is willing to sell it to a nonprofit that wants to fix the system. Vulnerability markets are developing; Google will buy vulnerabilities as part of an emerging norm that you can get paid for your work. Worried about foreclosing that reality. Still, any time you take speech out of the public eye, it does become more dangerous.
Q: nontrivial set of cases where owner’s interests aren’t
aligned with public interest. Sale to
gov’t—the intelligence community has a kind of startup incubator that sort of
does this. Sale to the media—a little
more unclear. If there’s going to be disclosure to a party that can force the owner
to fix it, that’s probably not the nonprofit directly. But if a media source
says it will run a story, the owner may act. But what if they publish “troop
movement” analogues in the process?
A: that’s why it’s hard to argue in favor of any one element
being dispositive.
Q: to sustain a market, given economic downturns, you have to have a model that incentivizes risky activities (which carry the risk of DMCA prosecution, for example).
Wu: law banning all sales to nonowners might be bad policy
but constitutional law.
Margot Kaminski: time, place, and manner usually works as a limit on the gov’t, not a limit on what the speaker can do.
A: using that as a reference but not a direct model.
Kaminski: O’Brien: look at whether the gov’t is targeting nonspeech elements. Moving TPM analysis to the speaker’s intent makes it riskier for speakers to speak in certain contexts, though how much isn’t clear. Certain forums might die because of a chill either on the speaker or on the forum. Robert Post: taking it out of the public forum makes killing a private forum ok. But if the issue is autonomy, speakers should be free to choose a forum. Say that I go to a Communist meeting hall, and that hall turns out to have a history of speakers who actionably advocate imminent overthrow of the gov’t. If presence at the hall is a relevant factor, then I may fear speaking there.
A: so the analogy would be that going to 4chan, where you know black hats are, is a factor. She’s ok with the speaker pausing longer before being willing to speak in certain forums. Say the speaker has contacted the owner and warned them and been ignored—is the speech happening in a reasonable place?
Kaminski: Brandenburg: likely to produce unlawful action—and
the place may be relevant. But if that goes into whether it’s protected speech
at all—
A: separating whether it’s speech from whether it’s fully
protected.
Kaminski: understands O’Brien differently. Doesn’t think it’s about dual-purpose speech, but rather about first looking at whether there’s a particularized message and then looking at whether the gov’t is only regulating function rather than the speechy elements.
A: Focusing on different elements of O’Brien. This is not a direct reading, but trying to
extract its essence.
Kaminski: the theme you extract from O’Brien is the part that placed restrictions on the gov’t, and you put that into the evaluation of the speaker’s intent, which she finds troubling.
Risk mitigation idea is cool—the community norm is that you
should engage in risk mitigation (provide notice, don’t do a zero-day
exploit). Does that put unacceptable
friction in the speech process, though?
That is, is that an acceptable burden on the speaker?
A: yes, but worthy of further discussion.
David Goldberg: Phone hacking scandal in the UK. There appears to be some discussion in the tech community that it wasn’t really hacking—reaction?
A: the term, in the US, has changed—hacking used to mean recombining elements in a creative way, v. cracking, which was criminal intrusion. Now the two are blending. She’s trying to pun on the term in her title.
Goldberg: when you get info but the info per se isn’t
published, what is that?
A: that would be intrusion—if used knowingly subsequent to
intrusion, we have a different set of problematic issues.
Bryan Choi: trouble w/the premise that there’s single-use speech—even bomb info has multiple possible uses/purposes.
Helen Nissenbaum: contextual privacy—seems similar, where the context and intent of the sharer matter; the breadth of distribution matters; and building in protections matters to whether privacy protection is justified. Is that a way to preserve appropriate information flows? Another example: anonymity—we have certain instincts about good and bad uses thereof. Patents too: patents promote disclosure in certain ways; if we don’t allow certain patents, does that implicate the same interests?
A: would think of patent as privilege bestowed by gov’t
rather than free speech right.
Choi: but that’s not what’s motivating the ban on human cloning.
A: sure, that’s moralistic, and gov’t has said certain
inventions are too sensitive. Certainly a normative choice about values being
made, but the context embodies different concerns.
Choi: anonymity: do we bar it when we think the person has bad
intentions and allow it with good intentions?
Cases seem to look at intent, but anonymity is always dual-use depending
on what people are using anonymity to do.
A: Prior restraints—limiting access to the speech—are more troublesome than after-the-fact prosecutions for things like using anonymizing technologies. The scope and scale of damage that could happen in the future is on a new level, so we should be prepared.
Kevin Bankston: concerned that the standard systemically disfavors young, inexperienced amateurs who participate in hacker subculture—it assumes access to a reputable public forum; it assumes that a jury will believe that DefCon is a reputable forum (M. says it’s covered by the press, making it public and thus favored) when a prosecutor could show a jury a lot to convince it that DefCon is disreputable; and junior folks can’t get slots at DefCon. Ironically, these young people are the ones who eventually become experienced professionals. Scarce specialized knowledge—again, expert v. novice. Disclosure of a zero-day by an expert v. reuse by a novice. He would distinguish use of an exploit from publication; wouldn’t otherwise punish publication by an amateur.
The standard also favors disclosure of the most damaging speech, to the extent that well-known vulnerabilities are less damaging than unknown zero-day vulnerabilities. It assumes you can meaningfully talk to the company; it also assumes you have a lawyer, because you definitely shouldn’t talk to a company whose program you have a security exploit for without being worried about being sued or having a prosecutor sicced on you. If you’re not willing to work for free, they say you’re extorting them—we saw that at EFF. A lawyer might help you get the boilerplate statements of purpose, whereas a teenager will have more inchoate motivations and be less articulate; s/he may just want to show how awesome s/he is. Ed Felten has a credible track record on his purpose; a teenager won’t. This test will therefore disfavor the population most in need of protection.
Wants a factor to weigh the harm of the speech against its value, though maybe they cancel out—a vulnerability at a nuclear plant is very important to know about but also very risky.
A: Admits the approach isn’t perfect. As to novices, there’s something to be said for creating a structure that encourages junior people to talk to senior people and builds an ethic around care. Not a fan of the 16-year-old trying to be leet and dropping zero-day exploits. Should build the knowledge that they could cause real harm. Good idea to encourage access to EFF; researchers should be represented by counsel. Companies don’t necessarily have great reporting channels. If you have a track record of attempts to report, cooperate, and work with the owner of the vulnerable code, that’s an attempt at mitigation; it creates a record of reasonable conduct that would weigh in favor of protecting the speech even if there wasn’t successful mitigation.
Bankston: but the primary mode of mitigation you recommend would require obtaining counsel before you speak, because otherwise you can get sued or have the FBI set on you.
A: that’s why we need 1A protection. If the company calls the FBI and starts a prosecution, that’s an attempted mitigation that was cut off by the company. (How do you distinguish that from extortion, anyway?) Was the desire to fix the problem or to cause harm? The sensitivity of the information and the likelihood of repurposing make putting a burden on the speaker more acceptable.
Kaminski: it would be helpful to give statutory frameworks that operationalize the intent inquiry.
Bankston: if the mitigation attempt is meant to be a proxy
for intent, recognize that there are legit reasons not to attempt to mitigate
in the way you suggest, given the legal risks you may be taking by attempting
to communicate w/the company.
A: another way might be that you wrote the exploit in a way that shows the vulnerability but isn’t the easiest way to cause harm. There are types of conduct that could logically be viewed as a form of attempted mitigation.
Ashutosh Bhagwat: several of the factors point in both directions: the publicness of the forum increases the risk of harm; specialization of knowledge also increases the risk of harm; how should a court figure out what’s positive? He understands the difficulty of trying to do this, but these factors have strong built-in normative assumptions, such as the reputation of the forum. Whether DefCon or Wikileaks is a reputable forum depends on whether you think information should be free.
A: look at whether the press covers it; whether the gov’t goes there to recruit employees—that demonstrates credibility.
Bhagwat: credibility to whom? That’s highly subjective. Needs a greater defense of your definition of reputation. He understands why you don’t want to measure the value of speech in the abstract, but when you build in assumptions about acceptable uses of knowledge, he’s not sure it’s possible to fully avoid that.
A: scarcity: it drives the value of information in markets. But here, when the info is already in existence out there, republishing increases the likelihood of misuse for criminal purposes. If your speech is critical new info that could improve a system (or harm it!), you take on a greater risk by being the lone wolf who howls. Often there’s only one person who sees the vulnerability. Many researchers do want to do the right thing by coming forward; she wants to create an environment that makes attempted responsibility easier.
RT: how do you distinguish your mitigation from extortion?
A: fact-intensive. Call the company and say “you have a problem, I can help.” If the speaker has a track record of being Ed Felten, that’s more credible. If a presentation has been accepted at a conference—Bankston steps in to say the company will sue to stop the presentation—but then M. says the info will come to light in the court case—Bankston says it would come more fully to light if the presentation had happened. A: if you deal with a company that litigates, then do something else to minimize the possible negative effects of your speech (like what?). Desire to help v. desire to line pockets. (But I’m stuck on the question of why you can’t do both—the “security researchers are being directed to work for free” point is very compelling, it seems to me.)
Piety: in other areas, we often see arguments crafted around knowledge of the law. Bankston’s concern is that maybe some of the most positive work comes from the people least knowledgeable about the law.
A: Even if you fail one prong, you still have three out of four; err on the side of protecting speech.
Christina Mulligan: mitigation and public forum both suffer from heavy reliance on the existing reputation of the individual as such a big factor: being Ed Felten is ok, but sketchy people are two hops from Felten. Unconnected/new people will have trouble.
Bankston: overall concern is that, though you want to err on the side of the speaker, you are starting with what the speaker can do rather than with what the gov't can do, and you seem to create a default rule that vulnerability speech is unprotected unless you follow a rather specific path, which seems unprotective/chilling. So what should Congress do/not do?
A: not a specific model, but encouraging thought about implications/norms of community.