Library of Congress DMCA exemption hearings
Proposed Class 25: Software – security research
This proposed class would allow researchers to circumvent
access controls in relation to computer programs, databases, and devices for
purposes of good-faith testing, identifying, disclosing, and fixing of
malfunctions, security flaws, or vulnerabilities.
Copyright Office: Jacqueline Charlesworth
Michelle Choe
Regan Smith (main questioner)
Cy Donnelly
Steve Ruhe
John Riley
Stacy Cheney (NTIA)
Charlesworth: goal is to clarify the record and home in on areas of controversy rather than restating written comments. Interested in refining/defining broad classes in relation to the support in the record. There is a court reporter (so my report is far from definitive!). (Also note that I am not good at ID’ing people.)
Proponents:
Matthew Green, Information Security Institute, Department of
Computer Science, Johns Hopkins University
Research in area of computer security and applied
cryptography. Risks posed by DMCA to
legitimate security research: discovered serious vulnerabilities in a computer
chip used to operate one of the largest wireless payments systems and a widely used automotive security system. Naïve: didn’t know what to expect when we notified the manufacturer, but believed it would involve discussion and perhaps the repairs and mitigations we developed. That’s not what happened. Instead, a great deal of resistance from the chip manufacturer, and an active effort to get us to suppress our research and not publish the vulnerabilities. Instead of repairing the system, the mfgr spent considerable resources to stop us from publishing, including raising the specter of an expensive lawsuit based on 1201. A small component of the work was reverse engineering software and bypassing an extraordinarily simple TPM.
1201 was never intended to prevent security researchers from publishing, but it’s
hard to argue merits/intent of law when you’re a penniless grad student.
Charlesworth: why isn’t 1201(j) enough?
A: My understanding is that there’s the bypass issue and the trafficking issue. Both are potentially an issue depending on what it means to traffic. Bypassing the TPM was raised to us at the time.
Blake Reid, Samuelson-Glushko Technology Law & Policy
Clinic at Colorado Law: There are existing exemptions in (j), (g), and (f) for research/reverse engineering, but as we detailed in comments, there are shortcomings in each. (j) fails to provide the up-front certainty needed for an exemption because, e.g., it’s got a multifactor test that depends on things like how the info was used and whether the info was used/maintained in a manner that doesn’t facilitate infringement. We might well try an argument for applying the exemptions if, god forbid, he were sued, but as we’ve asked the Office before, we want further up-front clarity for good faith security testing/research. That was the basis of the 2006 and 2010 exemptions and we hope for them again.
Green: my incident was 2004.
Charlesworth: would the activities described fall into one
of the exemptions?
Reid: Don’t want to opine—again, if we were in court I’d absolutely argue they were covered, there was no ©’d work, etc., but if advising Prof. Green beforehand, hypothetically, there would be reason to be nervous b/c of the ambiguous provisions of the law. It’s the issue of certainty.
Green: we were advised at the time by the EFF, pro bono. We were told they could provide no guarantee that any of the exemptions would protect us if we were sued. They didn’t say we were violating the law, but the complexities of the exemptions were such that they provided no guarantee.
Charlesworth: did you know that before or after?
Green: before, during, after.
Charlesworth: you sought legal advice before?
Green: yes, my prof had a similar experience.
Charlesworth: but you proceeded anyway.
Green: yes, b/c we believed it was necessary. We were
fortunate to have the EFF, which gave us the confidence to go forward; we felt
that the probability was relatively low.
The system’s been repaired. But w/out that the system might still be
broken today. I now begin every project w/ a call to a lawyer for a 1201
mitigation possibility. I still get pro bono representation, but many
researchers aren’t so fortunate. Also, good faith research shouldn’t require
lawyers; increases the cost of every project.
Reid: We predicted in 2006 that Sony rootkit wouldn’t be the
last dangerous/malfunctioning TPM. We vastly underestimated the widespread
vulnerabilities that can be caused by and concealed by TPMs—intermingled with
everyday consumer goods including cars, medical devices, internet
software. Chilling effects have become
ever more pernicious—a roomful of the nation’s top security researchers stands
before you today highlighting the threats they, their colleagues, and their
students face in trying to make America a safer place to live. Existing
exemptions show security to be a priority but are not enough to avoid attempts
to silence their work, which is protected by the First Amendment and protects
the public. In 2006, Peters rejected
projection of worsening TPMs and recommended against a broad exemption, but
that prediction was prescient. Lengthy
record of security vulnerabilities that could have been avoided w/a workable
exemption. Researchers before you today
are the good guys. They care about abiding by the law and they need breathing
space. W/out your help they will lose an arms race to bad guys who don’t care
about violating 1201.
Q: 1201 exemption for video games—was that too small?
Reid: the issue was not with the piece of the exemption that was granted, but that the vulnerabilities around DRM patched w/video games were just one piece of an evolving threat. The evolving piece was in things like cars, medical devices. It was the narrow piece that said security researchers could look at TPMs only for video games.
Q: but the exemption had other limits—info must be used
primarily to promote security of owner/operator, and must not be used to
promote copyright infringement. Does
that restrict research?
Reid: it’s hard to tell you—the subsequent vulnerabilities were
not necessarily in video games. Folks took a look at exemptions and said video
game exemptions were too narrow to do research.
If added to broad exemption, we’d have some of the same concerns—don’t
have certainty w/words like “primarily.”
Q: your proposal is “for the purpose”—how much more
certainty does that provide you? Existing statute says “solely”—congressional intent?
Reid: the more certainty we can get, the more mileage
researchers will get. Post hoc judgments are problematic b/c it’s hard to say
up front what the primary purpose is.
Q: but there will always be post hoc judgments. We also have to ask what is good faith. We have to draft language for an exemption—we
want to understand what kinds of limitations might be appropriate in language
that balances need for less post hoc analysis with some definition of what it
is we are allowing. Congress did act in
this area, which is guidance about what Congress was thinking at the time. [But the exemption procedure is also guidance about what Congress was
thinking at the time.]
Reid: clarity about what these limits mean: being used to facilitate © infringement—opponents have said that simply publishing information about a TPM/software might facilitate copyright infringement. Guidance that the acts we’re concerned about here, outlined in the comments—investigating, research in a classroom environment mostly, being able to publicly disclose the results in a responsible way—are covered. If you enable that, that’s the most important piece we’re looking for in limitations.
Charlesworth: tell me more about a classroom environment. Should an exemption be tied to the academic community?
Reid: Student ability to work on this is really important,
but we wouldn’t support a classroom use limit. Private sector and amateur
security researchers are really important, building skillsets.
Charlesworth: should a university researcher oversee all of
this research?
Green: very concerned about that. The most dynamic/important
research is being done by people in the private sector, commercial security
researchers. The vehicle security research is funded by DARPA but worked on by
Charlie Miller, unaffiliated w/university. Very similar with other kinds of
research. Some is authorized, but the vast majority is done by private
individuals w/access to devices. Recent cases: researchers told to back off b/c
of DMCA. One happened just a couple of
weeks ago.
Andy Sayler, Samuelson-Glushko Technology Law & Policy
Clinic at Colorado Law: Heartbleed, Shellshock, numerous vulnerabilities in the
last year. Logjam—a week ago.
Q: was that done without circumvention?
Green: we don’t know. Some public spec, some looking at devices.
Sayler: note that much security research is funded by the gov’t. 1201 is used to discourage independent security research. Congress didn’t intend good faith research to be suppressed, but the statutory exemptions are ambiguous and impose undue burdens. An Ioactive researcher was threatened w/DMCA for exposing vulnerabilities in Cyberlock locks. Significant personal risk/unreasonable legal expenses to mitigate risk.
Mark Stanislav, Rapid7 security consultant: Last year
assessed Snort, a toy that lets
parents communicate with children over the internet. Oinks to signal new message. Child can reply.
The security features were flawed; unauthorized person could communicate
w/child’s device and could access name, DOB, and picture of child as well.
Contacted the vendor; despite my offer to go into details w/engineers, vendor
wouldn’t engage and made legal threat, saying I must’ve hacked them. Productive dialogue eventually occurred and
resolved issues. Situation made me fear for my livelihood.
Q: did you discuss DMCA exemptions?
Stanislav: I wasn’t privy to the lawyers’ conversations. I
understood that I was at risk. My goal
was protecting children, but it wasn’t worth a lawsuit. I found vulnerabilities in my own webcam that
would allow a criminal to access it.
Direct risks to privacy and safety. I contacted the vendor and offered
assistance. The final email, after the tone went from friendly to threatening, wanted me to meet w/them b/c they said I might have accessed confidential information. Entrepreneurs who made Snort have gone on to
win numerous awards. What if a criminal
had abused these and put children in harm’s way? Webcam: new leadership came in
and apologized. Research prevented harm, privacy violations, allowed businesses
to fix critical flaws before adverse impact. We help people/businesses who don’t
know they’re in harm’s way/putting people in harm’s way. We live in a time when a mobile phone can
control an oven. Smart TVs have microphones listening to us all the time.
Please help widen the collective efforts of security research; the researchers who stay away from research b/c of the DMCA are the problem.
Steve Bellovin, Columbia University: Researched in private
sector for decades. Academic research is generally concerned with new classes
of vulnerabilities. Finding a buffer overflow in a new device is unpublishable,
uninteresting. Most of the flaws we see in devices we rely on are serious but
known vulnerabilities. Not the subject of academic research; the independent
researchers are the ones actively protecting us. Students unlikely to do it; I
discourage my PhD students from looking for known vulnerabilities because it
won’t get them credit.
As a HS student, I wrote a disassembler so I could study
source code. That’s what got me to where I am today. Arguably would be illegal today, if I wanted
to look at a smartphone. Four years later, I caught my first hackers. I teach
my students how to analyze and attack programs. I coauthored the first book on
firewalls and internet security. You
have to know how to attack in order to secure a system. To actually try an
attack is a hallmark assignment; it’s not the only way, but it is one of the
ways. He’s a copyright owner who has profited a great deal from copyright, but wants a balance. An 1853 treatise on whether it’s ok to discuss lockpicking: truthful discussion is a public advantage. Harm is counterbalanced by good.
Andrea Matwyshyn, Princeton University: These questions are
about frivolous litigation that attempts to suppress discussion around existing
flaws that may harm consumers, critical infrastructure, economy. Help curb frivolous 1201 litigation.
Charlesworth: on the issue of disclosure: you’re suggesting
that mfgrs tend to shut down the conversation, but isn’t there a countervailing
interest in giving mfgr some time to correct it before public
dissemination? I understand bad hats are out there already, but for hacking into something more mundane like a video console, there are probably people who don’t know how to do that who might be educated by disclosure.
Matwyshyn: there are two types of companies. Some are very receptive to this—FB, Google, Tesla have bounty programs that compensate researchers. Processes in place w/a clear reporting mechanism on websites and internally ID’d personnel. The second type has not yet grown into that sophisticated model. So it’s this second category, which doesn’t possess the external hallmarks of sophistication, that reacts viscerally, through overzealous legal means and threats. The
Powerpoint I shared has a copy of one of the DMCA threats received Apr. 29,
2015. Hearing Exh. 10, a letter from Jones Day to Mike Davis, a security researcher at Ioactive, a security consultancy. It regards repeated attempts to
contact Cyberlock about their product.
They used DMCA as a threat.
Q: Cyberlock seemed to have taken the position that Davis
insufficiently disclosed. [Actually it indicates that he didn’t want to talk to
Jones Day, not that he didn’t want to talk to Cyberlock, which makes sense.]
Matwyshyn: he was ready to share that with technical
team. Subsequent followup email in
record explains and identifies prior instances of threat.
Q: if you granted the proposed exemption in full, would that
change the outcome? If a company is going to engage in frivolous litigation, we
can’t stop that.
Matwyshyn: I believe it would help a lot. The note from Ioactive’s general counsel: GC’s
perspective is that it seeks a strong basis for defense. Expresses concern that litigation to the
point of discovery can cost $250,000. When we’re talking about a small security
consultancy or independent researcher, engaging w/the legal system is cost
prohibitive. A roadmap exemption would
give a one-line statement of reassurance that a GC or security researcher could
send to a potential plaintiff. W/exemption, Jones Day would be less likely to
threaten DMCA as basis for potential litigation. Provided that Cyberlock has in place a
reporting channel that the researcher used, and researcher disclosed the list
of disclosables we proposed, that would provide a clear roadmap for both sides’
relationship in the context of a vulnerability disclosure. A significant improvement over the current murkiness; a more easily discernible question of fact.
Q: One of the elements is that the manufacturer had an
internal management process. How would a researcher verify that?
Matwyshyn: the researcher needs a prominently placed
reporting channel. The additional requirements are not researcher-centric, but
assist figuring out what happened if something went awry. The researcher need
only assess whether there is a prominently placed reporting channel.
Q: you want a front door, but you’ve put other elements in
your proposal—the creation of an internal corporate vulnerability handling
process. Opponents have said a researcher wouldn’t even know if the company had
such processes in place. How would they know?
Matwyshyn: the later parts are only for a subsequent finder of fact. Suppose the researcher used the front door and then the sales department loses the report—the exemption protects the researcher.
Q: but does it give ex ante comfort? The researcher won’t know if that will
happen.
A: if it goes off the rails b/c the internal processes weren’t
in place, the researcher has a second
tier ability to defend if the disclosure results in a threat.
Bellovin: In almost 30 years, it’s been remarkably hard to
find ways to report security vulnerabilities. I know security people and can generally find an informal channel.
But put yourself in the position of someone who has found a flaw and
doesn’t know me. If this vulnerability
is a threat to safety, public disclosure is a boon.
Matwyshyn: Henninger attempted to contact 61 companies about
a vulnerability. 13 had contact info; the rest she had to guess at a point of
contact. 28 humans responded out of 61. A
different 13 said they’d already fixed it.
6 subsequently released security advisories b/c of her report. 3 were
after the intervention of ICS-Cert contacting the provider in question.
Q: the suggestion is that she made a good faith attempt and documented her attempts to notify. Isn’t that a more objective standard than having her know the internal processes?
Matwyshyn: the judgment point for the researcher is “is
there a front door”?
Q: in many cases they may have a front door [note:
contradicted by the record], but you could try to figure that out and keep a
record if your attempt was unsuccessful. You shouldn’t have to know the internal
workings of the company. [This is the proposal, though! Right now you have to know the internal
workings to reach someone if there’s no front door. Under the proposal, you don’t
have to know the internal workings, but you know that if you deliver through
the front door you are protected!]
Matwyshyn: right now you get a chill even with documented
attempts.
Q: but some companies will just threaten you no matter
what. You won’t avoid that entirely. If
we go down this road, how will people know?
If the standard relies on how people handle things, how will they know?
Matwyshyn: if the front door exists, the researcher should
use it. If the disclosure goes off the rails, the researcher gets an extra
boost. W/out legal team, you can assess
whether there is a front door and thus whether you should use it.
Q: why shouldn’t they try other methods if there isn’t a
front door? You try to figure out who owns something in copyright all the time.
You’re saying we should have a standard that everyone has to have a front door.
[Why is this a copyright issue? What is the nexus with copyright infringement?
Why not follow the ISO security recommendations? Why would the Copyright Office
have a basis for knowing better than the ISO standard how vulnerabilities
should be reported?]
Matwyshyn: The ISO standard was negotiated across years and
stakeholders.
Q: those are big companies, and this law would apply across
the board. Manufacturers who don’t have the
resources might not know. We have to
think of them as well. [B/c of their copyright interests?]
Matwyshyn: could identify copyright contact as point of
contact.
Q: for DMCA that’s a statutory requirement. [Um, “requirement”
if the companies want the DMCA immunity. If they want people to report
vulnerabilities to them, why not have them follow a similar process?]
Matwyshyn: Congress did discuss security as well—you can
expand the concept/clarify it.
Matthew Blaze, University of Pennsylvania: History of
security research, including on Clipper chip and electronic voting
systems. Two specific examples of DMCA
issues, though it loomed over every nontrivial work I’ve done since 1998. Analogous to Ioactive/Cyberlock issue, in
2003 I decided to look at applications of crypto techniques to other types of
security: mechanical locks. Discovered a flaw remarkably similar to that discovered by Ioactive: could take an ordinary house key and convert it into a master key that would open all locks in a building. Real world impact and interesting use of
crypto; master key systems need to have their security evaluated. Purely
mechanical, no TPMs. And so publishing was simple and without fear. But other work is chilled by the DMCA.
Example: in 2011, w/grad students studied P25, a communication system used as a
digital 2-way radio by first responders, including federal gov’t. Examined standards for the system as well as
the broad behavior of a variety of radio products that used them. Discovered a
number of weaknesses and usability failures, and discovered ways in which the
protocols could lead to implementation failures. To study those failures, we
would’ve needed to extract the firmware from actual devices. But we were sufficiently concerned that extracting the firmware and reverse engineering it, and in particular developing tools that would allow us to extract the firmware, would run afoul of the DMCA. So we left a line of research untouched. If we had the resources and the time to engage in a large legal effort to delineate the parameters, we could possibly navigate that, but under the DMCA as written we decided it was too risky.
Q: why not 1201(j)?
Blaze: w/o getting into atty-client privilege, the essential
conclusion was that we were in treacherous territory, primarily b/c we would
have needed to reverse engineer, see if implementation failures we anticipated
were present, and effectively build our own test bed along the way. We
approached a few manufacturers and attempted to engage with them and were
ignored or rebuffed every time. We realized the relationship would be hostile
if we proceeded. The anti-trafficking
provision would have been particularly problematic b/c we needed tools for
extracting—a colleague in Australia examining the same system had developed
some tools and expressed interest in working w/us, but we couldn’t.
Q: is there a norm of trying to disclose before publication?
Blaze: certainly there are simple cases and hard cases. In
simple case, we find particular flaw in particular product w/well defined
manufacturer w/a point of contact. Sometimes we can find an informal
channel. As someone who is an academic
in the security community and wants to work in the public interest, I don’t
want to do harm. Disclosing to the
vendor is certainly an important part. But in other cases, even identifying the
stakeholders is often not so clear. Flaws found in libraries used to build a
variety of other products: we won’t always know who all, most, or even some of the dominant stakeholders are.
Q: when you do know, is it a norm to disclose in advance as
opposed to concurrently?
Blaze: it has to be case by case. There is a large class of
cases when we have a specific product that is vulnerable, and we can say “if
this mfgr repairs, we can mitigate.” But in other cases it’s less clear where the
vulnerability is present and it may be more prudent to warn the public
immediately that the product is fundamentally unsafe. Reluctant to say there’s
a norm b/c of the range of circumstances.
Green: In some cases like last week there’s mass disclosure—you
can’t notify the stakeholders all at once. If you notify, they may leak it
before you want it public which can cause harm.
Sometimes you want to be very selective.
If, let’s say, 200 companies are affected, you can go to Google/Apple
and trust the info won’t leak, but beyond that the probability that the problem
becomes public before you want it to is almost one—I’ve had that happen. Heartbleed was an unintended leak—too many
people were notified of a mass vulnerability, and many systems including Google and Yahoo! were not yet patched as a result of the disclosure coming two weeks early.
Charlesworth: so are you saying that disclosure should be
limited?
Green: there is no single answer you can write down to cover
it all. Heartbleed: a massive vulnerability affected 1000s of sites. Notifying Google would fix and protect maybe 50% of end users on the internet; notifying Yahoo! would protect 25%. As you go to a smaller website, now you’re protecting 200 people
but probability of leak is fairly high. Then criminals can exploit
vulnerability before it’s patched. Has
to be customized to potential vulnerabilities.
Reid: you’re hearing a theme that this is an issue for the
judgment of security researchers, and it’s only b/c of the DMCA that suddenly
this is the realm of copyright law. Getting pretty far afield of Congressional
intent to mediate these judgments and their complexities, which take a lot of
negotiation, as Matwyshyn underscored w/ISO. We would strongly caution the
Office against being too prescriptive. (1) If there wasn’t a lock involved, we’d
just be talking about fair use. In that case it would be up to the researcher
how to disclose. Whatever copying was necessary for research would be the only
issue; the fruits of the research would be free and clear. (2) Remember the
First Amendment interests and the prohibition on prior restraint. A rigid structure on when someone is allowed to speak would be troubling even if the policy judgments weren’t complicated.
Charlesworth: did you brief the First Amendment issues?
Reid: not in detail.
Charlesworth: Congress considered this in making disclosure
a factor. What you’re saying is that
sometimes you should disclose, sometimes not.
Reid: Congress can’t contravene the 1A, even in enacting the
DMCA.
Charlesworth: but looking at disclosure to manufacturer is a
factor—maybe that’s not such a bad way to think about it.
Reid: factors mentioned in (j), to the extent compatible w/the 1A, can be read as probative of intent to do security testing or something else. Reading them as limitations on speech after circumvention is performed is constitutionally troubling.
Charlesworth: that’s a brand new argument, and I’m not
troubled by (j), but there’s a lot of commentary about disclosure. Google has a 90-day disclosure standard; you’re
saying there should be no standard, though Congress clearly had something in
mind. [Would having a front door be
consistent with being the kind unlikely to leak?]
Blaze: As academics and members of the public research
community, the aim of our work is to disclose it. The scientific method demands
disclosure. Someone building tools to
infringe is not engaging in research.
The issue is whether or not the work is kept secret or disclosed to the
vendor, not whether it’s disclosed to the vendor in advance. No one here is
advocating keeping research secret—trying to protect research we will publish
and will benefit everyone.
Bellovin: twice in my career I’ve withheld from publication
significant security flaws—once in 1991 to delete a description of an attack we
didn’t know how to counter. Because security community wasn’t aware of this
publicly, the bad guys exploited the flaws before fixes were put in place. It
was never seen as urgent enough.
Published the paper in 1995, once we saw it being used in the wild and b/c the original memo, shared only with a few responsible parties, ended up on a hacker’s site. The security community didn’t care until it became public.
Other case: vendors were aware of the problem and didn’t see
a fix; once it was in the wild, others in the community applied more eyes and
found a fix. In both cases, trying private disclosure actually hurt security.
Matwyshyn: (1) 1201(i) concerns are slightly different. (2)
In our filings we did discuss the First Amendment, should the panel wish to review the cited law review article. (3) Google’s a member of the Internet Association,
which supports our approach. (4) Frivolous litigation: the benefit of a clear exemption is that it allows researchers to feel more comfortable contacting vendors earlier, rather than needing to weigh the risk of litigation to themselves; contacting later is now something you do to mitigate the risk that they’ll sue you before you disclose. Providing comfort would encourage earlier contacts.
Laura Moy, New America’s Open Technology Institute
I’ve encouraged the Office to focus on © issues and not
weigh the policy issues as opponents have suggested. But consumer privacy is relevant b/c the
statutory exemption for privacy indicates Congress’s concern therefor. Some
opposition commenters have cited privacy concerns as a reason not to grant an exemption—but that’s wrong. Remove roadblocks to discover
security vulnerabilities. As many others
have pointed out, vulnerabilities often expose consumer info and they need to
be found. Malicious attackers are not waiting for the good guys; they race to
do their own research. They are succeeding. Last year, CNN reported 110 million
Americans’ info had been exposed—these are just the ones we know about. Need to find them as soon as possible,
dismantling roadblocks.
Vulnerabilities should be disclosed so consumers can
incorporate security concerns into decisionmaking. Consumers have a right to
information they can use to make informed choices. Also bolsters vendors’
economic incentives to invest in security—publicity can be harmful if a product
is insecure, and that’s as it should be. As a consumer, you should know of
known vulnerabilities to Cyberlock’s product before you purchase.
Vulnerabilities should be disclosed so that regulators
enforcing fair trade practices know whether vendors are adhering to the
promises they’ve made and using good security practices. FTC says failure to
secure personal information can violate FTCA; state laws too. Enforcement requires
understanding of security. Often rely on independent researchers. FTC recognizes that security researchers play
a critical role in improving security.
Erik Stallman, Center for Democracy & Technology: the statutory exemption covers security testing done only with the authorization of the network operator—in a world of internet-enabled devices, it can be very difficult to determine who the right person is.
Q: says owner/operator of computer, not owner of
software. Even if I don’t own the
software, can’t I authorize the testing?
Stallman: it’s unclear if that’s sufficient—if your banking
network is connected to a VPN, it may be the source of a vulnerability. Are the computers at your ISP covered by
this?
Q: presumably if I hire a VPN provider I can ask them for
permission to test the security.
[Really? I can’t imagine the VPN provider saying yes to that under most
circumstances.] I can buy a pacemaker
and run tests. If you own that
pacemaker, you can run tests.
Stallman: You may need to go up the chain. The moment you
fall outside, you fall outside the exemption [e.g. if you communicate with
another network].
Q: so I want to test HSBC’s systems to know if they’re
secure. Will the exemption allow this test without permission of third party
server?
A: Something like Heartbleed—a ubiquitous exploit on many
systems. You shouldn’t need to go around and get permission. Accidental
researcher: may come across vulnerability when engaged in different research.
Often researchers won’t know what they’re looking for when they start looking.
1201(j) limits what they can ask.
Q: I want to test a website’s security. Can I test it under
your proposal? Say I bank at HSBC and want to test it.
Stallman: So long as you’re doing good faith security
research, yes.
Q: but Congressional history says shouldn’t test locks once
installed in someone else’s door. Does
your proposal require any authorization, or is there a proposal requiring at
least an attempt to seek authorization?
Stallman: the problem is it’s hard for the researcher to
know/stay within stated scope. Or authorization can be revoked/cabined. You could ask HSBC, but then what do you do
if they say no? Then you’re out of luck.
Q: legislative history suggests authorization is important.
Stallman: the internet environment is very different from
enactment of 1201(j); House Report said that the goal would be poorly served if
they had undesirable consequence of chilling legitimate research activity, and
that’s the problem we have now. 1201(j)
is not providing the protection that researchers now need.
Q: nuclear power plants/mass transit systems—should we allow
testing of live systems? How would this
research be conducted?
Stallman: many critical infrastructure systems depend on the
same software/applications that run other systems. Should not be able to stop
research on systems widely in use by other people.
Q: but if this can be tested by off the shelf software,
shouldn’t it have to be?
Reid: to the extent the Office reads (j) very broadly, if you could put that in the record/conclusions in the proceeding, that would be very helpful. The primary concern: one
interpretation is that the computer system is the bank. The concern is to the
analogy in the legislative history—the TPM on that system is not protecting the
bank’s property. It’s protecting the software. The company will claim we aren’t
the owner and that we aren’t engaged in accessing a computer system, but rather
engaged in accessing software running on that computer. That is the ambiguity
that has crippled (j) in the past. We would agree with your interpretation of
(j) in court, but when we’re trying to advise folks we have to acknowledge the
multiple interpretations.
Charlesworth: we haven’t come to any conclusions about the
meaning of (j). Your point is well taken.
We may get there, but we aren’t there yet.
Reid: Think about the standard for granting an exemption:
the likelihood of adverse effects. You’ve heard that uncertainty produces the
adverse effects. You need not have an ironclad conclusion about (j) in order to
grant this exemption. If you conclude that there are multiple interpretations but a chill, then you need to grant the exemptions.
Bellovin: (j) is for testing my own bank as an employee. I
might be able to take precautions, or not. Even the most sophisticated users
would have trouble remediating a flaw in an iPhone. We aren’t talking about
device ownership, but vulnerabilities not in the device but rather in the
software—not the flaw in our particular copy but the class of copies, which manufacturers often don’t want to hear about/don’t want anyone to hear about. If a flaw is serious enough I may not use my copy, but it’s the other
instances owned by others that need protection.
Stallman: Just note that security experts have signed on to
comments about the chill. The general point is that because (j) has CFAA, Wiretap Act, Stored Communications Act, etc. references, it has the unfortunate effect of
compounding/amplifying uncertainty. It’s not satisfying to say that other legal
murkiness means we shouldn’t address this issue—this is one thing the Office can
do and send a clear signal that this is an area that Congress should look at.