Discussant: Frank Pasquale
Sorrell is just a
bit murky; this piece sets the agenda for resolving conflicts between privacy
and the First Amendment. Definitions:
data is a fixed record of fact. Fred
Schauer on coverage v. protection; Robert Post on transfers of information that
are regulated, like securities law; sexual harassment; antitrust—Eugene Volokh
says that antitrust couldn’t be constitutionally applied against Google. Bambauer’s piece can be correct without those
arguments, because the essence of sexual harassment is the use of data to make decisions (have sex with me or lose your job),
and antitrust bars the collusion and not the speech. The conduct/speech distinction comes up throughout the paper.
Is all data equally expressive? There is clearly protected speech, but
sometimes you need data to create or target speech. Sometimes you need to
create the data so you can target the speech. Similar to campaign finance:
restricting money ends up restricting speech.
How is data treated by 1A law right now? Clear coverage:
Trans Union v. FTC, which allows restriction of credit report data; US West v.
FCC—effort to stop use of commercial records is troubling; Virginia Board of
Pharmacy—prices are protected speech.
Copyright: facts are uncopyrightable; that furthers First Amendment
purposes. Facts can’t be
controlled. Interaction between Va.
Board and copyright: some cases see price as a copyrightable opinion: CDN v.
Kapes.
Noncoverage: Dietemann case, involving gathering info on a
quack—court says pictures of him at home being a quack don’t count as speech.
Look at purpose or intent of regulators; could also go
through marketplace of ideas, checks on state power, self-determination
theories—in each case, allowing data to be protected furthers those theories.
Potential directions: role of privacy law in keeping us from
making medical discoveries.
Vioxx—adverse reaction reporting. Trade secrecy is perhaps a more
important barrier, Ben Goldacre’s work etc. So more scrutiny of trade secrecy
laws. 5th Amendment property
claims are available in trade secret context; does privacy have a
constitutional standing that would enable us to deal with that?
Many times, when people are surveilled, their speech can be
chilled; the paper could deal with that more in terms of things like people who
are afraid of supporting Occupy Wall Street for that reason.
James Grimmelmann, Speech
Engines
Discussant: Jane Bambauer
Debates about how to regulate search engines, particularly
with respect to bias. Two theories, each
supported by its own venerable camp. Conduit: search engines simply facilitate
sites meeting users. Editor: more like newspaper editors, using judgment and
discretion to figure out which stories are best for particular readers. The editor camp includes Eric Goldman, Eugene Volokh, and probably Jane Bambauer; this view suggests it's hard to regulate search engines.
Editor perspective focuses on search engine’s interest, and
the conduit focuses on the website’s interest.
She’s not sure that’s entirely fair, but going with it: whiny websites
with meritless claims that they should be higher-ranked, and search engines
that may have self-interest. Gov’t intervention should be on behalf of users,
the only ones whose interests aren’t represented here. Both theories have contributions to make.
There are no absolutely objective, perfect search results; users’ searches and goals are constantly changing, so there’s no platonic measuring stick. Judgment is
required. But even if there is a wide range of acceptable results, the conduit camp
is right to suggest that unacceptably manipulative results do exist. Right concept: advisor. I trust Google to
make a good faith attempt to match what I’m asking for and find relevant
websites. The better Google knows me the better my results will be. Fiduciary duties to users of confidentiality
and loyalty, protecting users in the instances where their interests diverge
from Google’s. Loyalty: Google should
make good faith effort to order organic results according to users’
preferences, with room for differences of opinion. Minimum threshold: even if there’s no
external bound on reasonable searches, if the company’s algorithms diverge
significantly from its internal assessment of relevance, then it violates the duty
of loyalty. Which aspects of agency law
we ought to embrace and apply: useful framing.
But we will still disagree about which fiduciary duties will apply.
Comments: If we’re serious about this duty, will Google ever
fail this test? Will we ever know? As long as Google is careful in describing
its users’ interests—how does this apply to Aug. 2012 decision to use high numbers
of DMCA takedown requests as a signal in the algorithm? Does that interfere with its own
determination of relevance?
Not entirely convinced that advisor is meaningfully
different from conduit. Once you sweep away the ludicrous idea that there’s a perfect
unbiased search result, a conduit theory looks similar—looks for ranges of
acceptable search behavior. Most
important feature of conduit/advisor theory: embrace regulation without lots of
1A friction, compared to editor theory.
Slots it into fiduciary duty.
A little quick to arrive at position that gov’t can impose
duty on search engines as it can on doctors and realtors. Dependence, trust,
and vulnerability—takes sweeping definition of vulnerability to encompass
search. Trust is doing a lot of the work. But if trust makes a fiduciary, it’s
hard to find a limit. Is Yelp also a
fiduciary?
Grimmelmann: the Aug. 2012 change was important because it was a change motivated neither by legal compliance requirements nor by users’ interests; it was an intent-thwarting move on Google’s part. It wouldn’t trigger his test in the end
because of the influence of copyright law, but it is a hard point.
Range of acceptable behavior: you get more payoff from
users’ point of view in areas outside search bias. For something like copyright control over
search engines, or thinking about spam/SEO, user perspective is radically
different from conduit perspective.
Duty of candor: if people were afraid to search “am I gay?”
that’s really important.
Frank Pasquale, Platforms, Power, and Freedom of Expression
Discussant: James Grimmelmann
Attempt by Dan Savage to define “santorum.” What happens
when Google accepts attempts to link the term to its new definition? Google
says “it’s not our fault because it’s not our speech.” But if the gov’t said
not to link to those pages Google would say “that’s our speech.” Google’s critics make the same contradictory
moves. Pasquale’s paper is about the way
the 1A takes away important tools for resolving these problems. Institutional power is with Twitter, Google,
etc. to make algorithms, or it’s with public authorities—Pasquale says we need
to retain the option to push the power up to public authorities. The 1A is a
neutron bomb: it kills the people and leaves only the machines making decisions.
Is speech platform speech or provider speech? Finding services—Apple, Google, Twitter—make
disturbing decisions, like Apple not allowing an app that tracks drone strikes; Google refusing to change autocompletes that suggest someone was a sex worker; Twitter not promoting Occupy hashtags as others think it should. Not sure it makes sense to lump these all
together.
Must-carry makes sense at the network level, but hard to
translate to steering. Easy to say the Drones+ app must be in the store, but hard
to say it should come up first or at all if you only search for “drones.” On the open web, you can search for Drones+;
not everything has to be an app.
Right of reply: this has some force in a 1A context, if
Google wants to say it’s not speaking but only reflecting users’ searches. If not speaking, don’t attribute to Google
the fact it’s making a correlation; its speech interest in making the
correlation is attenuated. The argument for §230 is then no longer about protecting Google’s speech but about preventing collateral censorship, which would justify limiting §230.
Disclosure to consumers so they know what they’re getting;
disclosure to regulators so they understand what’s in the black box. EC
settlement w/Google for independent monitor is a good example.
Antitrust: apply it normally without the 1A. Grimmelmann
thinks it’s more complicated because of the algorithm: what does the algorithm
do to our understanding of speech?
Programmer; choice of platform to use it; data; users; unpredictable
results that programmer didn’t intend. Next great free speech challenge.
Pasquale’s theory is that platforms control and tweak the algorithm to make
money—reminded him of vicarious liability in copyright. So should we condemn or celebrate the copyright filter, since that starts to look like what would be required in the absence of §512?
Algorithmic authority: trust Wikipedia because it’s on
average right. Skeptical of authority residing in network users through
noncentralized process; vulnerable to manipulation; privileges those who can
spend the most on manipulation. But then the entities he worries about are
standing between us and the spammers. As between spammers and filters, which is
better for free speech? With no baseline, we need some way to know what a good
outcome looks like in the war of spammers and filters. Pasquale’s direction
sounds like media literacy—no unthinking reliance, but labeled outputs.
If we’re concerned with platform power, ask why platforms
are so prevalent on the internet. May not be natural. May be in part because of
capital markets: investors want platforms so people build them and cash out.
Results: (1) Obsession with scale, and with not having humans doing work. (2) Need for a network of dependency so that you can make money from content providers. (3) Lock-in: DRM on iPhone and integration of multiple products as w/Google are
designed to make it hard to leave, not naturally occurring features of tech.
A more speech-friendly attack might be to go after some of the structural
economic tendencies rather than regulating them after we’ve gone down that road
already.
Pasquale: agrees with the capital markets point entirely.
Part of my paper is about 1A law and part is about internal
norms that big companies should adopt, or what Tim Wu describes in The Master Switch as a constitutional principle w/in the DNA of a company. That’s where Twitter ends up with Occupy; maybe where Apple and Drones+ end up.
Discussant: Rebecca Tushnet
Arbitrarily starting with Grimmelmann: Advisor theory would
lead to different treatment of search engines and other recommenders/guidance
providers than the conduit and editor theories—but a key part of Grimmelmann’s
theory is that he looks for some fault or deception in the advice and doesn’t
consider falsity enough to justify liability.
I would suggest that an advisory opinion designed to convey information
about a fact in the world can be more like product design than it is like First
Amendment speech. If a thermometer is
miscalibrated so that it shows water as warm when it is in fact scalding, why
should there be First Amendment coverage at all, despite the fact that the thermometer
is reporting information in semantic units and despite the fact that the thermometer
was designed by human beings who wanted it to report information?
Grimmelmann suggests that the provision of such false information
is covered by the First Amendment but unprotected, but this requires some careful
parsing of falsity: false might just mean not true, but that’s a very different
meaning than it has in other traditional areas of First Amendment jurisprudence
(or common-law fraud, for that matter) where it means knowingly untrue. These issues are taken up in Bambauer’s
paper, which contends that data is of its nature speech—meaning that bans on
radar detectors, for example, must survive First Amendment scrutiny—and since
these would report true information, the true/false distinction would give such
bans an uphill battle.
More generally, the more our levels of protection depend on
whether or not the speech is untrue as opposed to some other characteristic of
the speech, the more of a premium there is on who gets to decide what counts as
true—and in addition, the more of a premium there is on whether the falsity has
to be knowing, which is often though not always bound up in whether the government
can pick some proposition and find it
to be a fact. What counts as “organic”
for food production, for example? What
happens when someone sincerely believes that the USDA definition is unduly
restrictive?
Grimmelmann suggests that inherent in the nature of the
search engine task is the unity of falsity and intent in cases where a search
engine’s recommendation is insincere.
But that won’t always be the case even in search engine type situations—consider
bad map directions from Google, for example.
It’s not insincerity that creates the potential misleadingness there.
Another key point of the advisor theory: Grimmelmann says: “users
themselves are better placed to know what they want and need than anyone else is.” Sergey Brin said: The perfect search engine
would be like the mind of God. Yet who is this user who already knows what she
wants and needs before encountering what’s out there? Eric Goldman has argued instead: users don’t
know what they want. Julie Cohen, from a very different perspective, has made a
similar point: users are constituted by the search and the fortuitous—or
designed—encounters they have along the way.
If the mind of God answered your questions before they were asked, what
would free will mean? I don’t think it’s
sufficient to say, as Grimmelmann and I think Bambauer both contend, that the
user is in a relatively better
position than anyone else to know, because we often take other normative
positions to the contrary (for example, the positions we regularly take as
teachers—how many of you would think that whether you do your job well should
be entirely determined by student evaluations?—and parents; likewise the
positions of innovative designers—the standard example here is Steve Jobs, who
taught us what we didn’t know we wanted).
Meanwhile, Pasquale contends that the focus on technology
has “obscured the social and political aspects” of search technology. Search
and other technological collection/choke points should “shoulder some burdens
(rather than just pocketing the benefits) of serving as an infrastructure of free
expression.” Google wants to have it
both ways: editor when that serves its interests, and mere conduit when that
serves its interests. Similar dynamics
are at work in large companies’ claims about their rights to collect data
versus their rights to protect their own data against collection or disclosure
through various means from the DMCA to the CFAA and plain old contracts, an
issue raised by Bambauer’s paper.
Pasquale discusses Tarleton Gillespie’s question, Can an Algorithm Be Wrong? Gillespie’s answer is no (it can only be
contestable)—and Grimmelmann agrees; there is no such thing as a wrong search
ranking result, only a misrepresentation about how that result was arrived at. But if that’s the correct position, then we
need to rethink many claims for the transformative power of big data. Transformation is possible and perhaps even
likely, yes, but what new monsters might yet emerge from that transformation? If the answers produced by big data
algorithms won’t be right, only contestable, then the normative claim that we
need those answers is trickier. And of
course we might profitably classify different types of algorithms instead of
saying that no relevant algorithms
can be right or falsifiable—but then we’re back to context instead of rules
based on the nature of some speech-artifact.
All the papers, in their own ways, force us to confront the
risks and benefits of multivalence and contextual analyses—which might not be
the same thing as balancing—compared to categorical approaches that focus on
the logical entailments of propositions about what speech is and what
restrictions on speech are. Both ends of
the spectrum have their attractions, which is why it is and will remain very
hard to declare a winner.
These papers ultimately go to the relationship between
theories of what is protected by the First Amendment and theories of what a
human person is. Bambauer in particular
works from the proposition that the First Amendment has already settled the
question: humans are autonomous and by default should be left to their own
devices (or the devices that non-governmental entities have created for them,
anyway) in the absence of sufficient justification for direct government
intervention into the existing knowledge environment. Because different companies compete with each
other, she suggests, big data doesn’t give them special advantages—AT&T
fell to Microsoft, which fell to Google.
But note who’s left out of that—all of these entities have pretty
significant advantages over their
customers, who now face different
entities capable of controlling their behaviors.
Bambauer worries about privacy laws slowing “the inevitable
rise of some new, as-yet unheard-of company that can make even better use of
consumer data.” But what makes the rise
of a new company (Skynet?) the goal for which our First Amendment policy should
aim, as opposed to moderating and balancing private actors’ ability to control
each other? This is especially true if
we do not in fact live in a perfectly efficient market: Bambauer contends that
one reason not to worry about a free speech right to collect data is that
“credit decisions made on the basis of factors other than income will have the
salutary effect of reducing interest rates for poor-but-creditworthy
consumers.” I don’t think that’s what we
saw with subprime mortgage interest rates, especially for minority borrowers;
if free speech policy is merely economic policy, then inefficiencies in
non-free-speech related areas of the economy will distort the effects of
unregulated speech as well.
So what do we do? Julie Cohen, as Bambauer
acknowledges, offers a challenge to the idea of the individual as autonomous
unit who exists before and independent of data collection: if privacy is
important because it “shelters dynamic, emergent subjectivity” from efforts “to
render individuals and communities fixed, transparent, and predictable,” and if
privacy therefore also protects innovation by protecting play and
experimentation, then many of the promises of big data won’t be realized;
destroying privacy is similar to eating one’s own seed corn. The risk is not inaccuracy but preventing change: the self-fulfilling prophecy;
when manufacturers predict that parents will buy pink princess products for
their girls and fill the stores with pink princess products, the sales results
do not actually speak for themselves.
And the private power point matters particularly because, if
data is speech, then disclosure of data is forced speech. This has significant implications: We are not
equally free to make judgments about which is the best lender for us as the
lender is to make judgments about us as potential borrowers, except insofar as the
rich as well as the poor are free to sleep under bridges. As I understand it, Bambauer thinks the
compelled speech doctrine is a mess, but enshrining data as speech with the
compelled speech doctrine in the condition that it’s in has distributional consequences.
Many complaints about data collection and use are not about inaccuracy; they
are about the ways in which such uses are, in practice, arbitrary and
self-fulfilling (redlining) and unaccountable (the camera points one way). One benefit of putting these papers in
dialogue is that they bring issues of accountability of private power sources
front and center.
Bambauer: it’s very hard to tell when people are stuck in a
particular platform through market manipulation or through preferences. Worth
noting. Our instincts point us in
different directions. She believes it’s preferences because we’ve seen so many
corporations rise and fall. Facebook didn’t kill MySpace; consumers killed
MySpace.
Inefficiencies in lending: they do happen. There should’ve
been banks eager to find good risks and give them lower interest rates than the
other banks, and it didn’t happen. When we find such concrete harms, there’s
good reason to intervene. But she differs in predictions about what’s going to
happen. Would rather see the harms before taking preemptive action and stifling
innovation.
Grimmelmann: common-law baselines. There’s a 1A interest in data collection that
B says can be checked by property and contract baselines. Is that true?
Effective data protection requires more than purely bilateral
contracts—if you go bankrupt or break your contract, I may have no recourse if
my only protections are contractual. Also raises question of how contract
default terms are structured—can the system be structured so that users who ask for confidentiality create a right to it if their signal goes unanswered?
Trespass now imposes restrictions on gathering data. Why
does it receive special treatment for anything other than historical reasons?
One reason is exclusive use promotes efficiency, but drones don’t interfere
with use in the way that other trespasses might. If privacy is the only reason left, then
trespass’s absoluteness no longer makes 1A sense.
Bambauer: now agrees that property won’t necessarily prove a
compelling/important gov’t interest.
Trespass ends up dissolving into the idea of seclusion.
If we agree that I won’t give info away and it leaks, yes,
your only recourse is against me. That’s a risk the 1A expects with the public
disclosure tort, etc. If there’s enough reason to think that my disclosure
harms you too much, the gov’t should be able to meet its burden in the course
of scrutiny.
Neal Richards: Not persuaded that the question “is X
speech?” is helpful for all three papers.
Is search speech? Is data speech? Is programming speech? Are capital
markets speech? If the answer is yes, regulation is harder: this idea reminds
him of Lochner era conceptualism: is
a railroad commerce? Is a garden commerce?
All of this is w/in the 1A, same as all laws discriminate but few
violate equal protection. Equal
protection looks for offensive kinds of discrimination—what gov’t acts threaten
the values? Similar Q under 1A: what
kinds of regulation threaten 1A values? What would each of you allow?
Bambauer: what we need is a purposive test that asks whether
gov’t purpose is consistent with 1A. My
answer might be more expansive: as long as gov’t is trying to prevent someone
from knowing something, then the test is triggered. Can’t use prevention of creation of knowledge
as waypoint to avoid some other harm w/out scrutiny. Can always regulate
endpoint: use of knowledge, transactions.
Grimmelmann: the underlying theory is that we would modify
our treatment of listeners’ interests in the 1A by adding not just their
interest in receiving speech but an element of volition and autonomy in selecting
what speech they’ll see. Undertheorized. A lot of hostile audience cases are
better understood as listener/listener contests where some want to receive the
speech and others don’t. What’s on and
off limits: my approach would privilege users exercising tools to choose which
speech they hear. Do Not Call is entirely unproblematic: a well-informed choice made by listeners who don’t want to hear, with no effects on other listeners.
Makes more difficult the justification of regulating conduits in the interests
of assuring diversity for listeners. (Not to mention compelled speech!) Once you have available info—conduits
themselves neutral and not blocking speech—the idea that conduits need to
balance out speech is less persuasive—shouldn’t be regulating transmission end
if listeners can be selecting what they want to hear. (Internet exceptionalism? How do we think
about cigarette warnings? Is choosing to
go into a grocery store the same thing as running a search on Google?)
Pasquale: Bambauer wants to look at use. We don’t stop companies from getting data,
but could stop them from using it in making credit decisions. But we’re
regulating in the dark if we don’t simultaneously question trade secret protections
that keep us from finding out what’s being used. Fear is that partial realization of various
agendas, including Bambauer’s, may be worse than no realization at all.
Grimmelmann: from a freedom of thought perspective, not
obvious that limiting capacity to know is less invasive than techniques we’d
need to use to determine whether they used forbidden reasons to make
decisions—very intensive scrutiny of their reasons/records would be required.
Bambauer: if that’s how they feel they can just not collect
the info.
G: how would we know?
B: if state requires revelation of what info they used,
might wonder whether compulsion is worse. But it’s easier than ever to reverse
engineer what companies are doing—harder to discriminate on the basis of race
because of information.
Kevin Bankston: Must-carry.
Pasquale discusses dominant, giant, large platforms but doesn’t
explicitly hinge remedies on size. To what extent does or should regulation
hinge on size/market dominance? Must-carry could be applied to Facebook too, but FB wouldn’t have gotten to its size
if it had to host any and every piece of speech anyone wanted to post.
Pasquale: size matters a lot. Would go to antitrust law, not necessarily US
law; has a 2008 piece on this. If you
think regulation is a drag on market entry, that is a concern; but the size
issue can also be used—if we do burden only the big guys, that may encourage
market entry/smallness.
As to FB itself, “Through the Google Goggles” describes ads
not allowed on Google—including criticism of China, done because Google wanted
more Chinese business. Would consider it, though not all speech. In false
advertising, there is a symbiotic relationship between gov’t entities and
private advertisers—FTC and NAD try to weed out troublesome situations. History of doctor rating sites—these little
sites have lots of obligations. NY AG
doesn’t trample on Cigna’s 1A rights when it says they can’t rank and rate
doctors solely on the basis of cost just because they’re an insurer and that’s
what they think matters.
David Thaw: Do Not Call as costless? It is costless in the sense that it doesn’t
directly prevent people who don’t opt out from receiving the speech, but has
nontrivial transaction costs that may make less of that speech come through.
G: that’s reasonable; varies with media. We’ve found that cheap speech means spam;
it’s not necessarily speech-positive to bring the cost of speech to zero because of
increased listener sorting costs.
Andrea Matwyshyn: what is the point at which code—is that
the same thing as data?—should be analyzed under traditional or tech-specific
frameworks? Credit reports now have
mental health information; in a world of maximum data aggregation, there are
real economic losses to excluded individuals.
Children and large-scale data aggregation: the right to forget relates
to developmental psychology—it is important to let children get over
bullying/space to learn and forget identities—other bodies of law, like
contract, allow that.
Bambauer: discrimination against people w/mental health data
on file shows that gov’t can do a lot without scrutiny: can be done through
employment law. Though you may have
liability for negligent hiring, so that piece of info can be important
too. Policymakers are free to regulate
how such info can be used. Regulation of
SSN display might survive scrutiny, but would have to go through scrutiny.
Deven Desai: Caveat emptor doesn’t work for search. But are we all in relations of dependence,
trust, uncertainty with respect to the black box of search? What would the
remedy be? Often, if the agent
discloses, that’s all they’re obligated to do, but that doesn’t really
work—disclosure isn’t enough.
G: to the extent that bad faith is a problem, there’s a
merger between meaningful disclosure to the user in a way that satisfies
standards of informed consent and adhering to the duty. Christian Singles search engine: that’s ok
because they’re upfront about relevance criteria, and such filtering is
socially beneficial.
Desai: weird definition of fiduciary.
G: using fiduciary duty also gives protections for data
disclosure.
Pasquale: analogizing to Wall St. cases on duty of
suitability v. fiduciary duty. Most financial advisers only have the
former—don’t need to give you the best product, just one that’s generally suitable. Similar to Dean Post’s work on bankruptcy law
& lawyers—what lawyers can and can’t say.
S&P was sued by DoJ for its bad, bad-faith job of rating securitized debt obligations. S&P says its claim to be independent and objective was mere puffery. What a fiduciary duty does is prevent these large entities from coasting on the wave of trust they developed over decades and then deciding that they never really meant that they were giving reliable advice.
Leslie Kendrick: B’s paper defines free speech as kicking in
when there’s purposeful gov’t interference w/development of knowledge—what about
government secrets? Can we really draw a line between a negative and positive
right here given the strength of your concept of the right? Does the gov’t have a proprietary interest in
its data?
B: I don’t mean to say we can regulate what the gov’t says
or knows. But she is concerned w/gov’t secrets. Too much deference to the
executive.
Tabatha Abu El-Haj: What are the 1A values in the listener
perspective? Brings up conflict between marketplace of ideas and autonomy
interests. Many people don’t want to hear about politics. Marketplace of ideas
was not brought up as a market but as a clash
where people would contend with each other’s arguments, not just go and pick
from a stall.
G: listener approach can’t supplant a speaker approach
entirely. Within that, it’s true that by
definition the listener isn’t fully aware of the content of the speech,
creating an info gap; you can’t perfectly judge what’s out there; will have echo
chambers and biases. We need some value
given to things other than finding truth. But speaker-focused environments
distort the function of a market of ideas, with people w/outsized influence and access, especially money, shaping who gets heard. More empowerment for listeners to select for themselves, instead of just accepting the defaults provided by those with access to the means of speech, corrects for a failure in the truth-seeking function of speech.
Christina Mulligan: Google says santorum isn’t its fault
because it just reflects what users say, then says it’s choosing its algorithm—but
it’s allowing speakers to speak through Google. Google can still be protected because of what
it facilitates while still saying “it’s not my speech”—can’t that intermediary
function resolve the apparent tension?
Pasquale: but where on the hierarchy would it fit? Comparing Tornillo and Turner: of the two, Google looks more like the cable operator than like the newspaper. The newspaper has a recognizable editorial opinion,
whereas choice of search engines is like choosing a cable provider. There is a right for cable companies to have
their own messages, but hesitant to say that they can stand for everyone since
they reserve the right to kick anyone off.
Mulligan: book distributor in Smith
v. California—should it be free from obscenity liability? Affirmative choice to pick what’s in the
store, but still free from liability (nb: unless it has specific knowledge of
obscenity).
Pasquale: downranking might be ok, but public should have an
indexical interest in finding something if they specifically search for it.
Felix Wu: Each paper privileges listener more than has been
before, either in info collection or in imposing advisory/must-carry duties.
How do we integrate listener and speaker perspectives? Sometimes you seem to suggest we can
downgrade the speaker, but what do we do when we can’t? For Bambauer, how do we run the scrutiny
analysis when someone says “seclusion matters for my own speech interest”? For Grimmelmann, is there some residual left
over for the advisor to have affirmative speech rights? For must-carry, is there something left over
for compelled speech?
G: the home, as a place where the listener is engaged in self-development, is where that interest is at its strongest.
B: but when you invite someone into your home, it’s
different.
Pasquale: trying to integrate must-carry with strains of
feminist/civil rights work about the need for platforms to downplay/exclude
hateful groups. There the platform is
speaking on some level. He would also look at the commercial
interests/decisions v. noncommercial values.
B: in terms of scrutiny, she intentionally avoids questions
of level of scrutiny to be applied. But the reason coverage is appropriate is that the 1A is an experiment in which the gov’t’s argument in favor of what it wants is treated skeptically. We’re doing the same experiment with guns, but this one is a good one. We often undervalue information.