Opening Keynote by Irene Roche-Laguna of the European Commission’s DG CONNECT group on origins and aspirations for the DSA
People thought it couldn’t be done; didn’t know whether it would
be a directive or a regulation. But took only 6 months to agree and 22 months
to be adopted. We also managed to keep the substance—3 red lines: country of
origin, safe harbors, prohibition of general monitoring obligations—widely acknowledged
for its balance. Council often turns a beautiful baby into a Frankenstein’s
monster; it was very close to being much worse. In Council, we had proposals
for staydown; Netherlands asked for a modest duty of care; Germans wanted a
24-hour deadline copying NetzDG; Parliament wanted liability exemption for marketplaces
except for illegality; staydown; and prohibition of automated filtering that
would have prevented spam filters.
Now the EU is ahead of the US—in Gonzalez, SCOTUS is being
asked about recommendation systems; but we’ve already answered that.
Recommending videos on the basis of user behavior is not enough to show
specific knowledge of illegality. Twitter: does a platform that scans for
terrorist activity become liable merely b/c it could have taken more aggressive
action? EU has answered in eBay case, no, it has to have actual knowledge,
usually triggered by a valid notice. “Good Samaritan”: implementation of
measures detecting content that may infringe the law does not constitute
knowledge and control over that which escapes detection. We win!
DSA improvements on Ecommerce directive: new, clarified and
linked to due diligence obligations.
New is important b/c it’s a democratic endorsement. Qs: “These
companies know everything about us; how can they not know what’s illegal? How
can they not know when illegal content is reposted? They make money from third
party content and should be responsible for it. The Ecommerce Directive you
point to is so old, adopted when the internet was new.” This legal ageism was
the last resort of critics. Although it was opening Pandora’s box, a hornet’s
nest, and a can of worms at the same time, it was worth redoing. Fortunately
the red lines held and the democratic mandate of the safe harbors was
respected. This was not easy—the first question asked in committee was what
about staydown. But Art. 8 prohibits general monitoring obligations, which is a
success.
Clarified and a regulation instead of a directive: That’s
important b/c a directive is transposed into national law, w/27 potential
different means. Some member states would define actual knowledge to be limited
to manifestly illegal content; some wouldn’t; some would have notice and takedown,
others didn’t; etc. They tried to get the DSA to be a directive, but it’s not—the end
of legal fragmentation. And the rules are clarified by incorporating longstanding
caselaw in relation to safe harbors over 22 years of application/interpretation,
especially about when a provider plays an active role leading to knowledge and
control. Active/passive is not an important distinction; notice has to show
content is manifestly illegal w/o need of detailed legal examination. Suspicion
of illegality is not sufficient.
Liability exemptions are independent of a full set of due
diligence obligations. This is the major DSA regulatory contribution—splitting due
diligence from liability for third party content. National courts have pushed against
safe harbors to get platforms to do something—immense pressure on the safe
harbors. Liability and social responsibility were mixed in debates. National
courts had to impose a duty of care or accept safe harbors as a hands-off approach.
But DSA allows protection for third-party content while expecting the platform
to act diligently. And it’s fully harmonized, meaning that states can’t try to “top
up” the DSA. If the platform is diligent, it is protected from liability even
if the content is illegal. There were attempts to make safe harbors conditional
on due diligence, but they were not accepted. Judges will not be auditors of
DSA compliance.
Due diligence obligations focus on procedures, not on
content—what is illegal. No admin oversight of content. The majority of
moderation decisions are not about removal, and not about illegality. Users
also need redress/transparency about those decisions.
Three characteristics of DSA that are building blocks: (1)
single market nature, (2) proportionality, (3) process effect. Single market
effect: harmonization of national rules, like US federal preemption. Helps
service providers pay engineers instead of lawyers; uses the country of origin
provision, under which they’re subject to compliance only in their home member state. Legal
fragmentation is bad for businesses and legal certainty. Easier said than done,
but DSA centralizes and neutralizes enforcement against systemic risks posed by
VLOPs and VLOSEs.
Balanced approach: if we regulate only with Google in mind,
we will only have Google, so rules needed to be proportionate to size and
capacity of providers. Higher responsibilities for services that are higher in
the food chain—infrastructure providers are different from consumer-facing
providers; this creates the distinction between transmission and hosting and
VLOPs. And startups/small providers have more protections.
Brussels effect: is this worth exporting? GDPR was met
with skepticism and caution, then emulated around the world. DSA could be the
same. Could be worth exporting even to less democratic countries b/c of the
checks and balances and judicial control. [This seems in tension with the claim
yesterday that the DSA looks for good guys and bad guys—a system that works
only if you have very high trust that the definitions of same will be shared.]
Panel 1: How the DSA Shifts Responsibilities of Online
Service Platforms
Moderator: Erik Stallman, Berkeley Law School
Designing Rules for Content Moderation: The Shift from
Liability to Accountability in Europe
Martin Husovec, London School of Economics
Principles that could be useful in trans-Atlantic dialogue:
Many provisions are too European for US, like risk mitigation. [Ouch.]
Framework was validated over time as the right one: liability safe harbors are a success
story for the internet b/c they created breathing space for expression and new
services. Ecommerce directive was regulating the member states, not the
services—trying to coordinate how they could regulate in their own
jurisdictions; national regulation and self-regulation was the intent. Second
generation in DSA: try to turn a previously unregulated industry into a regulated one, especially
the largest subset.
What are the building blocks that could be useful abroad?
Four principles:
(1) DSA has horizontal rules, not sectoral
fragmentation; covers all areas of law. [But see yesterday’s discussion of ©.]
That made it easier to adopt. Art. 17 does interact, but DSA creates safeguards
that member states might not have wanted to enact. Avoids problems of
regulatory arbitrage. 230 v. DMCA—one set of horizontal rules avoids that. And
proportionate rules are easier b/c they look at all sides, not just complaints
of one industry. Risk mitigation allows you to think both about overblocking
and grievances of © owners.
(2) Builds on liability safe harbors: we regulate by
allocating responsibility and sharing burden, not pinning blame on one actor. Victims
are partly responsible for mitigation of harms, as are providers and users. DSA
renews democratic support for this, which is not a small thing among publics
and courts.
(3) Look at ecosystem, not platform; everyone should
be part of the solution. Users, providers, notifiers, and more need tools. DSA
promises priority for high quality notifications, and notifiers that misbehave
can be suspended. Instead of focusing on damages, we’re focusing only on
suspensions and giving both carrots and sticks.
(4) Separating new regulatory expectations from underlying
social contract around liability. In DMCA, repeat infringer policy is connected
to liability protection; in DSA it is not. DSA prioritizes taking action over
compensation. Lack of statutory damages/attorneys’ fees is an improvement.
US caselaw was instrumental in early days, as were DMCA
notification standards. At this point the EU approach has matured and many DSA
tools can’t be transplanted into the US First Amendment environment, but these
four principles could help guide thought about reform.
“Human review” as the New Panacea of European Platform Law
and Beyond? The Emerging European Standards for the Interplay of Algorithmic
Systems and Human Review in the DSM-Directive, the DSA and the proposed AI Act
Matthias Leistner, LMU Munich Faculty of Law
Algorithms are strong at pattern recognition and identifying
protected content, and to a certain extent the degree of similarity to
protected content. Encourage best possible human/AI models; we know too little
to decisively regulate. Need to keep it flexible and encourage competition.
Art. 17 was problematic due to heavy political lobbying. DSA
stands a chance of building on transparency obligations. Red flag: maximum
transparency isn’t optimal—information overload and maximum transparency in
content moderation can lead to users gaming the system and create a battle of
algorithms. Some transparency is needed for users, others for researchers and
auditors. Notice and action mechanism/internal complaint handling system
requirements of DSA relate to algorithms.
Proposed AI Act is a sector-specific regulation of AI
techniques that might overlay onto the DSA; also GDPR might have an impact.
Art. 17: on the one hand, accepted algorithmic blocking, on
the other, tried to make sure it wouldn’t affect legit users, but only by way of
the redress mechanism, which often comes too late. German implementation: manifestly
illegal, blocking; if unclear, notice and delayed takedown/staydown—only ex post. This
is easier where we have a remuneration provision for content owners when the
content remains online. Easier for music than for movies, which depend on
exclusivity (well, whole movies).
DSA starts from premise that algorithms will be used; notice
and action can be purely algorithmic, w/o human review, just statement of
reasons. But internal complaint-handling must be taken under supervision of
appropriately qualified staff and not solely on the basis of automated means.
Human content moderation isn’t necessarily better: of course the audits can
also relate to the status and situation and role of human content moderators.
So notice → delayed blocking/staydown, while algorithmic decisions can lead to blocking first.
Problem of belated complaint handling in regard to dynamic, potentially viral
content is ignored.
DSA covers all illegal content without prioritization, but
there might be greater/lesser offenses. There is a flexible standard for reaction
times—expeditious/timely. The only prioritization is for trusted flaggers, but
how to specify those standards and roles? Need to prioritize certain policy
issues, but DSA doesn’t seem to allow this. Is there leeway to limit trusted
flaggers to offenses of certain substantiality? Art. 22 says that trusted
status “shall be awarded” on certain conditions; raises possibility of trolling
business models.
Proposed AI Act: risk-based approach; based on sector of use—critical
infrastructure, access to essential services, law enforcement, health services.
But also tech-based: stricter w/r/t biometric identification and categorization
of natural persons. Requires human supervision, which might interfere w/automated
systems for, e.g., identifying a person. That interferes w/the DSA system.
Interventions
Xiyin Tang, UCLA Law School
230 reform has also focused on accountability, human review,
and transparency. Most content that is taken down is for copyright reasons: why
not talk about copyright along with other content moderation? In part b/c of
agreements b/t large platforms and large © owners. These agreements are highly
confidential, which makes it unclear what counts as “infringement” under this
privatized system. When we think about platforms engaging in content
moderation, they don’t have carte blanche: when there are © claims, legit or
otherwise, there are other claimants setting policy, which is then passed down to users
through the platform as intermediary. When Art. 17 was adopted, including good faith
efforts to get authorization from © owners, the largest platforms had already
done so. [As I say, the copyright industries hated Content ID so much they made
it a universal law.] They’re rewriting © policy altogether.
Big problem for transparency. During covid, when live tours
were cancelled, artists broadcast themselves from their bedrooms. FB Live let
users do this for a minute or two at a time; then user accounts were blocked
or suspended; Instagram, in a rare act of transparency, disclosed that Meta had
agreements with large content owners requiring this blocking. But it didn’t
disclose any guidelines—we can’t tell you what they say; use less music, but we
can’t say what the threshold is. Leaked agreements online show the deal parameters
at which a user is deemed to be a bad faith actor leading to suspension,
muting, blocking. But no party wants to disclose those terms. Transparency
requires us to decide how much platforms are required to disclose. E.g., what
constitutes a clear infringement? Copyright owners don’t want to transpose public
law; what would be the point of private ordering otherwise? So they rewrite the
law. Crops up in the US w/fair use—rightsholders don’t like the idea of fair
uses. The Sony presumption of commercial use being unfair was rolled
back in Campbell, but privatized © agreements override Campbell.
Delineate b/t users that can pay and users that can’t. Rightsholders allow the
latter to be covered by a large lump sum from platforms; no one was going to pay
anyway. But commercial users, in the leaked agreements, had their uses blocked/put
into a commercial review queue to allow rightsholders to go into the system and
identify high-value users who could afford a license. Substitutes for fair use.
Eric Goldman, Santa Clara Law School
Implications of DSA on legacy © industries—unintentional benefit.
DSA is written w/expectation that companies will keep doing what they’re doing today,
but level up certain practices. But laws have unintended consequences; what will
change? Seems obvious that platforms will change their behavior, b/c DSA
increases costs of doing business. Minor changes: cost of ADR, cost of audits.
Content moderation is no-win since you can’t make anyone happy; appellate
rights are structural costs, as are transparency mandates. How will services
decrease these costs?
Community of “authors” and “readers”: people flip between
those statuses, but only a small percentage of people who have accounts act as authors
consistently. General rule: 10-20% of content creates 80-90% of revenue. The
DSA will affect the treatment of the long tail.
As practical matter, most authors are in long tail w/ relatively
small audiences that aren’t commercially valuable. Increased costs of catering
to them make their content less profitable or even unprofitable. Obvious
reaction: cut off long tail. Alternative: charge authors to contribute b/c we
can’t make money in existing business model—Musk’s moves w/Twitter Blue.
Over the long term, hits come from pro producers, despite
occasional viral hits. So services will look for hits; will structurally shift
from prioritizing amateur content to professional content.
His predicted countermoves: web was predicated on amateur
content; producers who lacked a mechanism to reach an audience would provide that
content for free—massive databases of free content that could be ad-supported
b/c it didn’t cost much to obtain. DSA shoves that model towards professionally
produced content, making services need something more than ad-supported
business, resulting in more paywalls.
Why does Hollywood oppose 230? Systemic battle to reduce the
overall amateur content ecosystem. That’s why they supported FOSTA—changing the
overall ecosystem.
Losers: niche communities. Fewer places to talk to one
another; hits will focus on majority interests.
Stallman: Statements of reasons for certain types of
takedowns—will that help? [Who doesn’t do that already? Even if you find current
statements vague, the DSA mandate doesn’t seem to create anything new.]
Leistner: these statements will be algorithm-written and thus
at a rather high level. Sometimes this makes sense so the system can’t be
played. The algorithm will just come up with the part of the policy that was
violated, and if the list is long that won’t help much. Still an improvement
b/c the platforms don’t do anything more than they have to. Compare FB/Google
to Amazon: Amazon is efficient on TM but not ©, whereas FB/Google are efficient
w/© and not TM—might be more accountability. [Isn’t this justified by the kinds
of harm that the different services are more likely to cause? That seems like
good resource allocation, not bad.] No standardized complaint procedure/no
human to speak to—the jury is still out on whether the DSA will help.
Goldman: statements of reasons are a great example of the
accuracy/precision tradeoff. Services will emphasize simplicity over accuracy.
Have seen lawsuits over explanations, so services will want to be as generic as
possible. Appellate options: for every good faith actor who might be appealing,
we should expect X bad faith actors to use the appellate process to hope that
they can reverse a decision by draining service resources. For more precise
explanations, assume that bad faith actors will exploit them; explanations for
them just drain resources.
Justin Hughes: don’t understand why long tail content would
disappear—assuming a person puts unauthorized long tail content online, that
won’t be as common by hypothesis, but why would it decrease authorized long
tail content?
Goldman: turn off authorship capacity for many existing
users. Twitter has taken away my blue check b/c I’m not of sufficient status to
retain the blue check & I’m not willing to pay. More of those kinds of
moves will be made by more services. Don’t think that existing userbase will keep
authorship/reach powers.
Tang: Art. 17: more money in authors’ pockets by requiring
licenses is the aim. But that concentrates money and affects which authors get paid.
Makes legacy © holders stronger; creates antitrust problems.
Husovec: Would resist Goldman’s view. Companies that produce
externalities are being asked to pay for them where others are paying now. When
FB doesn’t do proper content moderation, it creates externalities for users, so a
newspaper has to moderate the comment section on its FB page. Forcing FB to
internalize the costs just means a different entity is paying. [I think that’s
Goldman’s point: FB will try to reassert its position and if it can it will make
the newspaper pay directly.] We might go towards more subscription products,
but not necessarily only b/c of regulation but also b/c we’ve reached the
limits of an ad-supported model.
Q: What about misuse/trolling? Will Art. 23 of DSA address
this? Allows temporary suspension for abuse of process. If you as rightsholder
already have access to Content ID, will you have an incentive to become a
trusted flagger/subject yourself to this regime?
Husovec: DSA is just a bunch of tools; outcomes are up to
the actors. Does have tools to disincentivize—suspensions are superior to
damages. Also applies to appeals, and collective action for consumer
organizations if companies don’t terminate repeat offenders. The problem is
whether the supervision of this will be sufficient. Regulator’s role is obvious—can
strip trusted flaggers of status if not good, but will they be monitored? If
services don’t tell regulators b/c of private ordering, or if they don’t become
trusted flaggers b/c they already have Content ID, then it won’t work.
Leistner: questions are interlinked: if the trusted flagger system
is rigorously policed, it’s less attractive to rightsholders. In theory we want
a Lenz-type system with human review for exceptions, but maybe they’ll
be more comfortable with a private system. NetzDG was of limited effect b/c it just
adapts existing policies; maybe privatized systems remain preferable to this
regulated system, but it may offer opportunities to those beyond ©, like human rights
organizations—should be more opportunities for non-© owners to achieve same
results. Small © owners are relatively disadvantaged where large © owners have
access to monetization and they don’t—we already have this problem.
Tang: Songwriters have complained that authors get worse
outcomes through direct licensing. Under consent decrees, they have to report
to authors first. Under direct licensing w/platforms, large publishers skim off
the top first.
Leistner: extended collective licensing would be the European
answer. [My understanding is that those also overallocate to the top of the
distribution.] Would increase costs, but introduce more fairness. Doesn’t fly
right now b/c of the lack of a supranational ECL. But he’s certain Europe will look into
this b/c the link is so obvious. But that would also mean that every post could
cost money.