Chair: Daphne Keller: heavy EU compliance obligations + a bunch of other laws coming into effect right as platforms are laying off the people who know how to do this work—a bumpy road.
Impulse Statement: Rachel Griffin: Technocratic approach to
regulation; we associate auditing with clear success metrics (did you make a
lot of $ or not) versus these very political issues with no yes/no answers—what
does it mean to be successful? Rhetoric of auditing, but even in finance
auditing is not that objective; the problems multiply when applied to political
speech. “Rituals of verification” substitute for results in lending legitimacy.
What goal is the rhetoric and framework of auditing actually serving? “Others doing the Commission’s job for it”? Maybe the goal should be to provide the minimal value of accurate information—not just made-up numbers. If so, it would be more helpful to have transparency reports audited rather than risk assessments.
Are we focusing too much on auditing and too little on
platforms’ internal risk assessments, which are a precondition to the audits? Realistically,
any audit report will take what companies have been doing as its point of departure and give them feedback on improving.
Risk of regulatory capture by corporations. Wants to push back against civil society involvement as a solution—it helps up to a point, but it’s not automatic or easy and has its own limitations. Civil society doesn’t represent everyone equally; it’s prone to corporate capture too.
Impulse Statement: Eric Goldman: Hypotheses about what he thinks will happen, meant to be provocative but also sincere. Homogenization of services’ practices: companies will watch each other and figure out what satisfies the necessary audiences. Ossification of content moderation processes: once blessed by an audit, a process won’t change further. Companies will cut as many corners as they can: much depends on how regulators push back on that—we’ve seen it with the GDPR and will see it here, given the scope for judgment calls. In the US we would know that doing the minimum would suffice, but the expectation here is that it would “prompt a dialogue,” though what happens then is unclear. Many of these provisions will be outdated soon or w/in years—fighting the last war. We will see the weaponization of options—everything we’re doing is put through a partisan filter, and the longer we’re in denial about that, the worse things will ultimately get. The rules raise costs for the industry, rewarding big players and punishing small ones, so we’ll see a shrinking number of players offering UGC as a matter of economics. And a switch away from UGC to professionally produced content, w/significant distributional effects.
Frosio: We already saw a number of major newspapers eliminating comment sections after liability for failure to monitor those sections was imposed on them.
Senftleben: A system blessed by audit will continue: is that ok? If the European legislator wanted to open space for startups, the best thing it could do is make established broad services as boring as possible, leaving space for niche services. [That assumes the system will continue to function as content evolves, which does not track past experience.]
Comment: a platform w/a UGC component and a walled-garden component could easily make the conscious decision to grow only the latter—that’s why Spotify is so cautious with podcasts.
Discussion about what it means for content to be UGC—content provided at the request of the recipient of the service. Monetized content can still be UGC, but some forms of monetization may take it out of UGC when the content is provided at the request of the service itself.
Platforms are likely to define risk assessment by looking at the minimum they need to do under the audit, so there are feedback loops.
Elkin-Koren: there will also be pressure to move to sites
that are currently unregulated: WhatsApp viral distribution has been used in
many countries, and it’s under the radar of the DSA. We should also keep an eye
out for that. Generative AI may also change this as people don’t access the UGC
directly. New paths of access and consumption require new thinking.
Schwemer: platforms/hosting services host content and users
provide it. Netflix isn’t covered by the DSA at all—licensed content provided
by the producer. Podcasts=interesting case.
[If you need an invitation to provide content, how do you
count that? Radio over the internet where they select specific shows to stream,
Bluesky? Is the answer how much prescreening goes into the invitation?] Answer:
may need to be litigated. Key definition: Whether hosting is at the request of
the user or of the service. May depend on targeting of users as well. [I can
see how my pitch of content to Netflix doesn’t depend on me having my own Netflix
account/being a Netflix “user,” but I wonder how that generalizes.]
Cable started out as super-open infrastructure—you could put your own content into Amsterdam cable from your own rooftop. Then the economics of consolidation took over. The same thing is happening on YouTube—the line between UGC and “professional” content is very blurry. Are creators asking YT to host their content, or is YT requesting that they provide it? And requiring licensing from YT providers, including individual users, blurs this further.
Keller: advertisers will also say they don’t want their content next to spam, porn, etc. That has influence over policies, usually in a restrictive direction. YT agreed not to consider fair use in takedowns requested by a major movie studio—a concession that affected other users.
Samuelson: We have a more direct interest in researcher access than we have in industry reactions: in public, companies will say “we are doing all we can to comply,” so you have to read the public performance. The private face looks quite different—a lot of hypocrisy, understandably, because you don’t want to appear contemptuous of something even though it’s not well thought through and you don’t think you can really comply.
Keller: then other countries look and say “oh, we can impose
this too because they can comply.”
Samuelson: don’t take statements by the big companies at face value. That cynicism is itself a concern for regulators. Another thing under the hood: how are the platforms redesigning their technologies and services to minimize compliance obligations? The easy one to see is eliminating comment sections. We won’t see the contracts b/t platforms and other entities, which is an issue—those contracts bypass regulatory control.
Dusollier: sanitization rhetoric is very different from © licensing. Don’t invest too much copyright thinking into this space.
Matthias Leistner: there is at least one element w/a clear © nexus: data-related issues. Inconceivable to subsume fundamental issues like creative freedom behind copyright; this is a systemic risk. If the duties also relate to the practice of dealing with data from consumers, couldn’t you at least control for systemic risks in licensing data, e.g., homogenization of content? Or would that carry the idea of systemic risk too far? Journalism that tells people only what they want to hear is a known risk; so are there uses of data which you must not make?
RT: Casey
Newton just wrote about Meta’s new system cards (Meta’s
own post on this here):
Written to be accessible to most
readers, the cards explain how Meta sources photos and videos to show you,
names some of the signals it uses to make predictions, and describes how it
ranks posts in the feed from there.
… The idea is to give individual
users the sense that they are the ones shaping their experiences on these apps,
creating their feeds indirectly by what they like, share, and comment on. If it works, it might reduce the anxiety people have about Meta’s role in shaping
their feeds.
… Reading the card for Instagram’s
feed, for example, the signals Meta takes into account when deciding what to
show you include “How likely you are to spend more than 15 seconds in this
session,” “How long you are predicted to spend viewing the next two posts that
appear after the one you are currently viewing,” and “How long you are predicted
to spend viewing content in your feed below what is displayed in the top
position.”
Note what’s not here: demographics. How did it assess
your likelihood of spending more than 15 seconds/watching the next two posts,
etc.? And did it assess others’ likelihoods differently depending on categories
that humans think are relevant, like race and political orientation? By
contrast, one source Meta cited in support of these “model cards” was an
article that explicitly called for
model cards about demographics. (My favorite bit from this
Meta page: “the system applies additional rules to ensure your feed contains a
wide variety of posts, and one type of content does not dominate. For instance,
we’ve created a rule to show no more than three posts in a row from the same
account. These rules are tested to make sure that they positively impact our
users by providing diverse content that aligns with their interests.” Diversity as a completely empty shell!) This is a really clear example of how they’re
trying to get ahead of the regulators and shape what needs to be disclosed etc.
in ways that are not actually that helpful.
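To make that concrete: below is a minimal, purely illustrative sketch of a “no more than three posts in a row from the same account” re-ranking rule of the sort Meta describes—nothing here is Meta’s actual code, and the post fields, function name, and max_run parameter are invented. Even with the cap enforced perfectly, a single account can still supply most of the feed; the rule constrains consecutive runs, not overall variety.

```python
# Illustrative sketch only: a toy re-ranker enforcing a cap on consecutive
# posts from one account. Post dicts, cap_consecutive(), and max_run are
# hypothetical, not taken from Meta's systems.
def cap_consecutive(ranked, max_run=3):
    """Re-rank an engagement-ordered feed so no account supplies more than
    max_run posts in a row; posts that would extend a run are deferred."""
    out, deferred = [], []

    def tail_run(account):
        # Length of the run of `account` at the end of `out`.
        n = 0
        for post in reversed(out):
            if post["account"] != account:
                break
            n += 1
        return n

    def flush():
        # Place deferred posts as soon as they no longer extend a long run.
        placed = True
        while placed:
            placed = False
            for i, post in enumerate(deferred):
                if tail_run(post["account"]) < max_run:
                    out.append(deferred.pop(i))
                    placed = True
                    break

    for post in ranked:
        if tail_run(post["account"]) < max_run:
            out.append(post)
            flush()
        else:
            deferred.append(post)
    out.extend(deferred)  # leftovers still appear, just later in the feed
    return out

feed = [{"account": "A", "id": i} for i in range(5)] + [{"account": "B", "id": 5}]
print([p["account"] for p in cap_consecutive(feed)])
# ['A', 'A', 'A', 'B', 'A', 'A'] — the rule breaks up the run, but account A
# still supplies five of the six posts.
```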
Dusollier: how do we deal with two trusted flaggers, one of which is a conservative Catholic group and the other an LGBTQ+ rights organization? You can trust them to represent their own positions, but what does that mean?
Keller: they have to be gov’t vetted, and they can be kicked out if they submit too many invalid claims—they’re supposed to be flagging genuine violations of the platform’s TOS. But different gov’ts might approve different entities, which will create conflicts. Platforms don’t have to honor the flags. But when it goes to litigation, national courts will interpret TOS in light of fundamental rights, which will lead to potential divergence.
Senftleben: We also don’t have trusted flaggers to support
content as permissible.
Keller: risk profiles don’t match statuses in the system: Wikimedia is a VLOP, but 4chan and 8chan are not.
Griffin: who’s going to be doing this trusted flagging? It’s
not something that scales very well. Assumes that civil society will be sitting
there all day. What is the funding model? The answer is obvious in ©, but not elsewhere.
It’s worse than that, since in © you don’t need to be a
trusted flagger b/c the © agreements are broader.
Schwemer: risks of rubber-stamping flaggers’ flags. But we might be able to get more insight from transparency. National differences in Europe could be very powerful in who is designated as a trusted flagger; potential cross-border effects.
Dusollier: entitled flaggers v. trusted flaggers—© owners
are entitled to flag their content claims; is that the same as trusted?
The DSA was drafted with security agencies/police forces in mind as trusted flaggers—that was clearly the plan.
Hughes: will law enforcement agencies want to have to publish what they did and what happened, as contemplated for trusted flaggers? They would rather have a side agreement w/Meta. Both pro- and anti-gay forces might be able to fundraise to participate in flagging, so maybe it’s a successful mechanism for generating flags. And putting out a report every year is a positive for them—it shows what funders’ money is funding.
Leistner: concerned about this—it’s modeled on the existence of an active civil society w/funding, which doesn’t exist in many member states, where there is no culture of funding proto-public functions with private $ (the US has many nonprofits because it has low taxes and low public provision of goods). These may be pretty strange groups that have active members. Worst-case scenario: Orban finances a trusted flagger that floods the European market with flags that are required to be prioritized, and flaggers can flag across nations.
Hughes: does have to be illegal content.
Griffin: good point that especially many smaller EU states
don’t have that kind of civil society: France and Germany are very different
from Malta.
Keller: nobody knows how often flaggers accurately identify hate speech, but every current transparency report treats complying with more notices as improvement. We don’t know how many notices are accurate v. inaccurate.
Quintais: It’s worse b/c of the broad definition of illegal content. The definition of trusted flagger is about competence and expertise—you can have competence and expertise without sharing values. If LGBTQ+ content is illegal in one country, it’s not clear how to prevent a trusted flagger from receiving priority throughout the EU.
Schwemer: There can also be orders to remove, though they
have to be territorially limited to what’s necessary to achieve the objective.
Those are not voluntary.
Griffin: Using Poland/Hungary as examples is not fully
explanatory. France has a lot of Islamophobic rules and isn’t getting the same
pushback.