Resolving Online Content Infringement Disputes with the Use of AI Technology
Faye Wang
European perspective: Corporations manage notice and
takedown, but should a government authority be involved? Large platforms use
voluntary automated filtering/content moderation to minimize legal risk.
Previous EU legislation was tech-neutral and did not impose a general
monitoring obligation. New © Directive does require prevention of future
uploads, an apparent contradiction of the prohibition on general monitoring. Poland
challenged; CJEU dismissed challenge; new guidance from 2021: Art. 17 should
not be transposed in a way that leads to a general monitoring obligation, but
the guidance does not explain how this is supposed to be done if they are also
supposed to use automated systems and prevent future uploads. CJEU then held
that refraining from putting into place appropriate automated monitoring/filter
systems was a factor in whether they communicated to the public. But there is
also a proposed AI Act that requires the intervention of a human person and
assessment of AI systems before adoption so that they won't cause harm.
Remaining questions: can AI assisted content
recognition/moderation accurately monitor or determine infringement without
human intervention? Her conclusion: no. Misrecognition of audio and video may
occur. And tech has limited ability to understand context of fair use/fair dealing.
Possible solutions to reduce cost/human hours: AI assistance
should be designed to provide better understanding of appropriate context. AI
can do preliminary recognition. Then:
Automated computing investigation process (not AI). Expert manual review
for notice and takedown. If failed, online dispute resolution/court litigation.
Good for finding a substantial part, but not appropriate for
shorter excerpts. In marginal cases, human needs to be involved in reviewing, but
they need to be properly trained to identify commentary.
Appropriation of Data-driven Persona
Zahra Takhshid
Should extend privacy to cover data about us. Background in
the four torts: use the appropriation tort: one who appropriates to his own use
or benefit the name or likeness of another is subject to liability to the other
for invasion of his privacy. Expanded to other characteristics such as voice,
surroundings/lookalikes (White v. Samsung).
Data privacy as the new frontier. May help address
TransUnion case: finding a concrete injury rooted in the common law. What type
of data? PII, treated as a standard and not a rule/contextually. Commerciality
requirement? Quotes SD Fla: “the mere act of misappropriating the p’s identity
may be sufficient evidence of commercial value to survive even a motion for
summary judgment.” Consent: we’re transferring our rights all the time by
contract. But scope can be challenged.
Q: CCPA—is there a need for this in California with its comprehensive
regulation?
A: Yes, because we continue to rely on common law privacy
torts.
RT: Thinking about Dobbs and the focus on private benefit in
proposals like yours: is this supposed to be a right against the government? If
so, the regulatory state is under threat in ways you might like for Dobbs but
not so much for taxes. If not, whether categorically or because the government’s
interests essentially always count as substantial enough to override the
privacy right, that’s a reconfiguration that speaks to a profound change in the
relationship of privacy to protecting the individual against the government
versus protecting the individual against non-government actors, which concerns me deeply. If you don’t
change contract law, you may not have bought much with this reconfiguration
either.
A: Should be against anybody including the government. [Then
I reiterate my concerns about the regulatory state: how should we handle the
gov’t’s interest? Balancing isn’t part of the tort, though it comes in for
First Amendment-inflected defenses. Does the gov’t have to show justification?
Why won’t it always win?]
Q: we consent all the time. How does that work in your
model?
A: we need to help the common law grow and not die.
Jennifer Rothman: May want to spend time defining what you
mean by privacy to deal with the gov’t q. If your project is just about the
private law appropriation tort, that can be made clear. [Can it be asserted
against the gov’t?] The real challenge is the First Amendment. Her own position
is that ROP can protect privacy in the way you’re proposing, but the First
Amendment has to be considered: how much we can tolerate this protection as
against speech interest in public data. Need to engage that. Not convinced that
you need a commerciality restriction; the common law tort doesn’t have that. [I
think that the gov’t’s interest in collecting taxes is not a First Amendment
interest; perhaps one can take the position that you aren’t entitled to not pay
taxes so that doesn’t count as an “advantage” appropriated by the government,
but that formulation strikes me as extra manipulable—and especially since data such
as “what I bought from Amazon” are co-created with Amazon, I wonder if you’re
entitled to that data as against Amazon under the same logic.]
Zahr Said: Have you developed a theory of abuse of the
right? Copyright trolling, TM trolling, what would systematic abuse of this
right look like? Sometimes that can help you build in limitations/appropriate scope
of the rights.
The Right of Publicity: A New Framework for Regulating
Facial Recognition
Jason Schultz
Sociotechnical change and the ROP: moving away from privacy
framing, even though FR does have real problems of surveillance and
discrimination. Think about when new tech comes along with ability to mass
appropriate identities: law evolves, especially w/r/t visual image and
identity, going up to videogames and films where people’s identities are reconstituted
after death.
Visual identity is central to the existing case law, so
there’s no innovation required in subject matter. Setting gov’t and university
research aside, there is also clear commercial benefit, e.g., Clearview. Clearview
also scraped and didn’t get consent; consent can also be very contextual, as
the No Doubt/Activision case indicates.
What are the damages from being used in AI? Commercial value
= damage; control; dignity.
Questions about copyright preemption: the distinction is
balancing with innovation policy; you don’t have that in ROP cases because the
ROP doesn’t care about innovation. Zacchini might be wrong, but if you take all
of someone’s identity that’s too much for First Amendment purposes. Baseball
stats cases=you aren’t taking everything about the person; taking the image
goes too far.
Q: Universities have ethical rules about human subjects;
would you change the framing under which they’d go about research?
A: could distinguish university research b/c it’s not
products and services. Under Common Rule, publicly available info is ok, so
ethical rules don’t restrain anything there. But the consent questions are real.
RT: (1) Universities benefit from research. If you stick
with the common law, universities are 100% covered. You can say either
universities shouldn’t be able to do this or that there should be a limit for
this type of claim, but you need to say something. (2) It’s not persuasive to
say that the ROP is better because it doesn’t engage in balancing with
innovation policy. [I.e., that it ignores important social concerns.] Copyright
didn’t always balance with tech innovation policy; it did so when it started to
have big conflicts with it. If this is a common law right, you need to explain
why it shouldn’t adapt to new conflicts by balancing.
Rothman: need to develop a bit further the “why.” You’re
trying to avoid that by saying you’re just dealing w/commercial appropriation,
but the traditional tort is broader; even if it was limited to commerciality
you still need a normative justification for these claims, not least b/c you
need to do so to figure out the First Amendment analysis. You can likely make
out a prima facie case; the harder part is the 1A. Is the advantage/benefit
coming from an individual’s identity or from the corpus? That’s a different
type of use in the prototypical celebrity or even ordinary person in an ad.
That gets to the why of the claim.
A: focusing on dignity and control in the paper. Argument is
that the individuals matter because each individual makes the dataset better.
Q: I’m irritated about lots of things corporations do: a
traditional phone book can expose information. Why this?
A: Individual rights focus: some people can object even if
others don’t. How do you distinguish between “too much” appropriation and fragmented
unprotectable data, which is why he’s thinking about the sports cases. There’s
something about the image that makes the courts confident there’s a tort there,
making it too much. [That ignores a bunch of the news reporting cases, but
newsworthiness might be the distinction there.]
AI as Inventor in the Cambridge Handbook of Artificial
Intelligence
Chris Mammen
Fed Cir unambiguously concluded that inventors must be
human; UKIPO, EPO have agreed, but the conversation is not done globally/at
the marginal cases. One success in South Africa for an AI inventor, but there’s
no statutory definition of inventor there and there is no substantive
examination of patents, so TBD if/when litigated.
Humanists on one side, industrial policy advocates on the
other: both can claim some justification in past statements about patent
policy. Practical reality: AI is lab equipment; assisting a human researcher.
Sometimes lab assistants get named on a patent but sometimes not. Not just a
formal Q of whether the statute requires a human; the act of invention itself
requires conception, which requires a theory of mind; diligence, which requires
a theory of work; and reduction to practice, which requires ability to build
the invention or write the patent application. We wave our hands about all of those
but we need to address each one for the “AI inventor.”
UK decision is under review: primary holding says an
inventor has to be a human. But UK application has some formalities that
narrowed the Q. (1) requirement to list inventor to best of applicant’s
knowledge. There may be room to say “I don’t know, but I have a right to apply
for this patent.” Consider accession doctrines as well.
In response to Q: If we’re going to have a rights regime for
AI, we also need a responsibility regime. We’re a long way from the point where
I can depose an AI about its claim of inventorship.