Shira Perlmutter
AI detection and AI generation of content: the CO has a role to play in applications for registration and as an advisor to Congress/exec branch on © law and policy.
Use of tech measures to detect works online: we published a
long report on 512 concluding that it has become unbalanced and should be
updated, including definitions for standard technical measures (STMs). Service
providers have an obligation to accommodate and not interfere w/STMs; the measures hadn't yet been figured out, and providers wouldn't have to use them but also couldn't interfere w/their use. 512(i) provided that they had to be developed
in a fair, multistakeholder process and had to be available fairly and w/o
undue burden on providers. But not a single tech has been designated as an STM;
we could make the vision a reality. [Ah yes, get the nerds on it.] We conducted
a public study on STMs and also panels on voluntary tech measures. We
recommended three changes to the statute: clarify that the terms “broad consensus” and “multi-industry” require substantial consensus, not unanimity, and only w/r/t the industries in question; replace “developed by a broad consensus” w/ “designated by a broad consensus,” so measures qualify even if initially developed by a narrow group; and set out factors for determining whether there are substantial costs or burdens on service providers. We didn’t recommend a gov’t designation process or repealing
512(i) entirely. Given the complexities of evolving tech, we weren’t convinced a gov’t process would work, but an improved consensus-based framework could play a role in curbing infringement; still an open question whether everyone can be brought to the table.
On voluntary measures, we received 6,000 public comments. No real surprises. Diversity of the online marketplace has generated an increasingly wide variety of measures, precluding a one-size-fits-all approach. Effective tech measures share:
inclusivity; collaboration; communication; and transparency. Many participants
expressed frustration when these elements were missing. Future initiatives
would benefit from ensuring these attributes. As in EU discussions, other areas
of divergence related to resources and access: resource-intensive measures remain problematic for small rights owners and small services. Small service
providers, including startups, don’t have capital to invest in expensive
technical measures. But others respond that limits on size and resources shouldn’t excuse failing to put protections in place if the platform is distributing content to the public. [Sadly, they don’t mean “if the platform is distributing
infringing content.”] Access was also controversial w/individual rights holders
who were frustrated at not being included in discussions; but there is also a
risk of intentional or unintentional misuse.
NFTs: we’re conducting a joint study w/PTO on IP issues.
Participants in the roundtable didn’t ask for statutory change specific to ©; automated royalties for token resale were exciting. But it’s unclear whether they’re enforceable downstream, particularly if the token moves b/t marketplaces. And a purported transfer of © can be difficult to sort out if there are off-chain terms and conditions.
Sending a takedown notice to a marketplace can block sale, but doesn’t get rid
of the actual putatively infringing content. And there are jurisdictional
challenges and challenges in identifying source of content.
Current hottest area is AI. Does the Act restrict authorship to humans? The Fed Cir said inventors had to be human; Thaler is arguing that © should recognize AI authors. We don’t think we acted arbitrarily and capriciously in holding that authorship requires a human. What about works produced through a combination of human and machine?
What type of human contribution is enough? Comic book case: we registered the
work based on an application identifying a single human author that didn’t
disclose use of Midjourney. When registrant claimed publicly that it was
AI-generated, we asked for more information, and determined that, b/c of the
way the tech works, the individual images lacked sufficient human authorship,
but issued a more limited registration that covered the human-authored
elements: text and selection/coordination/arrangement of the images. Given the
increasing number of applications for AI works, additional guidance was needed,
so we issued a clarifying statement. Our goal was to help people avoid problems
w/validity of registration. It affirms the human authorship requirement and instructs applicants of their duty to disclose the inclusion of significant AI-generated content and provide a brief explanation of the human author’s contributions. Further questions will be addressed on a case-by-case basis since new hypotheticals keep coming
up. This is not the end of our guidance.
How do we distinguish b/t use of AI as a tool, like Photoshop,
and AI-generated content? Continued work!
AI models trained on datasets of © images scraped from websites: is that ok? Getty has sued Stability AI in Delaware and the UK alleging that it
scraped 12 million images to train Stable Diffusion; also a proposed class
action. Key question is whether exceptions apply—text and data mining exceptions
in many countries, and fair use in US. Result will likely depend on nature of
output and effect on market; courts may view ingestion for research differently
than ingestion for producing content that competes in the market. There’s a spectrum: output that is substantially similar; output in a similar style; output that appeals to the same audience. Courts may treat these differently. Who would be held responsible for any
infringement? Owner of computer, programmer, prompter?
We’re not done even if we clarify existing law: need to
consider whether existing law should be changed. Would it promote progress, as Thaler says, to grant rights in AI-generated content? What about the fact that
the Clause specifies granting rights to Authors as the means to promote
progress? It’s hard to incentivize a machine; do we need to incentivize the
machine’s owners more than they already are? Can “authors” include machines?
Can some other constitutional clause form the basis for sui generis rights? To
the extent that fair use or another exception applies, should there be accommodation for the human creators of the ingested works, if not authorization then attribution or remuneration? Certain services have established voluntary
remuneration of some kind. Questions are always easier than answers, but we’ll
be looking at all of these. We’ve launched a broader AI initiative to address the scope of © and also the legal status/implications of ingesting © works in training AI, holding listening sessions with creators, lawyers, and technologists. We’ll hold informational webinars over the summer and publish a notice of inquiry afterwards, intended to inform a report or series of reports analyzing the issues.