Friday, February 02, 2024

WIPIP Session 1: AI

Nikola Datzov, Can AI Keep a (Trade) Secret?

We’ve funneled IP protection for AI-generated inventions/information into trade secrecy, since patent and copyright are reserved for human inventors/authors. But it’s narrow protection, and it’s the default only because there are no other choices.

How can we trust AI-generated trade secrets? Concerns about bias, discrimination, unfair competition, antitrust. Disclosure to the government has risks for the trade secret owner; Elizabeth Rowe notes that the risk falls on the owner. Will companies rely on such limited protection? Is there sufficient incentive for AI-generated innovations?


Disclosure is not the same thing as transparency: having the trade secret doesn’t mean understanding it—it’s just turning a black box over to the government.


Instead, he proposes trust but verify: registration to certify compliance with regulations, including limited government inspection, similar to source code review in litigation. Enforced w/penalties, including litigation/whistleblower protections.


Lisa Macklem, Harnessing the Robot in the Room

Generative AI could be a boon for Open Educational Resources. OER should be globally available, so we need to consider more than US, EU, and UK guidance. Trying to come up with best practices. The international framework does consider education. Transparency requirements: disclosing that content was generated by AI, and designing to prevent it from generating illegal content.


Don’t use infringing data; use databases to which you have legitimate access; edit AI-generated work for accuracy and to make sure not too much of the original is used. License when absolutely necessary, but watch for restrictions on purpose or geography.


In response to Irene Calboli’s suggestion that this didn’t seem like it would be less resource-intensive: there’s a difference between the effort required to assemble materials and the effort required to check AI output for accuracy.


Victoria Schwartz, AI Virtual Influencers

ROP covers the issue for real influencers. Virtual influencer names can be trademarks; actual images/AV works are copyright-protected as long as they’re human-created. Some VIs can likely receive copyright protection as characters, though it’s not clear what the “work” is (a body of social media posts?). It’s really a spectrum: from an unfiltered person with no makeup in photos, to carefully posed in makeup, to Photoshop and filters, to avatar, to “human created” using CGI, to fully AI-created. The claim is that we’re at the end of the spectrum; we may be near that, but not quite there today (cf. the George Carlin brouhaha).


If © is difficult, what about ROP? Lots of people on social media claim to be AI-generated and complain about “stealing my pics.” McCarthy and INTA say ROP is for humans; Nimmer in 1954 suggested that animals, inanimate objects, and business and other institutions could be endowed with “publicity values,” so there should be publicity rights for them. State laws tend to specify living or deceased. California common law doesn’t specify that a “plaintiff” has to be human. Most caselaw on character ROP asks whether an actor playing the character gets a ROP claim without owning the ©; not on point. © is strong enough that it’s usually superior to ROP.


Maybe this is an issue for © preemption.


Eric Goldman: animals and buildings don’t have access to the courts; and there are cases saying no ROP for corporations. (I would also note that the common law clearly doesn’t apply to deceased persons, which suggests something about the meaning of “plaintiff.”)


Tyler Ochoa: why wouldn’t TM law be more valuable? AI generation has nothing to do with TM protectability, and a TM need never expire, unlike the ROP (in most circumstances); the mark could cover entertainment or whatever services they offer.


Zahr Said: Precision about what we’re trying to protect is useful! Is it the money, the music, something else? Is there an equitable estoppel element if there’s something deceptive going on? If the “AI-generated” claim is inaccurate/puffery, should that bother us?


A: disclosure model is already popular for influencers.


Q: will it matter if more polities grant “citizenship” to virtual AIs? Saudi Arabia already did it.


Laura Heymann: Why not start w/potential harms, and then map them onto rights/remedies, instead of starting w/ the idea that there is something to be protected?


A: good idea: we don’t think of ROP as protecting consumers.
