The First Battles in the Legal War Over the Future of AI Have Begun
Tuesday, November 21st, 2023
As of this writing, Sam Altman is still out as CEO of OpenAI, the maker of ChatGPT. The Economist depicts this firing as a battle in the war between AI “boomers” and “doomers.”
The doomers believe AI threatens humanity and must be constrained. The boomers believe this threat is overstated and that AI progress should be turbocharged to fuel productivity increases.
The battle for control of OpenAI is an odd one because of the quirky nature of its corporate governance. The parent company is a nonprofit that owns a for-profit subsidiary. Accordingly, the nonprofit board doesn’t answer to the investors in the for-profit subsidiary. Thus, the board can act for societal-benefit reasons.
While this drama plays out, the real war over AI’s future is being fought in courtrooms. This war is over whether copyright and the right of publicity will kill AI. This war has caused some AI makers to offer legal coverage for some AI products.
There are two copyright threats to AI. The biggest threat concerns training data. AI systems need massive amounts of training data to tune themselves sufficiently to do their magic. It is widely believed that many large AI systems, such as ChatGPT, have been trained on material copied off the Internet without the permission of the copyright owners.
Various content creators have filed lawsuits against the makers of AI systems, contending such copying without their permission is copyright infringement. Beyond that, the same content creators sometimes contend that the AI output is too similar to their creative works and, thus, is also copyright infringement.
Beyond these copyright claims, some celebrity writers and performers claim AI systems violate their publicity rights. The right of publicity is the right to control the use of your name, image, and likeness for a commercial purpose. Many AI systems will generate content in the style of a famous author or artist.
Numerous lawsuits have been filed attacking AI systems. The plaintiffs include famous authors such as John Grisham and George R. R. Martin and performers such as actress Sarah Silverman. Also, Getty Images sued because it fears image-generating AIs will kill its stock-photo business.
These lawsuits are just reaching the stage where courts are making preliminary rulings. So far, no court has granted definitive victory to either side.
In my opinion, the AI makers have the better argument. I believe the courts will ultimately hold that it’s fair use to train AI systems on other people’s copyright-protected material found on the Internet without their permission.
I base this on how AI systems work. These systems don’t store and regurgitate copies of the material they ingest during training. They merely analyze those copies to map relationships between words or other data, and those relationships become the “weights” in their neural networks.
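To illustrate the concept, here is a deliberately oversimplified, hypothetical sketch in Python (not any AI vendor’s actual code): a toy “training” routine reads sentences, tallies which words tend to follow which, converts those tallies into numerical weights, and keeps only the weights. The text itself isn’t stored in the resulting model.

```python
# Deliberately simplified, hypothetical sketch -- not how any commercial AI
# system is actually built. "Training" here means tallying which words tend
# to follow which, then keeping only numerical weights, not the sentences.
from collections import defaultdict

def train_weights(corpus_sentences):
    # counts[w1][w2] tallies how often w2 follows w1 across the corpus
    counts = defaultdict(lambda: defaultdict(float))
    for sentence in corpus_sentences:
        words = sentence.lower().split()
        for w1, w2 in zip(words, words[1:]):
            counts[w1][w2] += 1.0
    # Convert tallies into probabilities; the original text is discarded
    model = {}
    for w1, followers in counts.items():
        total = sum(followers.values())
        model[w1] = {w2: count / total for w2, count in followers.items()}
    return model  # only numbers describing word relationships remain

if __name__ == "__main__":
    corpus = ["the court ruled on fair use", "the court heard the case"]
    model = train_weights(corpus)
    # Prints weights such as {'court': 0.67, 'case': 0.33} -- not the sentences
    print(model["the"])
```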
As for output, except in rare cases, the AI output in response to a prompt shouldn’t be so similar to any single piece of training data as to constitute copyright infringement.
Still, the multiplicity of these lawsuits and the possibility that some might succeed have driven most major AI players to begin offering copyright-infringement protection to their customers.
So far, the following companies have announced some copyright-infringement coverage for some of their AI services: OpenAI (the maker of ChatGPT and DALL-E), Google, Microsoft (which reportedly owns 49% of OpenAI’s for-profit subsidiary), IBM, Amazon, Adobe, Getty Images, and Shutterstock. But some important players are not yet providing coverage, such as Stability AI, Midjourney, and Anthropic.
Is the copyright-infringement coverage from the major AI players meaningful and reliable? It’s too soon to say.
The coverage offered so far is limited to specific corporate AI products and is not provided for widely used consumer services. For example, neither the consumer version of ChatGPT nor Google Bard is covered.
Also, in some cases, such as with OpenAI, it’s unclear whether a copyright infringement claim over training data is covered.
In addition, some companies have not publicly released the contractual language of the coverage.
Finally, some coverage promises contain qualifiers that may be impractical to meet.
The bottom line for such copyright coverage is that you have to do your homework. You need your AI vendor to tell you which specific AI products are covered. You also should have an attorney review the contractual coverage language and give you an opinion on how protective it is.
Overall, the time has come to watch the AI legal battlefield, for as George R. R. Martin famously wrote, “Winter is coming.”
Written on November 21, 2023
by John B. Farmer
© 2023 Leading-Edge Law Group, PLC. All rights reserved.