Artists celebrate AI copyright infringement case moving forward


Visual artists who joined together in a class action lawsuit against some of the most popular AI image and video generation companies are celebrating today after a judge ruled their copyright infringement case can move forward toward discovery.

Disclosure: VentureBeat regularly uses AI art generators to create article artwork, including some named in this case.

The case, filed under docket number 3:23-cv-00201-WHO, was originally brought in January 2023. It has since been amended several times, and parts of it have been struck down, including some today.


Which artists are involved?

Artists Sarah Andersen, Kelly McKernan, Karla Ortiz, Hawke Southworth, Grzegorz Rutkowski, Gregory Manchess, Gerald Brom, Jingna Zhang, Julia Kaye, and Adam Ellis have accused Midjourney, Runway, Stability AI, and DeviantArt, on behalf of a class of artists, of copying their work by offering AI image generator products built on the open-source Stable Diffusion AI model. Runway and Stability AI collaborated on that model, which the artists allege was trained on their copyrighted works in violation of the law.

What the judge ruled today

While Judge William H. Orrick of the U.S. District Court for the Northern District of California, which covers San Francisco and the heart of the generative AI boom, did not rule on the final outcome of the case, he wrote in his decision issued today that "the allegations of induced infringement are sufficient" for the case to move forward into discovery, a phase that could allow the artists' lawyers to examine internal documents from the AI image generator companies, revealing to the world more details about their training datasets, mechanisms, and inner workings.

“This is a case where plaintiffs allege that Stable Diffusion is built to a significant extent on copyrighted works and that the way the product operates necessarily invokes copies or protected elements of those works,” Orrick’s decision states. “Whether true and whether the result of a glitch (as Stability contends) or by design (plaintiffs’ contention) will be tested at a later date. The allegations of induced infringement are sufficient.”

Artists react with applause

“The judge is allowing our copyright claims through & now we get to find out allll the things these companies don’t want us to know in Discovery,” wrote one of the artists filing the suit, Kelly McKernan, on her account on the social network X. “This is a HUGE win for us. I’m SO proud of our incredible team of lawyers and fellow plaintiffs!”

“Not only do we proceed on our copyright claims, this order also means companies who utilize SD [Stable Diffusion] models for and/or LAION like datasets could now be liable for copyright infringement violations, amongst other violations,” wrote another plaintiff artist in the case, Karla Ortiz, on her X account.

Stable Diffusion was allegedly trained on LAION-5B, a dataset referencing more than 5 billion images scraped from across the web by researchers and posted online back in 2022.

However, as the case itself notes, that database only contained URLs or links to the images and text descriptions, meaning that the AI companies would have had to separately go and scrape or screenshot copies of the images to train Stable Diffusion or other derivative AI model products.
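To make that distinction concrete, here is a minimal sketch in Python, using hypothetical example records rather than actual LAION entries, of what training on a URL-only dataset implies in practice: the dataset supplies only links and captions, and the image bytes must be downloaded separately before they can be used.

```python
# A minimal sketch, assuming hypothetical LAION-style records: each entry holds
# only an image URL and a text caption, no pixels. Anyone training a model on
# such a dataset must fetch each image separately, as described above.
import requests

records = [
    # Hypothetical entries for illustration; the real LAION-5B has billions of rows.
    {"url": "https://example.com/artwork1.jpg", "caption": "a watercolor landscape"},
    {"url": "https://example.com/artwork2.jpg", "caption": "a portrait in oil"},
]

for record in records:
    try:
        response = requests.get(record["url"], timeout=10)
        response.raise_for_status()
        # Only at this point does a local copy of the image actually exist.
        image_bytes = response.content
        print(f"fetched {len(image_bytes)} bytes for caption: {record['caption']}")
    except requests.RequestException as error:
        print(f"skipped {record['url']}: {error}")
```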

A silver lining for the AI companies?

Orrick did hand the AI image generator companies a victory by dismissing with prejudice the claims the artists filed against them under the Digital Millennium Copyright Act of 1998, which prohibits companies from offering products designed to circumvent controls on copyrighted materials distributed online and through software (controls also known as "digital rights management," or DRM).

Midjourney cited older court cases "addressing jewelry, wooden cutouts, and keychains," which found that resemblances between those products and prior artists' works did not constitute copyright infringement because the shared elements were "functional," that is, necessary in order to depict certain features of real life or of the subject the artist was trying to render, regardless of their similarity to prior works.

The artists claimed that "Stable Diffusion models use 'CLIP-guided diffusion'" that relies on prompts, including artists' names, to generate an image.

CLIP, an acronym for “Contrastive Language-Image Pre-training,” is a neural network and AI training technique developed by OpenAI back in 2021, more than a year before ChatGPT was unleashed on the world, which can identify objects in images and label them with natural language text captions — greatly aiding in compiling a dataset for training a new AI model such as Stable Diffusion.
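For readers curious what that matching capability looks like in practice, here is a minimal sketch assuming the open-source Hugging Face transformers library and OpenAI's publicly released "openai/clip-vit-base-patch32" checkpoint; the image URL and captions below are placeholders for illustration, not material from the case. It shows CLIP scoring how well candidate text captions describe an image, the pairing of pictures with language that helps assemble training datasets.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# public "openai/clip-vit-base-patch32" checkpoint. It scores how well each
# candidate caption matches an image, the matching capability described above.
from PIL import Image
import requests
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image URL (a public COCO sample), not an image from the case.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

captions = ["a photo of two cats", "a photo of a dog", "an oil painting of a forest"]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
# Higher probability means CLIP considers that caption a better description of the image.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(captions, probs[0].tolist())))
```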

“The CLIP model, plaintiffs assert, works as a trade dress database that can recall and recreate the elements of each artist’s trade dress,” writes Orrick in a section of the ruling about Midjourney, later stating: “the combination of identified elements and images, when considered with plaintiffs’ allegations regarding how the CLIP model works as a trade dress database, and Midjourney’s use of plaintiffs’ names in its Midjourney Name List and showcase, provide sufficient description and plausibility for plaintiffs’ trade dress claim.”

In other words: the fact that Midjourney used artists' names, as well as labeled elements of their works, to train its model may constitute copyright infringement.

But, as I've argued before, from my perspective as a journalist rather than a copyright lawyer or expert on the subject, it's already possible and legally permissible for me to commission a human artist to create a new work in the style of another artist's copyrighted work, which would seem to undercut the plaintiffs' claims.

We'll see how well the AI art generators can defend their training practices and model outputs as the case moves forward.




