
An ongoing lawsuit by artists against AI’s biggest players highlights how copyright law can’t keep pace with AI.


By Luke Plunkett

Artificial intelligence is rapidly transforming industries and communities around the world. One of the first groups to feel its impact—and to produce some of its fiercest critics—has been artists, many of whom have spent the past year campaigning against a technology they feel diminishes their work and threatens their livelihoods. 

It’s a campaign that has been waged largely on ethical grounds, in part because that’s the only place it could be waged. The broad argument—that AI has been trained on human art, and can produce work suspiciously similar to specific human artists—has had little legal precedent to fall back on, and the tech companies promoting the products were never going to stop doing it just because artists asked them to. 

As it stands, this is still the case. While some initial rulings have been made—including the fact that wholly AI-produced work can’t be copyrighted—broader questions are still being asked about the legality of building and operating AI platforms that have been trained on human work without credit or compensation. This question is the basis of a class action lawsuit filed earlier this year in California that has gained much attention. 

The suit, brought by three artists (Sarah Andersen, Kelly McKernan, and Karla Ortiz), accuses Stability AI’s Stable Diffusion, Midjourney, and gallery site DeviantArt of being “21st-century collage tools that violate the rights of millions of artists,” and argues that, in doing so, they “will substantially negatively impact the market for the work of plaintiffs and the class.”

Because it’s one of the first high-profile cases to deal specifically with AI-created imagery—and, in particular, to test how debates over a machine mimicking an artist’s style hold up against existing copyright law—it has been seen as something of a rubber-meets-the-road moment by both sides of the argument. And while a verdict has yet to be reached, Judge William Orrick has indicated that both sides’ arguments have merit. 

Orrick recently said, “I don’t think the claim regarding output images is plausible at the moment because there’s no substantial similarity” between the original artwork supplied to the court and the derivative AI-created images. At the same time, however, he is allowing Andersen’s claims against Stability AI to move forward, and permitting the other two artists to amend their complaints so they hold up better in court—a sign that at least some aspects of existing copyright law are workable in these cases.

The matter is complicated by the fact that current copyright law was established when the question of who owned what was far easier to answer. Creation and ownership in the age of AI, where influence and output are more nebulous, are much thornier questions, and will likely require much greater clarity under the law in the months and years to come.

While the California case pits individual artists against what are now billion-dollar AI companies, those artists’ interests in combating AI now align (in some ways, at least) with the unlikeliest of allies: other billion-dollar companies, such as Disney, which already threatened legal action against an AI provider back in 2022 and will surely be keeping an eye on these platforms. At stake is a vast catalog of intellectual property whose likeness or visual DNA can be easily replicated by AI.

“Compared to a year ago, I think we’re seeing a broader acceptance of the idea that there are legal and ethical problems with enormous tech companies training their AIs on human-created work without consent, credit, or compensation,” says attorney and programmer Matthew Butterick, who, aside from representing the three artists in this case, is also representing programmers in another case against GitHub, Microsoft, and OpenAI. “Ultimately, because generative AI depends on human-created data, I think these companies will prefer to cooperate with the creative industries,” says Butterick. “Because if the AI companies bankrupt human creators, they will soon bankrupt themselves.”

Adding to the confusion is the precedent the artists could set by winning. As attorney Kit Walsh at the Electronic Frontier Foundation has said, a ruling in favor of artists in this case may close one Pandora’s Box and open another. “The theory of [this] class-action suit is extremely dangerous for artists,” she wrote in a post from earlier this year. “If the plaintiffs convince the court that you’ve created a derivative work if you incorporate any aspect of someone else’s art in your own work, even if the end result isn’t substantially similar, then something as common as copying the way your favorite artist draws eyes could put you in legal jeopardy.”


“Done right, copyright law is supposed to encourage new creativity,” Walsh adds. “Stretching it to outlaw tools like AI image generators—or to effectively put them in the exclusive hands of powerful economic actors who already use that economic muscle to squeeze creators—would have the opposite effect.”

One such “powerful economic actor” already taking action is Getty Images, which has also taken Stability AI to court, accusing the company’s Stable Diffusion platform of copyright and trademark violations amounting to “brazen infringement of Getty Images’ intellectual property on a staggering scale.” Getty alleges that Stable Diffusion lifted more than 12 million images from its paid database without “permission or compensation” as part of an effort to “build a competing business.”

The struggle facing both the artists’ and Getty’s cases is that these AI platforms aren’t exactly recreating their work. They may have learned from original artwork without permission, and they can output images of a similar style and quality, but in the eyes of current copyright law that isn’t the same thing as outright duplication. 

As Artnews pointed out last month, “The issue is that, even as these models appear to credibly copy existing artists’ styles, ‘style’ is not protected under existing copyright laws, leaving a kind of loophole that AI image-generators can exploit to their benefit.”

How a court can wade into debates about style and render a defining judgment is anyone’s guess, and until (or unless) one does, the legal battle over AI and its output is unlikely to be settled by any single case anytime soon. 

Luke Plunkett is a former Senior Editor for Kotaku who now consults and freelances in the video game industry.


