
Protecting Artists' Copyrights in the AI Era: Nightshade AI and the Battle Against AI Generated Images

VIVE POST-WAVE Team • Feb. 9, 2024

4-minute read

In an era of rapidly advancing AI technology, artists face unprecedented challenges. Their copyrighted works are being used without consent to train AI models, infringing on their rights and threatening their livelihoods. In 2022, the art community launched a movement on ArtStation titled "NO TO AI GENERATED IMAGES" to oppose AI-generated imagery, and similar movements have continued to expand.

Furthermore, some artists have taken legal action to defend their rights. Earlier this year, it was revealed that Midjourney had used many artworks without authorization as training data for its AI models. The list of affected artists was made public and is reportedly now serving as evidence in court. Still, artists feel at a disadvantage in this David-versus-Goliath battle.

The "NO TO AI GENERATED IMAGES" movement on ArtStation.

A New Chapter with Nightshade and Glaze

As artists search for a way out of this unequal fight, a ray of hope comes from a research team at the University of Chicago. Led by Computer Science Professor Ben Zhao, the team is dedicated to using technological prowess to protect artists' rights in the fast-evolving field of artificial intelligence.

Following the release of "Glaze" in 2023, a tool designed to protect artworks from style imitation, the team has developed a more proactive tool called "Nightshade." Nightshade aims to fight AI with AI, casting a sort of "curse" on images so that AI models that ingest them produce incorrect results. Together, these two tools offer artists new ways to defend their copyrights in the digital age and reassert their autonomy in the face of technological challenges.

How Nightshade and Glaze Work

"Glaze" works by altering the pixels in artworks, making it difficult for AI models to interpret the original style. This change is so subtle that it's almost imperceptible to the human eye, yet it effectively prevents AI from learning and replicating the style.

"Nightshade AI" operates similarly to "Glaze," but instead of preventing style imitation, it acts as an offensive tool designed to distort AI models' recognition of features. To the human eye, images processed by "Nightshade AI" look nearly identical to the originals, but AI models will recognize entirely different content. For example, what looks like a cow lying in a field to a human might be interpreted as a large handbag by the AI. After repeated "poisoning" training, the AI model begins to make incorrect generalizations, associating the concept of a "cow" with brown leather handles, smooth side pockets, zippers, and even a brand logo. This is not just a technical assault but also a philosophical one.

Nightshade can make AI not only mistake dogs for cats but also misjudge styles. (Source: University of Chicago)
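
The toy simulation below illustrates the downstream effect of such poisoning rather than Nightshade's actual optimization: if a fraction of the training samples labeled "cow" carry handbag-like features, the "cow" concept the model learns drifts toward "handbag." The nearest-centroid "model" and the synthetic features are stand-ins chosen for brevity.

```python
# Toy illustration of feature-matched data poisoning (not Nightshade itself).
import numpy as np

rng = np.random.default_rng(0)
dim = 64
cow_center = rng.normal(size=dim)   # true "cow" features
bag_center = rng.normal(size=dim)   # true "handbag" features

def samples(center, n):
    return center + 0.1 * rng.normal(size=(n, dim))

clean_cows = samples(cow_center, 100)
# Poison samples: look like cows to a human, handbag-like in feature space,
# yet still captioned "cow" when scraped into the training set.
poison = samples(bag_center, 100)
train_cows = np.vstack([clean_cows, poison])

# The "model" here is a class centroid: the mean of all features
# seen under the label "cow".
learned_cow = train_cows.mean(axis=0)

print(np.linalg.norm(learned_cow - cow_center))                     # large drift
print(np.linalg.norm(learned_cow - (cow_center + bag_center) / 2))  # near zero
```

With half the samples poisoned, the learned "cow" lands midway between the two concepts; at lower poison rates the drift is smaller but still present.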

"Nightshade AI" is highly potent, capable of successful poisoning attacks with as few as 100 samples. Moreover, its effects can impact related concepts, meaning that changing prompt words alone cannot avoid poisoning. For instance, poisoning the prompt "magical world" also affects "Harry Potter" and "J.K. Rowling."

When "Nightshade AI" poisons multiple different concepts (such as objects, styles, etc.), the effects can accumulate even if these concepts appear together in a single prompt, indicating that multiple attacks can strengthen the poisoning effect. Eventually, after the AI model consumes too much "tainted" data, it can lead to an irreversible collapse of the entire model.

If the poisoning runs too deep, the resulting image becomes completely unrecognizable. (Source: University of Chicago)

Moreover, "Nightshade AI" is effective not just on a single model but can also transfer across models. Even if attackers and model trainers use different AI architectures or ai training data, Nightshade's poisoning attacks can still effectively transfer to the target model and resist defenses designed to prevent such attacks, demonstrating its powerful penetration and persistence.

The Impact and Reception of Nightshade

Released on January 18, 2024, Nightshade was downloaded 250,000 times in its first five days, a response that exceeded the research team's expectations. Their earlier tool, Glaze, has been downloaded 2.2 million times since its release in April 2023. The team plans to develop a tool that combines Glaze's defensive capabilities with Nightshade's offensive ones, allowing artists to protect their creations while disrupting the training of AI models.

However, there are voices of opposition arguing that these tools are a "destructive" intervention in AI development and ethically questionable. In response, Ben Zhao and his team offer an apt analogy: if someone keeps stealing your labeled lunch from the fridge, adding some hot sauce might not be too much, right? They emphasize that Nightshade aims not to destroy AI models but to raise the cost of training on unauthorized data, forcing companies to license original images and ensuring artists receive the respect and compensation they deserve.

Considering the significant time and resources that companies like Midjourney and Stability AI (the maker of Stable Diffusion) invest in training their models, the cost of a sudden collapse would be considerable. By comparison, paying artists for quality training data could create a win-win situation. Large companies are not the only ones who need to be cautious; anyone training on open-source data or models should also be wary of residual poisoning.

As AI technology rapidly evolves, artists are compelled to find new ways to protect their creations and copyrights. The emergence of Nightshade and Glaze marks a new round in the confrontation between artists and AI, a battle not just over technology but over rights and ethics. Through these tools, artists hope to preserve the uniqueness and value of their work in the AI era without being exploited. For the creative industry, this is a significant step, and it lays a new foundation for the future interaction between art and technology.