Protecting Art in the Age of Artificial Intelligence: Meet Nightshade
New tools are emerging to protect artists and their art in the age of artificial intelligence. Get ready to meet Nightshade!
Aiming to protect artists' copyrights, researchers have created an "invisible shield" for digital art: Nightshade. Simply put, it makes small changes to images that we, mere mortals, don't notice, but that confuse artificial intelligence models. For example, a dog in the training data can come out as a cat when the AI generates images.

The idea behind Nightshade is clear: to empower artists to fight back against AI companies, such as OpenAI and Google, that use art without proper consent. This misuse of art posted online has put these companies in the crosshairs of several lawsuits filed by artists who claim their creations were taken without permission.

Ben Zhao, the University of Chicago professor who led the team that created Nightshade, intends the tool to be a wake-up call for big companies, reminding them to respect artists' work and intellectual property. MIT Technology Review tried to contact the tech giants, but none would comment on the matter.

The same team that developed Nightshade also created Glaze, a tool that lets artists "camouflage" their unique style so it can't be copied by these AI companies. And the coolest part: they plan to combine the two tools.

The good news is that Nightshade will be open source, which means other developers will be able to improve it. The more people use and adapt the tool, the more powerful it becomes at protecting art and artists.

Generative AI doesn't live on images alone: Riffusion, for example, creates music from text prompts.
Bypassing AI in favor of art

Nightshade works in a clever way, exploiting a vulnerability in artificial intelligence models: they learn from tons of data, much of it images scraped from the internet, and Nightshade subtly "tweaks" those images before they are collected for training.
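To make the idea concrete, here is a minimal sketch of the constraint Nightshade operates under: change an image by only a few intensity levels per pixel, too little for a person to see. This is not Nightshade's actual algorithm, which computes its perturbation by optimizing against a text-to-image model; the random noise and the EPSILON budget below are stand-ins chosen purely to illustrate the "invisible change" constraint, and the file names are hypothetical.

```python
# Minimal sketch of an "imperceptible perturbation": nudge every pixel
# by at most EPSILON intensity levels. Nightshade itself optimizes its
# perturbation against a text-to-image model; random noise here only
# illustrates the invisibility constraint, not the real attack.
import numpy as np
from PIL import Image

EPSILON = 4  # assumed per-pixel budget on a 0-255 scale

def perturb(path_in: str, path_out: str) -> None:
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-EPSILON, EPSILON + 1, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

perturb("artwork.png", "artwork_poisoned.png")  # hypothetical file names
```

To a viewer the two files look identical; the point is that a model scraping millions of such images ingests statistics a human curator would never spot.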

If an artist doesn't want their art copied by artificial intelligence companies, they can first upload it to Glaze and "disguise" it with an art style different from their own, and then run it through Nightshade as well.

So, when artificial intelligence developers scour the internet for more data, they end up collecting these "poisoned" images, sowing confusion in the AI models.

When an artificial intelligence model is trained on these corrupted images, things get a little bizarre. The researchers tested Nightshade on a few models, and the results were striking.

After being fed just 50 altered images of dogs, a model began to generate strange creatures with too many limbs and distorted faces, and when the researchers increased the dose of "poisoned" images, the dogs even turned into cats!

The most incredible (and slightly scary) part is that these artificial intelligence models are great at making connections, so when Nightshade "attacks" the word "dog", it doesn't affect just that word but related concepts like "puppy", "husky" and even "wolf". Quite a mess for a system that is supposed to be learning faithfully from the artwork it references.

And there's more: if the artificial intelligence ingests a poisoned image related to "fantasy art", other concepts like "dragon" and "a castle from The Lord of the Rings" can also be distorted.
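This bleed-through happens because text-to-image models place related words close together in a shared embedding space, so corrupting the data tied to one concept drags its neighbors along. The toy vectors below are invented purely to illustrate that geometry; real systems use high-dimensional learned embeddings (CLIP-style text encoders), not three hand-picked numbers.

```python
# Toy illustration of concept bleed-through: words near "dog" in
# embedding space inherit the poisoning, while distant words do not.
# The 3-D vectors are made up for demonstration only.
import numpy as np

embeddings = {
    "dog":    np.array([0.90, 0.10, 0.05]),
    "puppy":  np.array([0.85, 0.20, 0.05]),
    "wolf":   np.array([0.70, 0.10, 0.30]),
    "castle": np.array([0.05, 0.90, 0.40]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("puppy", "wolf", "castle"):
    print(f"dog vs {word}: {cosine(embeddings['dog'], embeddings[word]):.2f}")
# High similarity ("puppy", "wolf") means the poison spreads there too;
# "castle" sits far away and is largely unaffected.
```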

The risks

Of course, there are also risks of people using this “poisoning” technique for malicious purposes, and experts are warning that it’s time to create defenses against these types of attacks.

Gautam Kamath, a professor at the University of Waterloo who was not involved in the study, praised the research and raised a crucial issue: as these AI models become more powerful and people trust them more, the vulnerabilities become even more serious.

Power to protect art and artists

The courts are also weighing in on the side of human creators: a US federal judge has ruled that AI-generated art cannot be copyrighted.

Nightshade is changing the way artists view the Internet and their creations.

Junfeng Yang, a computer science professor at Columbia University who has studied the security of deep learning systems and was not involved in the work, believes that as Nightshade gains traction, AI companies may begin to rethink their methods and perhaps begin to respect artists’ rights more. We may be one step closer to protecting art and the rights of authors.

Leading companies like Stability AI and OpenAI are already giving artists the option to opt out of having their images used in AI training models.

While this may seem like a step forward, artists say it isn't quite that simple. Eva Toorenent, an illustrator and artist who has used Glaze, believes these opt-out policies force artists to jump through hoops to protect their art while leaving companies with the lion's share of the power. Still, she hopes Nightshade will make the tech giants think twice about taking work without permission.

On the other hand, some artists are already feeling the positive effects of these tools. Autumn Beverly, for example, is back online, posting her work with confidence thanks to Nightshade and Glaze. After having her art used without consent, she now feels she has more control over where and how it appears, and she credits the tools with giving artists back power over their own work.

These artist-protection tools will likely keep improving over time. Regulating art, among many other issues, is a challenge in a landscape evolving as quickly as today's artificial intelligence. But the efforts to make art and technology coexist, to the benefit of everyone involved, cannot be ignored.