The Algorithm: AI-generated art raises tricky questions about ethics, copyright, security

Thanks to his distinctive style, Rutkowski is now one of the most commonly used prompts in the new open-source AI art generator Stable Diffusion, which launched late last month. He is a far more popular prompt than some of the world’s most famous artists, such as Picasso: his name has been used around 93,000 times.

But he’s not happy about it. He believes it could threaten his livelihood, and he was never given the choice of whether to opt in to having his work used this way.

The story is another example of AI developers rushing to roll out something impressive without thinking about the humans who will be affected by it.

Stable Diffusion is free for anyone to use, which makes it a great resource for AI developers who want to build products on a powerful model. But because these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists, they raise difficult questions about ethics, copyright, and security.

Artists like Rutkowski have had enough. It’s still early days, but a growing coalition of artists is figuring out how to tackle the problem. In the future, we may see the art sector shift toward pay-per-play or subscription models like the ones used in the film and music industries. If you’re curious and want to know more, read my story.

And it’s not just artists: we should all be concerned about what goes into the training datasets for AI models, especially as these technologies become an increasingly important part of the internet’s infrastructure.

In a paper released last year, the AI researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe analyzed a smaller dataset similar to the one used to build Stable Diffusion. Their findings are depressing. Because the data is scraped from the internet, and the internet is a horrible place, the dataset is filled with explicit images of rape, pornographic material, malicious stereotypes, and racist and ethnic slurs.

A website called Have I Been Trained lets people search the images used to train the latest batch of popular AI art models. Even innocent search terms return plenty of disturbing results. I tried searching the database for my ethnicity, and all I got back was porn. Lots of porn. It’s a depressing thought that the only thing the AI seems to associate with the word “Asian” is naked East Asian women.
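Under the hood, searches like this are typically CLIP-based nearest-neighbor lookups over an index of the dataset. As a minimal sketch (not how Have I Been Trained itself is implemented), here is how a similar query could be run against LAION’s public index with the open-source clip-retrieval client; the endpoint URL and index name are assumptions based on LAION’s public demo and may have changed.

from clip_retrieval.clip_client import ClipClient, Modality

# Hypothetical query against LAION's public index; the endpoint URL and
# index name below are assumptions and may no longer be live.
client = ClipClient(
    url="https://knn.laion.ai/knn-service",
    indice_name="laion5B-L-14",
    modality=Modality.IMAGE,
    num_images=20,
)

# Each result is a dict with fields such as "caption", "url", and "similarity".
results = client.query(text="greg rutkowski")
for r in results[:5]:
    print(r.get("caption"), r.get("url"))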

Not everyone sees this as a problem for the AI sector to fix. Emad Mostaque, founder of Stability.AI, which built Stable Diffusion, said on Twitter that he thought the ethical debates around these models are “paternalistic silliness that doesn’t trust people or society.”

But there is a big safety question too. Free, open-source models like Stable Diffusion and the large language model BLOOM give malicious actors the tools to create harmful content at scale with minimal resources, says Abhishek Gupta, founder of the Montreal AI Ethics Institute and a responsible-AI expert at Boston Consulting Group.

Gupta says the sheer scale of havoc these systems enable will limit the effectiveness of traditional controls, such as capping the number of images people can generate and restricting dodgy content from being generated. Think deepfakes or disinformation on steroids. “When a powerful AI system goes into the wild, it can cause real harm…for example, by creating objectionable content in [someone’s] likeness.”

We can’t put the cat back in the bag, so we really have to think about how to handle these AI models in the wild, says Gupta. This includes monitoring how AI systems are used after they are launched, and thinking about controls that “can minimize damage even in the worst-case scenario.”

Deeper Learning

There is no Tiananmen Square in the new Chinese image-making AI
