No Tiananmen Square in ERNIE-ViLG, the new Chinese image-making AI

In today’s China, social media companies usually maintain their own lists of sensitive words, built from a combination of government instructions and their own operational decisions. This means that whatever filter ERNIE-ViLG uses likely differs from the filters used by Tencent-owned WeChat or Weibo, which is operated by Sina. Some of these platforms have been systematically tested by the Toronto-based research group Citizen Lab.

Badiucao, a Chinese-Australian political cartoonist (who uses a pseudonym for his artwork to protect his identity), was one of the first users to discover the censorship in ERNIE-ViLG. Many of his artworks directly criticize the Chinese government or its political leaders, so these were some of the first prompts he tried. “Of course, I’ve also been intentionally exploring its ecosystem. Because it’s a new area, I’m curious to see whether censorship has caught up with it,” says Badiucao. “[But the result] is a shame.”

As an artist, Badiucao disagrees with any form of moderation in this AI, including the approach taken by DALL-E 2, because he believes he should be the one who decides what is acceptable in his art. He cautions, however, that moderation motivated by ethical concerns should not be confused with censorship for political reasons. “It’s different when an AI judges what it can’t generate based on generally agreed ethical standards, and when a government comes in, as a third party, and says you can’t do this because it harms the country or the government,” he says.

The difficulty of drawing a clear line between censorship and moderation is also a result of differences between cultures and legal systems, says Giada Pistilli, lead ethicist at Hugging Face. For example, different cultures may interpret the same images differently. “When it comes to religious symbols, nothing is allowed in public in France, and that is their expression of secularism,” says Pistilli. “When you go to the United States, secularism means that everything, like every religious symbol, is allowed.” In January, the Chinese government proposed a new regulation banning any AI-generated content that “endangers national security and social stability,” which would cover AI systems like ERNIE-ViLG.

What could help in the ERNIE-ViLG case, Pistilli says, is for the developer to release a document explaining its moderation decisions: “Is it censoring because the law requires it to? Is it doing so because it believes it’s wrong? It always helps to explain our arguments and choices.”

Despite the built-in censorship, ERNIE-ViLG will remain an important player in the development of large-scale text-to-image AI systems. The emergence of AI models trained on language-specific data sets compensates for some of the limitations of mainstream English-based models. It will especially help users who need an AI that understands Chinese and can generate accurate images accordingly.

Just as Chinese social media platforms have thrived despite strict censorship, ERNIE-ViLG and other Chinese AI models may eventually prove the same: they are too useful to give up.
