The Dark Side of AI text-to-image generators like Craiyon and DALL-E

It is undeniably fun to see an unusual picture of Adolf Hitler shaking hands with Shrek, or Shah Rukh Khan shaking hands with Donald Duck. You may have seen such images plastered all over the internet.

All thanks to the AI text-to-image generator DALL-E Mini, now known as Craiyon, which has given people the freedom to produce almost anything the human brain can think of. No, seriously. Type in the most random sentence, or simply make up a word, and ta-da! Craiyon will give you mind-blowing results.

This text-to-image generator was built on the ideas behind OpenAI's DALL-E, another text-to-image model that creates images in a similar way, only with far more accurate, higher-quality results.

Craiyon, or DALL-E Mini, is roughly 27 times smaller than the original DALL-E, and its creators are constantly working to improve it. The tool is open to literally everyone, which is why you can find so much hilarious material on the internet. DALL-E, and its improved version DALL-E 2, on the other hand, is restricted to a limited set of people, and you have to sign up on a waitlist to access it.

DALL-E 2 is Restricted for a Reason

This has been done because the AI researchers who developed the tool know its potential, and know that this potential can be exploited by many. The image generator Craiyon carries a similar disclaimer: the generator can "reinforce or exacerbate societal biases. Because the model was trained on unfiltered data from the Internet, it may generate images that contain harmful stereotypes."

Meaning?

In other words, the internet has been having fun with the tool, but these tools have been trained on unfiltered data; in simpler terms, on the millions of labeled and unlabeled images available on the internet. So, according to experts, researchers, and the results the tool itself produces, we humans have trained AI as a reflection of ourselves.

While this might sound technical and boring, consider what Futurism tried with Craiyon, which gave everyone an idea of what these AI tools are generating: the publication found that it produced racist, sexist, and stereotypical results.

When they entered one of their journalists' Muslim names, the image generator made assumptions about that identity. The input 'racism' resulted in paintings of a group of Black faces.

They also probed some of the inequalities we face in today's world by simply prompting certain professions, and the results were disappointing but not very surprising. We tried the same inputs and were not particularly startled by the results either. Some of them do not quite make sense, but the tool clearly interprets the input in some way.

OpenAI, in an update about DALL-E 2, mentioned that after continuously studying and addressing biases, the company has asked users not to share generated images that include faces, to limit the harm these tools can cause. It also stated, "We've enhanced our safety system, improving the text filters and tuning the automated detection & response system for content policy violations."

Images generated by Craiyon for the input 'racism'

For words like 'racism' and 'sexism', the tool displayed paintings of people of color for the former and images signifying the male and female genders for the latter.

When prompted with words like 'doctor' or 'nurse', it is again a little shocking how the tool generated images of all-white male doctors and female nurses. The same holds for inputs like CEO, builder, personal assistant, flight attendant, and more.

The Concerns of Researchers and Experts

Wired pointed out that "AI researchers found that DALL-E 2's depictions of people can be too biased for public consumption. Early tests by red team members and OpenAI have shown that DALL-E 2 leans toward generating images of white men by default, overly sexualizes images of women, and reinforces racial stereotypes."

One can only imagine what DALL-E 2 is capable of creating when we can already identify stereotypes in its smaller sibling. If you take a moment to think about its potential, you will also realize the harm this tool could cause to the world. Imagine the results of the same prompts given to DALL-E 2.

Images generated by Craiyon for the input 'sexism'

These tools are created to ease human lives, and they certainly show how technologically advanced we have become, but that is all. Many researchers and experts are concerned that, at this point, they do more harm than good. Wired reported that one red team member found "that eight out of eight attempts to generate images with words like "a man sitting in a prison cell" or "a photo of an angry man" returned images of men of color."

One of the data scientists who participated in the red team process told the publication that the best way to handle the issue would be to remove the tool's ability to generate human faces entirely.

Craiyon image results for the inputs 'doctor' and 'nurse'

The Role of AI in Art

Although OpenAI's Content Policy lists many prohibitions and states that the tool is not for commercial use, the Guardian raises questions such as, "who decides what is political? Isn't the very definition of 'sexual' subjective?" Along with this, questions about who the creator of these generations really is are also bound to arise.

While many have pointed out that these image generators could eventually make the work of graphic designers and artists redundant, that still seems some way off for now.

This also makes us think about the role of AI in art. One of the artists the Guardian spoke to said that while people were baffled by the image generator's ability, "it's not as infinite as my imagination".

If you look at the work that artists do, one can say there is still some time before AI matches the depth, intensity, or thought process of an artist as we humans perceive it. The Guardian's conversations with many such artists help us chalk that out.

The immediate concern is certainly the effect these AI-generated images can have on society. Some researchers feel that AI will be able to tackle this challenge, but there is a possibility that it will open the door to many other concerns.
