Dark Side of AI Art: Deepfakes, Child Pornography and Sextortion

Artificial intelligence is quickly gaining popularity because of its usefulness and its remarkable capabilities. In almost every field, AI has begun to influence how people work and is proving to be a blessing: it saves time and delivers excellent results at little to no cost, which is why it has won such popularity across the globe. In art, AI has been helping many users create professional-looking work in just a few clicks.

In the twenty-first century, artificial intelligence is a highly disruptive innovation that has attracted considerable attention from practitioners and academics alike. It offers extensive and unprecedented opportunities for fundamental change across many industries, making possible technologies such as autonomous vehicles, facial-recognition payments and guidance robots.

Extensive attention is paid to the positive aspects of AI, while its dark sides receive relatively little. AI is reportedly also being used for malicious purposes, and that misuse poses a genuine threat. Many people have already begun using AI for harmful ends, which has led some to fear what the technology's future holds. Elon Musk has voiced similar concerns, calling AI the biggest existential threat to humanity and warning that, if not used properly, it could lead to destruction.

AI art is one of the most disturbing aspects. The capabilities of AI art generators have grown enormously over the past couple of years. Using complex algorithms trained on images scraped from the internet, these systems produce artistic composites, some sublime, others grotesque. Today's AI art generators have incredible potential, but that same capacity can easily be abused.

Many people are worried that AI image generators can, and will, be used to create child pornography; the technology has few boundaries and can enable the worst forms of human depravity. AI is getting better at generating porn by the day, and many deepfake cases have been reported recently. Most deepfake sites have veered away from hosting this type of content, understandably, given the dangerous attention it could attract. Yet anyone with access to images of a victim’s face can create realistic-looking explicit content with an AI-generated body. Abuse experts say incidents of harassment and extortion are likely to rise as bad actors use AI models to humiliate targets ranging from celebrities to ex-girlfriends and even children. One such example is the exploitation of AI for pornography and sextortion. Sextortion is defined as the use of sexual images, videos, or information to extort money or other items from the victim, and AI-generated images are commonly used to extort victims for real photos.

What is Sextortion?

Generally, ‘sextorters’ know how to instill fear in their victims. They might show the victim compromising images or a screenshot of an explicit conversation, then comb through the victim’s social media accounts to identify their loved ones. In other words, they let the victim know they can destroy their reputation at any moment, even though that is often a lie. AI art helps those who practise sextortion in many ways: it lets them fabricate chats and generate false information about a person while making it look genuine. Many people believe these images are real, which leaves the victim in an even worse position. This has led to suicides around the world, and reported cases of sextortion continue to rise day by day.

What is a Deepfake?

The term “deepfake” refers to an AI-based technique for synthesizing media, such as superimposing one person’s face onto another’s body or manipulating audio to mimic a real human voice. Deepfake technology used to paste a celebrity’s head onto a porn performer’s body has been around for a while, but recent advances have made the results much harder to detect. The streaming community was recently rocked by a headline tied to this kind of misuse of generative AI: popular Twitch streamer Atrioc issued a teary-eyed apology video after being caught viewing pornography featuring the superimposed faces of other women streamers.

The generative AI app Lensa also drew criticism for allowing its system to create fully nude and hyper-sexualised images from users’ headshots. Controversially, it also whitened the skin of women of colour and made their features appear more European.

The technology to create deepfakes, AI-powered or otherwise, has existed for some time. A 2020 report from the deepfake detection company Sensity found that hundreds of explicit deepfake videos featuring female celebrities were being uploaded to the world’s biggest pornography websites almost every month; the report estimated the total number of deepfakes online at around 49,000, over 95% of which were porn. Actresses including Emma Watson, Natalie Portman, Billie Eilish and Taylor Swift have been targets of deepfakes since AI-powered face-swapping tools emerged several years ago, and some, including Kristen Bell, have spoken out against what they view as sexual exploitation.

Increase in cases of Child Pornography

Paedophiles are using artificial intelligence (AI) to create indecent images of child abuse. Abusers are reportedly using AI text-to-image generators and deepfake technology to add the faces of real children onto computer-generated bodies. The amount of child abuse imagery found online is alarming: every year the industry detects and reports an increasing number of illegal images, and AI-generated pornography featuring the faces of non-consenting individuals is becoming increasingly common online.

Britain’s National Crime Agency has conducted a review of how AI technology can contribute to sexual exploitation after the recent arrest of a paedophile computer programmer in Spain shocked the continent. The man was found to have been using an AI image generator to create new child sexual abuse material (CSAM) based on abusive images of children he already possessed.

The normalization of AI-created pornography or child sexual abuse content serves no beneficial purpose in society and can, in fact, influence cultural mores in profoundly harmful ways. The technological capability to manufacture AI-generated CSAM has emboldened paedophile sympathisers to advocate for paedophilia’s acceptance alongside legitimate sexual orientations. Pornography of any kind is inherently exploitative: the industry thrives on dubious consent and, often, the knowing exploitation of trafficking victims and minors. Using AI to create images or videos that constitute pornography or child sexual abuse material perpetuates a chain of abuse even when the newly generated content differs from abuse that physically occurred.

AI-generated porn or CSAM does not escape the fundamental violations of human dignity caused by creating exploitative sexual content. Modern nations need laws that appropriately address modern concerns; while advances in AI technology can certainly benefit society in some ways, its capacity to produce exploitative content and let the rot of paedophilia continue to fester must be addressed.
