AI deepfake nude services skyrocket in popularity: Report

Graphika reports a surge in “AI undressing,” using AI to remove clothing from images, leading to concerns about fake explicit content, harassment and child abuse.

Social media analytics company Graphika has stated that the use of “AI undressing” is increasing.

The practice involves generative artificial intelligence (AI) tools fine-tuned to remove clothing from images supplied by users.

According to its report, Graphika measured the number of comments and posts on Reddit and X (formerly Twitter) containing referral links to 34 websites and 52 Telegram channels providing synthetic non-consensual intimate images (NCII) services, with 1,280 in 2022 compared with over 32,100 so far in 2023 — a 2,408% increase in volume year-on-year.

Synthetic NCII often involves the generation of explicit content without the consent of the individuals depicted.

Graphika states that these AI tools make generating realistic explicit content at scale easier and cost-effective for many providers.

Without these providers, customers would have to host and manage their own custom image diffusion models, which is time-consuming and potentially expensive.

Graphika warns that the increasing use of AI undressing tools could lead to the creation of fake explicit content and contribute to issues such as targeted harassment, sextortion and the production of child sexual abuse material (CSAM).

While undressing AIs typically focus on pictures, AI has also been used to create video deepfakes using the likeness of celebrities, including YouTube personality Mr. Beast and Hollywood actor Tom Hanks.

Related: Microsoft faces UK antitrust probe over OpenAI deal structure

In a separate report in October, United Kingdom-based internet watchdog the Internet Watch Foundation (IWF) noted that it found 20,254 images of child abuse on a single dark web forum in just one month. The IWF warned that AI-generated child pornography could overwhelm the internet.

Due to advancements in generative AI imaging, the IWF cautions that distinguishing between deepfake pornography and authentic images has become more challenging.

In a June 12 report, the United Nations called AI-generated media a “serious and urgent” threat to information integrity, particularly on social media. The European Parliament and Council negotiators agreed on rules governing the use of AI in the European Union on Friday, Dec. 8.

Magazine: Real AI use cases in crypto: Crypto-based AI markets and AI financial analysis