AI-driven fake nude sites amass an estimated $36 million annually from approximately 18 million visits, according to a recent investigation.

In the digital age, technological advances have enabled AI-generated content, including deepfakes: hyper-realistic fake images and videos, such as non-consensual nudes. These materials are often so realistic that it is almost impossible to tell they are forgeries. The practice of creating fake nudes through image manipulation began to gain notoriety in 2019.

The ease with which this fake content can be created and distributed has spawned a multimillion-dollar black market. Clear and forceful regulation is urgently needed to prohibit the development and use of these technologies for exploitation or digital violence.

Global regulation of AI-generated deepfakes for non-consensual purposes is rapidly evolving. Both federal and state/provincial laws are emerging to criminalize misuse and set standards for transparency and authenticity verification.

In the United States, the recent federal Take It Down Act criminalizes publishing intimate deepfake images without the depicted person's consent, with enforcement beginning within a year of its May 2025 enactment. At the state level, laws such as Louisiana's HB 178 require attorneys to verify the authenticity of AI-generated evidence to prevent false or manipulated deepfakes from entering legal proceedings. Tennessee's ELVIS Act protects musicians' voices from AI manipulation, and New York requires written consent and compensation for AI-created likenesses.

California has enacted an AI transparency law mandating disclosure when AI generates or modifies content, with penalties for non-compliance, and similar bills are under consideration in Washington and Virginia.

Internationally, there is a push for ethical AI deployment and cross-sector collaboration, with calls for treaties and forward-looking legislation to prevent harmful uses such as deepfake-driven greenwashing and misinformation. The United States and several countries have signed the Council of Europe’s Framework Convention on AI aimed at fostering responsible AI use and regulation.

Researchers and policymakers are also focusing on technological countermeasures, such as AI tools to detect deepfakes, and on boosting digital literacy to help the public identify synthetic media. The protection of people's dignity and rights cannot lag behind technological progress. As advances in deep learning and generative techniques produce results indistinguishable from real photographs, it is crucial that regulation keep pace to ensure the safety and well-being of individuals in the digital world.

  1. The multimillion-dollar black market for non-consensual AI-generated deepfakes highlights the urgent need for news coverage of the evolving regulation of AI-generated content, particularly in crime-and-justice matters.
  2. As deepfakes continue to be used to exploit individuals through digital violence, nations are rapidly enacting laws to criminalize their misuse, such as the Take It Down Act in the United States and Louisiana's HB 178, underscoring a growing global commitment to ethical AI deployment.
