AI software that digitally strips women's clothing from images poses a significant and growing threat. Deepfakes and nudification apps have become a pressing concern in the digital age; without decisive action now, these technologies risk becoming an entrenched part of everyday life.
There is a growing focus on educating school-age children about the harm caused by AI nudification apps, which generate naked or sexualized images of people from everyday, fully clothed photos. Overwhelmingly, these deepfakes are used as a form of abuse to bully, coerce, or extort victims, and non-consensual sexualized deepfakes predominantly feature women and girls.
The Internet Watch Foundation reported a 400% rise in the number of URLs hosting AI-generated child sexual abuse material in the first six months of 2025, a statistic that underscores the urgency of addressing this issue.
Technological solutions, such as watermarking, filtering, and collaborative community moderation, could help detect and remove harmful deepfakes. However, the makers of these apps frequently evade the measures service providers put in place to limit their reach.
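To illustrate the filtering approach, platforms commonly compare uploads against hashes of known abusive images. The sketch below uses a toy average hash over small grayscale pixel grids; it is a minimal illustration under assumed inputs, not a production system (real deployments use robust perceptual hashes such as PDQ or PhotoDNA over full decoded images, and all function names here are hypothetical).

```python
# Toy hash-based image filtering: flag uploads whose hash is close to
# the hash of a known abusive image. Inputs are assumed to be small
# grayscale pixel grids (lists of rows of 0-255 values).

def average_hash(pixels):
    """Return a bit list: 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_blocked(candidate, blocklist, threshold=1):
    """Block an image whose hash is within `threshold` bits of any
    hash in the blocklist of known abusive images."""
    h = average_hash(candidate)
    return any(hamming(h, known) <= threshold for known in blocklist)
```

Because the comparison tolerates a few differing bits, lightly edited copies of a known image still match, which is precisely why evasion requires more substantial transformations.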
Current efforts to ban or control nudification apps and combat non-consensual deepfakes include a combination of legislative actions, online safety regulations, and legal enforcement by both governments and private companies.
In the United States, some states, including California and Minnesota, have passed laws making it illegal to distribute sexually explicit deepfakes. Federal bills such as the Deepfakes Accountability Act aim to expand protections against deepfake abuse, including provisions related to national security and personal identity rights.
The UK government is considering penalties for making, not just distributing, non-consensual deepfakes and a ban on nudification apps. China's generative AI measures include requirements for built-in safeguards to detect and block illegal use, and for AI content to be watermarked for traceability.
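The traceability watermarking that China's rules require can, in principle, be as simple as embedding an identifier invisibly in pixel data. The least-significant-bit sketch below is only a minimal illustration of the idea, under the assumption of raw 8-bit pixel values; real regulatory-grade watermarks must survive compression and editing, and the function names are hypothetical.

```python
# Minimal least-significant-bit (LSB) watermark: hide identifier bits
# in the lowest bit of each pixel, where the change is imperceptible.

def embed_watermark(pixels, bits):
    """Overwrite the LSB of each pixel with the corresponding ID bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n_bits):
    """Read back the first n_bits identifier bits from pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]
```

Changing only the lowest bit shifts each pixel value by at most one level, so the watermark is invisible to viewers but recoverable by a platform checking provenance; its fragility under re-encoding is exactly why production schemes are far more elaborate.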
Dedicated online safety regulators, such as Australia's eSafety Commissioner, represent a novel approach, though they need more proactive enforcement powers. Tech companies like Meta (Facebook) have taken legal action against the developers behind nudification tools, blocked URLs promoting these apps, and restricted related search terms on their platforms to limit accessibility and dissemination. Payment processors such as Visa and Mastercard have come under scrutiny for facilitating transactions on deepfake abuse websites.
Collaboration among governments, tech companies, and individuals remains crucial to effectively curbing non-consensual deepfake abuse and nudification technologies.
- Individuals should avoid sharing intimate images publicly, use privacy settings on social media, and report non-consensual content to platforms and authorities.
- Businesses should implement proactive content moderation and AI tools to detect deepfake and nudify content, cooperate with law enforcement, educate users about privacy and consent, and support victims via clear reporting channels.
- Governments should enact clear laws criminalizing non-consensual deepfake creation and distribution, establish or empower online safety regulators, fund research on detection technologies, and promote international cooperation on enforcement.
The post-truth age and "the end of privacy" are not guaranteed outcomes, and the future will depend on the decisions we make now and the actions we take to uphold them. The legal landscape is evolving, but the challenges remain significant. The fight against deepfakes and nudification apps is a global challenge that requires a collective effort from all stakeholders.
- The rise of AI-based nudification and deepfake apps calls for stronger technical countermeasures, such as watermarking, filtering, and community moderation, because these apps often evade the measures service providers put in place.
- As non-consensual sexualized deepfakes, which overwhelmingly target women and girls, are increasingly used as a tool of abuse, governments must take decisive action: passing laws against their creation and distribution, enforcing online safety regulations, and supporting victims via clear reporting channels.