
ClothOff Case Shows Limits Of Law As Deepfake Abuse Spreads

By Jolyen

Jan 13, 2026

An online image generator called ClothOff has remained accessible despite being removed from major app stores and banned on most social networks, highlighting how difficult it has become for courts and regulators to stop tools that create non-consensual and, in some cases, illegal sexual images.

ClothOff And The Ongoing Lawsuit

ClothOff has been operating for more than two years and is still available through the web and a Telegram bot. In October, a clinic at Yale Law School filed a lawsuit seeking to shut the service down permanently and force the deletion of all images it has produced. According to Professor John Langford, one of the lawyers leading the case, the company is incorporated in the British Virgin Islands and is believed to be run by a brother and sister in Belarus, possibly as part of a wider international network.

The lawsuit centers on an anonymous high school student in New Jersey whose classmates used ClothOff to alter her Instagram photos. She was 14 when the original images were taken, which makes the AI-generated versions child sexual abuse material under U.S. law. Despite that classification, local authorities declined to prosecute, citing difficulties in obtaining evidence from suspects’ devices and in determining how widely the images were shared.

The case has moved slowly since it was filed, largely because of the challenge of serving legal notice to defendants operating across borders.

Grok And Platform Liability

The situation has drawn comparisons to the recent controversy surrounding xAI’s Grok chatbot, which was used to generate sexualized images, including images of minors. Langford said the two cases differ because ClothOff is designed and marketed specifically as a deepfake pornography tool, while Grok is a general-purpose system that can be used for many different tasks.

U.S. laws such as the Take It Down Act prohibit non-consensual deepfake pornography, but they mainly apply to individual users. Holding a platform accountable requires evidence that the company intended to facilitate harm or knowingly allowed illegal use. Without such proof, companies like xAI can argue that their tools are protected by the First Amendment.

Langford said that child sexual abuse material is not protected speech, but proving that a general-purpose AI system was designed to produce it is more difficult. He added that even evidence of recklessness, such as the loosening of safeguards, would still make for a more complex legal case.

International Regulatory Response

Regulators outside the United States have taken more direct action. Indonesia and Malaysia have moved to block access to Grok, while the United Kingdom has opened an investigation that could lead to similar restrictions. Preliminary steps have also been taken by the European Commission, France, Ireland, India, and Brazil. No U.S. regulator has issued an official response.

Langford said the core legal question remains what companies such as X and xAI knew about the use of their systems and how they responded. That issue is central to whether platforms can be held accountable when their tools are used to create and distribute illegal images.


Featured image credits: Pix4free

