DMR News

Advancing Digital Conversations

Microsoft Expands Tools to Scrub Deepfake and Revenge Porn From Bing

By Hilary Ong

Sep 8, 2024


Microsoft has expanded its efforts to combat the spread of non-consensual intimate images, including deepfakes, by partnering with StopNCII to prevent these images from appearing in Bing search results.

How StopNCII’s Digital Fingerprinting Works

StopNCII provides victims with a tool that creates a digital fingerprint, known as a “hash,” of intimate images or videos stored on their devices without requiring them to upload the files. These hashes are then shared with industry partners who can use them to identify and remove the corresponding images from their platforms.
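The privacy model described above can be sketched in code. StopNCII's production system reportedly uses perceptual image hashing so that visually similar copies of an image still match; the minimal Python sketch below uses an ordinary cryptographic hash (SHA-256) purely to illustrate the core idea that only a short fingerprint, never the image itself, leaves the victim's device. The function names are illustrative, not StopNCII's actual API.

```python
import hashlib


def fingerprint_file(path: str) -> str:
    """Compute a digital fingerprint ("hash") of a local file.

    The file's bytes are read and hashed on the device; only this
    short hexadecimal digest would be shared with partner platforms.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large videos don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def matches_known_hashes(path: str, known_hashes: set[str]) -> bool:
    """A platform checks a candidate file against the shared hash
    database without ever seeing the original image."""
    return fingerprint_file(path) in known_hashes
```

A partner platform would run the same hash over uploaded or indexed content and block anything whose digest appears in the shared database; with a perceptual hash, near-duplicates (resized or re-encoded copies) would also match.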

Microsoft’s Bing now joins several other platforms—such as Facebook, Instagram, Threads, TikTok, Snapchat, Reddit, Pornhub, and OnlyFans—in utilizing StopNCII’s database to help stop the spread of revenge porn.

Microsoft has already piloted this initiative with StopNCII, removing 268,000 explicit images from Bing’s search results by the end of August. The company previously offered a direct reporting tool but found it insufficient to address the scale of the problem. In a blog post, Microsoft acknowledged that victim reports alone have not been effective in preventing non-consensual images from circulating via search engines.

Google’s Absence Raises Concerns

Despite Microsoft’s significant steps, Google remains notably absent from StopNCII’s coalition, even though it offers its own tools for reporting and removing explicit content. Google’s lack of partnership has drawn criticism from both victims and former employees. A Wired investigation revealed that since 2020, Google users in South Korea have reported 170,000 links for unwanted sexual content on Google Search and YouTube. The company’s decision not to join StopNCII forces victims to navigate a fragmented system, filing removal requests separately with each platform.

While tools like StopNCII help adults over 18 remove their non-consensual images, the rise of AI-generated deepfake nude images has become a widespread issue that also affects minors. “Undressing” sites, which use AI to generate fake nude images, have become a particular problem for high school students across the U.S. Currently, the U.S. lacks federal legislation specifically targeting AI deepfake pornography; instead, a patchwork of state and local laws aims to address the problem.

In August, prosecutors in San Francisco announced a lawsuit to take down 16 sites involved in creating and distributing non-consensual deepfake images. According to a Wired tracker, 23 U.S. states have passed laws to address non-consensual deepfakes, while nine states have rejected proposed legislation.


Featured Image courtesy of Jakub Porzycki/NurPhoto via Getty Images


