Google has officially expanded its advertising restrictions to include a ban on promoting services and websites that offer tools for creating deepfake pornography.
Effective May 30, the policy update specifically targets advertisements for apps, websites, and services that generate, or provide instructions for generating, synthetic sexually explicit content or nudity. The move intensifies the tech giant’s efforts to combat the increasingly troubling use of artificial intelligence to produce non-consensual nude imagery.
Google’s New Policy on Deepfake Content
The new policy explicitly forbids advertising any content that promotes the creation of deepfake pornography, whether through direct service offers or instructional material on how to generate it. Advertisers found in violation face immediate account suspension and a permanent ban from advertising with Google, signaling a strict “no second chances” approach to enforcement.
This policy revision aims to address a significant gap in Google’s previous advertising rules, which, while banning explicit ads, did not specifically target ads for deepfake creation tools. The updated Inappropriate Content Policy now clearly prohibits promoting any synthetic content that has been altered or generated to be sexually explicit.
The urgency of this crackdown is underscored by recent incidents, such as the widespread sharing of AI-generated explicit images of celebrities without their consent, which sparked public and legislative backlash.
In response to such events, the U.S. Congress has seen proposals like the DEFIANCE Act, which seeks to provide victims of non-consensual digital forgeries with legal recourse against the creators and distributors of such content.
How Is the Tech Industry Responding?
Apple has also taken steps to purge its app store of applications that covertly offer deepfake porn creation capabilities, and even platforms like Pornhub have had policies against deepfakes since 2018. Google’s new policy reflects a broader industry movement to mitigate the risks associated with generative AI technologies, especially those used maliciously.
While Google’s policy update marks a significant effort to curb the misuse of AI in creating harmful content, effectively enforcing these rules remains a challenge. Bad actors often exploit decentralized channels to evade detection, creating a complex environment for tech companies trying to police content. Google will begin enforcing the rule on May 30, giving advertisers a grace period until then to comply with the new regulations.
Furthermore, Google has already begun to restrict the advertising of services that generate sexually explicit deepfakes in its Shopping ads. This move aligns with the broader policy update, extending the ban to include any promotional material that facilitates the creation, distribution, or storage of synthetic sexually explicit content.
Featured Image courtesy of DENIS CHARLET/AFP via Getty Images