DMR News

Advancing Digital Conversations

OpenAI Dismantles Iranian Operation for Misusing ChatGPT to Generate Fake News

By Hilary Ong

Aug 20, 2024


OpenAI announced on Friday, August 16, that it had identified and dismantled an Iranian influence operation, named Storm-2035, which was using ChatGPT to generate fake news articles and social media posts.

The operation aimed to influence public opinion on various sensitive topics, particularly targeting the U.S. presidential campaign, LGBTQ+ rights, and the ongoing conflict in Gaza. According to OpenAI, the group operated at least five websites, in both English and Spanish, that posed as legitimate news outlets while spreading polarizing messages.

Storm-2035’s Attempts to Manipulate Political Discourse

Storm-2035, which Microsoft has identified as being connected to the Iranian government, attempted to influence both conservative and progressive audiences by generating content tailored to opposing viewpoints.

For example, Bloomberg reported that one piece of content suggested that former President Donald Trump was being censored on social media and was prepared to declare himself “king of the U.S.” Another article framed Vice President Kamala Harris’ selection of Tim Walz as her running mate as a “calculated choice for unity.”

In addition to political content, the operation produced material on a wide range of other topics, including Israel’s presence at the Olympics, Venezuelan politics, the rights of Latinx communities in the U.S., and Scottish independence. The group also generated fashion and beauty content, which OpenAI suspects was an attempt to appear more authentic or to build a broader following.

Limited Impact Despite Extensive Efforts

Despite these efforts, OpenAI reported that the influence operation did not achieve significant audience engagement. The majority of social media posts created by Storm-2035 received little to no interaction, such as likes, shares, or comments. Moreover, OpenAI found no substantial evidence that the content generated by this operation was widely shared or picked up by real users across social media platforms.

In a detailed assessment, OpenAI noted that the operation only reached a Category 2 rating on the Brookings Institution’s Breakout Scale, which measures the threat level of influence operations. This category suggests that while the operation showed activity across multiple platforms, there was no significant evidence that real people widely shared or engaged with the content.

Connection to Other Iranian Influence Activities

OpenAI’s findings also align with other recent disclosures, including the revelation that Iranian hackers have been targeting both Kamala Harris’ and Donald Trump’s campaigns. In a separate incident reported earlier this week, the FBI noted that informal Trump adviser Roger Stone fell victim to a phishing attack by Iranian hackers, who then took control of his account and attempted to spread phishing links to others. However, the FBI found no evidence that anyone in the Harris campaign was compromised.

OpenAI emphasized its commitment to preventing the misuse of its AI tools, stating that it takes any efforts to use its services in foreign influence operations very seriously, even in cases where the operation fails to gain meaningful traction.


Featured Image courtesy of STEFANI REYNOLDS/AFP via Getty Images


