
California’s attorney general has opened an investigation into the spread of sexualised AI deepfakes generated by Elon Musk’s AI model Grok, as scrutiny intensifies in the United States and the United Kingdom over the legal responsibility of AI platforms.
Investigation Announced By California Attorney General
California Attorney General Rob Bonta said the state is examining reports of non-consensual, sexually explicit material generated and shared using Grok, the AI model developed by xAI.
In a statement announcing the inquiry, Bonta described an influx of reports of sexually explicit material involving women and children as alarming. He said the material had been used to harass people online and urged xAI to take immediate corrective action.
Response From xAI And Elon Musk
xAI has previously stated that users who prompt Grok to produce illegal content face the same consequences as those who upload illegal material to platforms.
On Wednesday, Elon Musk said on X that he was not aware of any nude images of minors generated by Grok. He said the model does not generate images independently and responds only to user requests.
Musk has also argued that criticism of X and Grok is politically motivated, describing the controversy as an attempt to justify censorship. Musk is a major donor to the Republican Party.
Political Reactions In California
California Governor Gavin Newsom criticised xAI’s approach in a post on X, calling the decision to host such material unacceptable.
Pressure From US Lawmakers And Platform Changes
Last week, three Democratic US senators asked Apple and Google to remove X and Grok from their app stores. Within hours of that request, X restricted its image-generation tools to paying subscribers.
X and Grok remain available on Apple’s App Store and Google Play.
Legal Debate Over Platform Liability
The investigation has renewed debate over whether US technology companies are shielded from liability for AI-generated content. Section 230 of the Communications Decency Act of 1996 grants online platforms immunity from responsibility for user-generated content.
Professor James Grimmelmann of Cornell University said the law does not cover content produced by platforms themselves. He said xAI's argument that users are responsible for the images may not withstand legal scrutiny, since it is the AI system that generates them.
US Senator Ron Wyden, who co-authored Section 230, has also said the law does not apply to AI-generated images. He said companies should be fully accountable for such content and welcomed California's investigation. Wyden was among the senators who urged Apple and Google to remove X and Grok from their app stores.
International Context And UK Action
The California probe comes as the United Kingdom moves toward legislation that would make the creation of non-consensual intimate images illegal. The UK media regulator Ofcom has also launched its own investigation into Grok.
If Ofcom determines that X has violated UK law, it can impose fines of up to 10 percent of the platform’s global revenue or £18 million, whichever is higher.
Earlier this week, UK Prime Minister Sir Keir Starmer said X could lose the right to self-regulate if it fails to control Grok’s image generation capabilities, adding that authorities would intervene if the platform does not address the issue.
