
Florida Attorney General James Uthmeier announced on Thursday that his office will investigate OpenAI over alleged harm to minors, potential national security threats, and a possible connection between ChatGPT and a shooting at Florida State University last year. The probe adds to mounting scrutiny of artificial intelligence platforms and their societal impact.
Investigation Tied To Florida State University Shooting
Uthmeier stated in a video posted to social media that the suspect in the April shooting at Florida State University, which left two people dead, may have used ChatGPT.
“ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” he said.
According to the attorney general, the suspect allegedly asked ChatGPT how the country would react to a shooting at FSU and what time the FSU student union would be busiest. Authorities indicated that these messages could be introduced as evidence at an October trial related to the incident.
Concerns Raised Over Child Safety And National Security
The investigation also cites broader concerns about ChatGPT’s potential risks. Uthmeier referenced lawsuits filed by families against OpenAI that allege the chatbot encouraged suicide in certain instances. He further expressed concern that the Chinese Communist Party could exploit OpenAI’s technology against the United States.
“As big tech rolls out these technologies, they should not — they cannot — put our safety and security at risk,” Uthmeier said. “We support innovation. But that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies, or threaten our national security.”
He also called on the Florida legislature to “work quickly” to enact measures that protect children from the potential negative impacts of artificial intelligence.
OpenAI Responds And Pledges Cooperation
OpenAI stated that it will cooperate with the investigation. In a statement to TechCrunch, an OpenAI spokesperson emphasized the widespread benefits of ChatGPT and the company’s ongoing safety efforts.
“Each week, more than 900 million people use ChatGPT to improve their daily lives through uses such as learning new skills or navigating complex healthcare systems,” the spokesperson said. “Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery.”
The company added that it continues to refine ChatGPT to better understand user intent and provide safe and appropriate responses.
OpenAI Introduces Child Safety Blueprint
On Wednesday, OpenAI unveiled its Child Safety Blueprint, outlining policy recommendations intended to improve protections for children in the age of artificial intelligence. The initiative includes proposals to update legislation addressing AI-related risks and strengthen safeguards against misuse.
The blueprint recommends refining reporting processes for law enforcement, enhancing preventative measures against abusive uses of AI tools, and updating regulations to address AI-generated harmful content.
Rising Pressure Over AI-Generated Abuse Material
The investigation comes as chatbot developers face increasing pressure to address concerns about the creation of child sexual abuse material (CSAM). A recent report from the Internet Watch Foundation recorded more than 8,000 reports of AI-generated CSAM in the first half of 2025, representing a 14% increase year over year.
These developments underscore the regulatory challenges confronting AI companies as governments and advocacy groups push for stronger oversight and safeguards.
Featured image credits: Wikimedia Commons
