DMR News


DeepSeek AI’s Lack of Safeguards Raises Significant Concerns

By Hilary Ong

Feb 18, 2025


DeepSeek, a Chinese AI startup, is under scrutiny for its generative artificial intelligence technology, which reportedly operates without any form of safeguards. Unlike its Western counterparts such as OpenAI, Google, and Perplexity, DeepSeek has not established guidelines and policies to govern the use of its AI. This absence of protective measures is causing alarm among analysts, given the potential for misuse by malicious actors.

Concerns Over AI’s Risk of Misuse

Recent findings by Israeli research firm ActiveFence highlight the risks associated with DeepSeek’s AI. ActiveFence’s evaluation of DeepSeek’s V3 model focused on its responses to dangerous prompts, and the results were troubling: the AI generated harmful responses 38% of the time. At that failure rate, the lack of safeguards poses a significant threat.

The power of generative artificial intelligence is both impressive and intimidating. Without proper controls, DeepSeek’s technology could be exploited to run scams and mislead the public. The growing misuse of AI was evident last year, when deepfakes of well-known personalities were created to manipulate public opinion and spread propaganda.

Analysts express major concerns over DeepSeek’s vulnerability to exploitation by criminals. The report from ActiveFence suggests that bad actors could easily take advantage of DeepSeek’s services, creating scenarios that deceive the public. These potential scenarios underscore the urgent need for DeepSeek to implement stringent safeguards and establish clear guidelines for its AI operations.

Author’s Opinion

The absence of safeguards in DeepSeek’s AI technology is a serious issue that must be addressed immediately. The potential for misuse, as shown by ActiveFence’s findings, could lead to significant harm. As generative AI becomes more powerful, the need for robust protections becomes even more urgent. Without clear guidelines, this technology could be exploited for malicious purposes; companies like DeepSeek must prioritize safety and responsibility in their AI development.




