DeepSeek AI Model Praised But Security Flaws Raise Concerns

Image by Solen Feyissa, from Unsplash


The AI model R1 from DeepSeek impresses with low-cost reasoning skills, yet some researchers argue that it produces dangerous code outputs for politically sensitive regions.

In a rush? Here are the quick facts:

  • DeepSeek’s R1 AI model was trained for just $294,000.
  • R1 excels at reasoning tasks like math and coding.
  • CrowdStrike found DeepSeek produced unsafe code for politically sensitive groups.


The U.S. stock market experienced a major disruption when the R1 model became available to the public in January. Scientific American (Sci Am) reports that the first peer-reviewed study of R1 was published in Nature this week.

The research reported that R1 received training at a budget cost of $294,000 while competitors spent tens of millions of dollars.

“This is a very welcome precedent,” said Lewis Tunstall of Hugging Face, who reviewed the paper. Ohio State University’s Huan Sun agreed, saying, “Going through a rigorous peer-review process certainly helps verify the validity and usefulness of the model,” reported Sci Am.

DeepSeek says R1 excels at “reasoning” tasks like math and coding by using reinforcement learning, a process that rewards the system for solving problems on its own.
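To illustrate the idea in the plainest terms, here is a minimal sketch of a reward-driven learning loop: a system tries answers to a toy problem, receives a reward only when it solves the problem on its own, and gradually prefers the answers that earn reward. The toy task, function names, and parameters are illustrative assumptions, not DeepSeek's actual training setup.

```python
import random

def reward(answer: int, target: int) -> float:
    # Reward signal: 1.0 when the answer solves the problem, else 0.0.
    return 1.0 if answer == target else 0.0

def train(steps: int = 2000, seed: int = 0) -> dict:
    """Learn preferences over candidate answers purely from reward."""
    rng = random.Random(seed)
    target = 7                             # the toy problem's correct answer
    values = {a: 0.0 for a in range(10)}   # estimated value of each answer
    lr = 0.1                               # learning rate
    for _ in range(steps):
        # Explore a random answer 20% of the time; otherwise exploit
        # the answer currently believed to be best.
        if rng.random() < 0.2:
            a = rng.randrange(10)
        else:
            a = max(values, key=values.get)
        r = reward(a, target)
        values[a] += lr * (r - values[a])  # nudge estimate toward reward
    return values

prefs = train()
best = max(prefs, key=prefs.get)  # the answer the system learned to prefer
```

After enough trials, the highest-valued answer is the one that was rewarded, without the system ever being shown the solution directly; that is the core of the "rewards for solving problems on its own" idea.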

But alongside the praise, the U.S. security firm CrowdStrike has flagged security issues, as reported by The Washington Post.

The testing revealed that DeepSeek produced less secure, or even harmful, code when users requested it for groups and regions that China opposes, such as Tibet, Taiwan, and the banned spiritual movement Falun Gong.

When DeepSeek was asked to generate code for the Islamic State, 42.1 percent of its answers were unsafe: even when the model complied, the code it produced often contained security flaws that left systems vulnerable to hacking.

Experts warn that deliberately flawed code is subtler than back doors but equally risky, potentially enabling unauthorized access or manipulation of critical systems.
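To see why such a flaw is subtler than a back door, consider this hypothetical sketch of a common pattern: a database lookup that works correctly for normal input but quietly permits SQL injection. The table, queries, and payload are illustrative assumptions, not code produced by DeepSeek.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Subtle flaw: user input is interpolated directly into the SQL text,
    # so a crafted name like "x' OR '1'='1" matches every row.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Fix: a parameterized query treats the input as data, not as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

# Demo on an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
leaked = find_user_unsafe(conn, payload)  # matches every user
safe = find_user_safe(conn, payload)      # matches no user
```

Both functions behave identically on ordinary names, which is exactly what makes the flawed version hard to spot in review; only a hostile input exposes the difference.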

“This is a really interesting finding,” said Helen Toner of Georgetown University, as reported by The Post. “That is something people have worried about — largely without evidence.” CrowdStrike warned that inserting flaws may make targets easier to hack.

The Post says that DeepSeek did not respond to requests for comment. Despite growing recognition of its technical achievements, the company now faces tough questions about whether politics influences the safety of its code.
