
DeepSeek: Is It Safe for UK Organisations?



The world of Artificial Intelligence (AI) moves fast, but sometimes it pays to slow down and ask the awkward questions. One of the most talked-about new players is DeepSeek, an open-source Chinese Large Language Model (LLM) that’s generating excitement... and a fair bit of unease among security professionals.


What is DeepSeek and Why Is It Making Waves?


DeepSeek is a high-powered LLM developed by a Hangzhou-based team in China, making headlines in the AI world for two big reasons:

  1. Performance: It benchmarks close to top models like GPT-4.x, at a fraction of the hardware cost.

  2. Open-Source: Its code is public, which has turbocharged global adoption among researchers and developers.

On paper, it looks like a dream come true: affordable, scalable AI without vendor lock-in. It’s already being picked up by universities, start-ups, and even some larger firms looking for an alternative to Western AI products.


But Here’s the Catch: Who’s Really in Control?


If you’re reading this blog post, you may already be asking: “Is this just another cool free tool, or are there real governance risks here?” Perhaps you’ve also asked: “What happens to my data once I’ve handed it over to the people who made this AI?”


Here’s where things get uncomfortable:


Understanding State-Controlled LLMs

State-controlled or state-sponsored LLMs are AI systems developed under the direct influence or funding of a government.


Stop and think about that for a second... They’ve invested billions into this technology, and they’re going to want that return on investment - you’d best believe it.


These models may be intended to:

  • Advance the national security interests of the host nation.

  • Promote state narratives domestically and internationally.

  • Gather and analyse vast amounts of data from users.


Unlike independent or commercially-driven LLMs, these tools can have hidden objectives aligned with the sponsoring state’s interests. Back to DeepSeek:

  • State Influence: DeepSeek operates in a regulatory environment where government oversight is the norm, not the exception. Chinese tech firms are expected to align with state policy, sometimes overtly, sometimes less so.


  • Censorship in the Model: Reports show that DeepSeek actively censors queries related to politically sensitive topics (Tiananmen Square, Taiwan, etc.) and can exhibit “hidden” behaviours designed to comply with Chinese law.


  • Data Sovereignty: Hosting, or even just running, an LLM that is subject to potential state access or mandatory data-handover rules raises red flags for any UK-based organisation, especially those handling sensitive data or working with the NHS or MOD.


Why It Matters for UK Organisations


  • Compliance Risks: The UK GDPR, NHS DSP Toolkit, and MOD supplier requirements all hinge on maintaining control over where data lives, who can see it, and how algorithms make decisions. DeepSeek’s provenance could complicate DPIAs, procurement, or contracts, especially if you can’t guarantee data won’t leak or be monitored.

  • Ethics & Trust If a tool’s behaviour can be quietly shaped by external actors, how do you explain its outputs to patients, clients, or regulators? Transparency and explainability are hard enough in AI; adding state-driven opaqueness is another layer of risk.

  • Supply Chain Security: The MOD and other critical infrastructure sectors now treat AI as part of the supply chain. Any software, especially from high-risk jurisdictions, needs to be thoroughly vetted for security and trustworthiness.


Practical Advice: Eyes Wide Open


Here’s what I’d recommend to any UK company eyeing DeepSeek (or similar models):

  1. Think: Do you want your intellectual property or sensitive personal data in ANY LLM, regardless of the company, jurisdiction, state sponsor, or country...

    (Spoiler: The answer needs to be "NO".)

  2. Due Diligence: Treat this like any third-party software. Who built it? Where is it hosted? What legal and technical controls are in place? What data are we putting into this system?

  3. DPIA/AI Risk Assessment: Go beyond standard templates and explicitly address the potential for state interference or undisclosed censorship.

  4. Technical Safeguards: Run models in isolated, sandboxed environments, and never process live personal data until you’re satisfied with the results of security testing (see the sketch after this list).

  5. Stay Updated: The regulatory landscape is shifting rapidly - both in China and the West. What’s compliant today might not be tomorrow.
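
To make point 4 a little more concrete, here’s a minimal sketch of what “sandboxed, no live personal data” can look like in practice. It assumes you’ve pulled a DeepSeek model into a self-hosted runner such as Ollama (which serves a local HTTP API on port 11434 by default) on a machine with no outbound internet access; the endpoint, model tag, and test prompt below are illustrative assumptions, not a recommendation of any particular stack.

```python
import requests

# Illustrative local endpoint - an assumption; adjust for your own runner.
# The point is that the model is self-hosted and network-isolated, so prompts
# never leave your environment.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"
MODEL_NAME = "deepseek-r1"  # hypothetical local model tag

# Synthetic test prompt only - no live personal data, no client identifiers,
# nothing you would mind turning up in someone else's training set.
SYNTHETIC_PROMPT = (
    "Summarise the key themes of ISO 27001 Annex A in three bullet points."
)


def run_sandboxed_test() -> str:
    """Send a synthetic prompt to a locally hosted model and return its reply."""
    response = requests.post(
        LOCAL_ENDPOINT,
        json={"model": MODEL_NAME, "prompt": SYNTHETIC_PROMPT, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json().get("response", "")


if __name__ == "__main__":
    print(run_sandboxed_test())
```

The exact tooling matters far less than the principle: keep the model inside a boundary you control, and only ever feed it synthetic or non-sensitive data until your own security testing says otherwise.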


If you want pragmatic support navigating AI compliance and security, or just want a second pair of eyes on your supplier list, get in touch. Better to ask the hard questions now than answer them when the ICO comes knocking.


Jeff Pullen, Information Security & Data Protection Consultant

 
 
 

