
The Opportunities, Red Flags and Reality of AI in Primary Care

Updated: Oct 10


AI in primary care is no longer optional. It is already being built into the systems GP surgeries are expected to use daily. And when things go wrong, as they already have in multiple NHS organisations, the responsibility does not fall on the suppliers; it falls on you.


The potential is extraordinary. When done well, AI can cut administrative burden, free up clinicians’ time, improve access for patients, and even spot risks earlier than a human could. These are benefits we cannot afford to ignore. At the same time, we cannot turn a blind eye to “AI Washing” where suppliers rebrand ordinary software as “AI-powered” to justify higher costs while hiding real risks.


What if, in chasing efficiency, we lose sight of our duty of care, our control, our accountability, our budget?

“A misjudged procurement decision can mean not only ICO fines or legal action, but also sleepless nights explaining what went wrong to partners, staff, and patients.”

We’re running a 5-hour workshop on exactly these risks. Places are limited. Details at the bottom of this article.

Across the NHS, the drive to digitise has outpaced our ability to fully understand what we are adopting. Tools arrive with glossy assurances, but the reality on the ground often feels different. Practice managers are being asked to commission systems they cannot interrogate and to sign off on contracts written in technical and legal language that obscures more than it clarifies.




In my research on automated decision-making in public authorities, I found this pattern recurring. A system designed to make life simpler ends up quietly redistributing responsibility. The machine becomes the decision-maker, the clinician becomes the executor, and the patient becomes the data point. Behind this dynamic lies what I called a “culture of indifference,” a learned comfort with opacity. We accept that the technology works, even when we cannot quite say how.


GP practices sit in a particularly fragile position. They are expected to lead the shift to digital care while also safeguarding some of the most sensitive personal data in the public sector. When things go wrong, as several NHS Trusts have discovered, the fallout can be severe: ICO investigations, CQC scrutiny, and reputational damage that undermines years of patient trust. Red flags such as unexplained data sharing, “black box” triage tools, and supplier-led assurances are too easily missed, and too often ignored until it is too late.


This is what my workshop is designed to address. It is not a technical deep dive. It is a space to foster understanding and confidence. We will unpack what AI really means for general practice: the systems already in use, the pressures driving adoption, and the gaps in oversight that leave surgeries exposed. We will look at NHS examples where automation went wrong, the impact, and the lessons.

“The session was so helpful and I now feel able to properly scrutinise AI systems to prevent us wasting time and money on systems that will not deliver.” Practice manager feedback

Most importantly, we will focus on human oversight and the patient’s right to question a decision, to receive an explanation, and to obtain human intervention. Article 22 of the GDPR protects individuals from being subject to purely automated decisions, but in public services those safeguards often weaken when processing is “authorised by law.” In practice, that means the burden of accountability shifts back to people like us: the managers, partners, and IG leads who sign off on procurement and governance decisions.


If there is a single message I hope participants take away, it is this: AI in healthcare is both an opportunity and a risk. It is not just about new technology, it is also a change in culture. It tests our assumptions about trust, expertise, and responsibility. Only by understanding the systems and the values behind them can we protect our patients and ourselves.


📅 10am–3.30pm, Wednesday 26th November

📌 Ipswich, Suffolk.


Early bird: £500 + VAT (October). Full price: £650 (Nov–Dec).


⚠️ Places are limited and we expect this session to sell out quickly.


Agenda


AI in Healthcare: Avoiding Breaches, Fines, and Loss of Trust

  1. AI in the NHS today: what’s already here

  2. The hidden risks: fines, breaches, and reputational fallout

  3. The legal landscape: GDPR, ICO, CQC, and where practices are most exposed

  4. Accountability: who regulators hold responsible (and why)

  5. Privacy and security: protecting your patients and your practice

  6. Red flags in procurement: spotting unsafe systems before you sign

  7. Human oversight and Article 22 rights: what “in control” really means

  8. Managing the use of ChatGPT and Copilot

  9. Practical tools: Impact Assessments and onboarding checklists

  10. Protecting your surgery: the role of Practice Managers as the last line of defence


If your practice wants to join, please book using this link.

