
Is AI Consent putting your GP practice at risk?


Don't use the word consent; it's misleading

AI in healthcare is powerful—and different. It’s new to many patients, and care settings come with an inherent power imbalance: people are unwell, worried, time-pressed, and reliant on clinicians. Because we cannot rely on consent as the lawful basis (the Information Commissioner has confirmed this to us), the ethical—and practical—answer is more transparency, earlier, and in layers.


Putting it bluntly: it is not acceptable to spring a one-line disclosure at the start of the consultation—“I’m using AI, is that ok?”. Patients must be informed before they arrive, reminded when they arrive, and supported as care begins. That is better for patients, clinicians, and the system.


What “multi-layered” actually means

Layered transparency is about giving the right amount of information at the right time, in the right place, without overwhelming people.


  1. Before the appointment (essential)

    • Appointment invitation/booking: Add a clear notice to letters, SMS, and online booking confirmations that an AI system may support care.

    • Purpose: so patients have time to read, understand, and raise objections or ask questions ahead of time.


  2. On your website (always-on)

    • A dedicated page (“How we use AI in your care”) with plain-language detail: what the AI does, why it’s used, benefits and risks in context, human oversight, data use, opt-out routes, and how to get help.


  3. On-site signage (ambient reminder)

    • Posters in waiting areas and corridors, and a small desk sign at reception and clinical desks to reinforce the message and point to the website and leaflets.


  4. At the appointment start (just-in-time)

    • A short, humane reminder that AI support is in use (like a desk sign), with a simple invitation to ask questions or raise concerns—not a perfunctory “ok?”.


This layered approach respects patient vulnerability, reduces surprise, and builds understanding without putting pressure on the patient in the first minute of a clinical interaction.


Why consent isn’t the answer here

Because of the power imbalance in clinical settings, data protection “consent” is rarely freely given. That’s why most healthcare AI use rests on an appropriate lawful basis other than consent. When consent isn’t the basis, the duty to inform and empower becomes even more important. This is how we earn trust and protect patient agency.


Minimum standard (what “good” looks like)

  • Tell people when you schedule them. Put a clear line in the appointment letter/SMS. Don’t wait until they’re in the room.

  • Signpost everywhere. Website page + posters + a desk sign at reception and in clinics.

  • Say it again, simply, face-to-face. Start of the consultation: a short, calm reminder and an explicit chance to ask questions.

  • Make it actionable. Every notice should include: what the AI does, who’s responsible (a human), how to opt out or object, and where to get more detail.

  • Use plain language. Aim for a reading age ~11–12; keep sentences short; avoid jargon.

  • Give time. Patients must have enough time to read, understand, and raise objections before the tool is used.


What not to do (common pitfalls)

  • Don’t rely on a last-minute “I’m using AI—ok?” verbal question. That is not meaningful transparency.

  • Don’t bury information in a privacy policy only. Policies are necessary, not sufficient.

  • Don’t overload the clinician’s opening minute with a legal script. Keep it clear and kind; the detail lives on your leaflet/website.

  • Don’t make objection routes hard to find. A direct phone number/email/web form should be obvious.

  • Don't use the word consent. This is misleading and can place the practice at risk.


Possible Content

Appointment letter / SMS (booking confirmation)

“We use AI Note Takers. Questions or objections? More: [link] | [phone]”

Clinician opening line

“Before we begin, you may see me using a tool that helps summarise information—it’s an AI assistant. I remain fully responsible for your care. If you’d rather we don’t use it, or you have questions, please tell me now or anytime.”

Building blocks for compliance and trust

  • Purpose clarity: Name the task (e.g., triage support, summarisation), not vague “AI use.”

  • Human oversight: Explicitly state that clinicians review and remain accountable.

  • Risk honesty: Acknowledge limits (e.g., may miss context) and how you mitigate them.

  • Data transparency: What data is used, retention, and suppliers—kept simple and linked to fuller details.

  • Objection route: Clear, quick, and documented; ensure no detriment to care for raising concerns.

  • Assurance signals: Reference clinical safety, DPIAs, and governance in accessible terms; link to summaries.

Emma Kitcher, Privacy and AI Nerd

 
 
 

