
The LENS Journey #2: Clinical Safety Is a Team Sport

  • 1 day ago
  • 3 min read


One of the things that has become really obvious while building LENS is that AI governance cannot realistically sit with one person.


We’ve spent the last few months talking to Clinical Safety Officers, DPOs, digital leads, GP practices and suppliers, and the recurring theme is that the amount of coordination involved is huge.


People are trying to pull together supplier information, governance concerns, workflow understanding, technical limitations, clinical risks, operational pressures and deployment decisions, often through PDFs, spreadsheets, meetings and long email chains.

And when AI is involved, it gets even more complicated.


This is because the risks are not just technical but operational. They are human, workflow-based and sometimes even psychological. And they can depend entirely on how the system is actually being used in practice.


What’s become really clear to us is that no single person realistically holds all of that context, and we really don’t think they should be expected to.


Everyone Sees a Different Part of the Picture

One of the things we’ve been reflecting on this week is how differently people look at the same system.


The supplier might be thinking about:

  • model testing

  • edge cases

  • performance metrics

  • updates

  • confidence scores

  • known limitations


The commissioner is thinking:

  • why are we introducing this system?

  • what risk are we trying to reduce?

  • what controls do we need?

  • how do we deploy this safely?


Meanwhile the GP practice is thinking:

  • how does this actually fit into workflow?

  • what happens when things get busy?

  • what if staff rely on it too much?

  • what happens when something looks wrong?


And then DPO conversations often go somewhere completely different again:

  • transparency

  • supplier accountability

  • incident management

  • data sharing

  • patient expectations

  • governance responsibilities


The more conversations we have, the more convinced we are that good AI governance has to be collaborative, because practically speaking, nobody sees the whole picture alone.


The Hazard Log Started Feeling Different

One unexpected thing that has happened during the LENS development is that we’ve stopped thinking about the hazard log as just a document.


It’s started feeling more like a connected working space where:


  • suppliers contribute controls and evidence

  • commissioners apply contextual safeguards

  • practices identify operational realities

  • governance teams raise concerns

  • incidents and learning feed back in over time


That feels very different from the traditional approach where one person is often expected to coordinate or “complete” the assessment and hold everything together manually.


We’re not saying LENS solves all of that overnight.


But we are trying to create something that makes the process feel more connected and less isolating for the people carrying the responsibility.


Pulling Together the Pilot Team

We’re at quite an exciting stage now because we’re starting to bring together the people who will properly shape the pilot.


For the pilot, we’re gathering:

  • Clinical Safety Officers

  • DPOs

  • AI suppliers

  • GP practices

  • operational and governance stakeholders


and honestly, we’re really looking forward to seeing the conversations that emerge because we know everyone is going to spot different things.

That’s exactly the collaborative environment we want to create.


We’re Learning As We Go

One thing we’ve tried to be open about throughout this project is that we’re learning too.

Some of the best ideas so far have come from conversations during demos where somebody has said: “Have you thought about…?”


And suddenly we’re adding:

  • new hazards

  • new governance prompts

  • change management ideas

  • operational controls

  • incident pathways


because somebody from a completely different perspective saw something we hadn’t.


That’s probably the biggest thing we’re taking away from this stage of the journey:

AI governance works much better when people build the picture together.

And that’s really what we hope the LENS pilot becomes.


Emma Kitcher, Lens Founder and AI Privacy Nerd

