The LENS Journey #1: Why We’re Piloting a New Approach to AI Governance in Healthcare

After more than a year of development, testing, discussion, and redesign, we’re now approaching go-live for our NHS pilot of LENS (previously CleanAI), a platform designed to support practical AI governance and clinical safety oversight in healthcare settings.
LENS stands for Lawful, Explainable, Necessary and Safe.
The pilot is focused on a question that feels increasingly important across healthcare:
How do organisations safely understand, assess, and oversee AI systems in day-to-day practice?
Over the last couple of years, AI adoption across healthcare has accelerated rapidly. From ambient voice technologies and clinical documentation tools through to decision-support systems and workflow automation, organisations are being presented with more and more AI products.
But many teams are still working out how to assess these systems consistently and safely in practice.
One of the biggest themes we’ve seen during development is that when organisations adopt AI systems, they often replace one operational risk with another, without always having a clear way to judge whether the new risk profile is actually safer.
For example, an AI system might:
- reduce clinician workload
- improve consistency
- save time

while also introducing new risks such as:
- over-reliance on outputs
- missed information
- incorrect summaries
- reduced visibility of errors
- uncertainty around when human review is needed
The challenge is not simply deciding whether AI is “good” or “bad”.
The real challenge is understanding:
- what a system actually does
- how it fits into clinical workflows
- where human oversight is needed
- what limitations exist
- how risks should be managed over time
One of the reasons we developed LENS is that many current approaches seem to sit at one of two extremes.
Some are highly technical and difficult for operational teams to apply in practice.
Others rely heavily on static documents and checklists, without giving enough visibility into how systems actually behave in real-world settings.
We wanted to explore whether there was a more practical middle ground, something that supports:
- clinical safety
- governance
- transparency
- accountability

while still being usable for commissioners, providers, practices, and frontline teams.
A key part of the pilot involves understanding AI systems through their actual capabilities and workflows.
Not all AI systems behave the same way. Not all systems fail in the same way. And not all systems need the same level of oversight.
For example:
- a speech-to-text system creates different risks from a generative summarisation tool
- a triage support system carries different clinical considerations from a workflow automation tool
Understanding those differences is the core premise of LENS.
One of the most interesting early observations has been around supplier transparency.
In several conversations, suppliers have seemed surprised to be asked for:
- real-world testing evidence
- performance information
- known limitations
- details about how their systems behave in practice
That in itself has been quite revealing.
It suggests that while AI adoption is moving quickly, practical governance and assurance processes are still catching up.
The aim of this pilot is not to slow innovation down.
If anything, the opposite.
The hope is that more practical, proportionate, and consistent approaches to governance will help organisations adopt AI more safely and confidently over time.
Over the coming months, I’ll be sharing more reflections from the pilot, including lessons around:
- clinical safety
- human oversight
- supplier assurance
- workflow visibility
- operational risk
- real-world implementation
We are still early in this journey, and there will undoubtedly be things that work well and things that need to evolve.
But it feels like an important conversation to start having openly.