How do I know if an AI tool is safe to use?
- Kafico Ltd
- Oct 24
- 3 min read

I find that when small organisations start exploring AI (whether to automate recruitment, triage enquiries, or analyse customer data), the conversation usually begins with excitement.
New efficiencies, lower costs, and smarter insights are all on the table. But the more important conversation, the one that rarely takes place early enough, is about safety. Not safety in the technical sense of cybersecurity, but in the broader human sense: is this system safe to use on the people you serve?
From the outside, AI safety can seem like something only regulators, lawyers, or computer scientists can assess. But safety is also cultural: it depends on whether your organisation knows what it is buying, can explain how it works, and accepts accountability for what follows.
In my own research on automated decision-making systems in UK public authorities, the biggest risks didn’t come from malevolence or malfunction, but from indifference. Public bodies deployed tools they didn’t fully understand, outsourced decision logic to opaque systems, and left citizens without any meaningful way to challenge the outcomes. That same dynamic can quietly take hold in any organisation when convenience and cost savings eclipse scrutiny.
A safe AI tool is one that can be understood, explained, and challenged. Anything less is a black box with your organisation’s name on the front.
The illusion of oversight
Many vendors promise "human-in-the-loop" systems: reassuring language that implies control. In practice, though, the human often has neither the authority nor the competence to intervene. Oversight that isn't informed is not oversight; it's tokenistic. Before adopting any AI tool, make sure someone in your organisation can meaningfully question its outputs and that they are given 'permission' to challenge it (Horizon was not an AI system, but the scandal offers clear lessons about computer bias, the tendency to assume the machine is right). That might mean upskilling a staff member or partnering with an external expert, but it cannot mean blind trust.
Transparency isn’t a luxury
Opacity is not just a technical problem; it's often a cultural one. In the public sector, I found that it frequently stemmed from what I called "neutral opacity": a kind of bureaucratic indifference where nobody meant to hide information, yet nobody took responsibility for making it understandable. If a supplier cannot clearly describe how their model uses data, what decisions it influences, and how errors are corrected, don't assume that's acceptable complexity. Assume it's a sign of weak governance.
Practical ways to assess AI safety
Here are some ways small organisations can build safety into their decision-making before deploying AI:
1. Ask about explainability. Can the vendor describe, in plain language, how inputs lead to outputs? Ask for a clear account of the model's purpose, logic, and limitations. If they talk only in abstractions about "learning from data," that's not an explanation; it's evasion.
2. Check for bias testing. Request evidence of fairness audits or testing on diverse datasets. If the vendor hasn't tested for discrimination, you'll inherit the liability when harm occurs. If they aren't testing, you must test before and during deployment (a minimal sketch of one such check follows this list).
3. Clarify accountability. Who is legally and ethically responsible when the system makes a mistake: you, the developer, or both? Put it in writing. Shared accountability should be explicit, not assumed.
4. Demand data transparency. Know exactly what data the system will access, how long it will be stored, and whether it’s shared with third parties. Avoid “improvement clauses” that let vendors reuse your data for model training without an established lawful basis.
5. Test for human competence, not just presence. Identify who in your organisation will review automated outputs. Do they have the authority and digital literacy to challenge them? If not, invest in training before deployment.
6. Pilot before you scale. Run small, contained trials and review outcomes with real users or clients. Pay attention to anomalies and complaints; they often reveal hidden biases or unintended effects.
7. Publish what you learn. Even if you're not legally required to, share your impact assessments and lessons learned. Transparency isn't only an ethical act; it's a trust-building one.
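
To make the bias-testing point concrete, here is a minimal sketch of the kind of ongoing check an organisation could run itself on a sample of the tool's decisions. The column names, the 0.8 threshold (a rough "four-fifths" rule of thumb), and the use of pandas are illustrative assumptions, not a prescription, and this is no substitute for a proper fairness audit.

```python
# A minimal sketch of a recurring bias check, assuming you can export a sample
# of the tool's decisions with the outcome and a relevant demographic attribute.
# Column names and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def selection_rate_gap(decisions: pd.DataFrame,
                       group_col: str = "group",
                       outcome_col: str = "selected",
                       threshold: float = 0.8) -> pd.DataFrame:
    """Compare each group's selection rate against the most-favoured group."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()      # 1.0 means parity with the most-favoured group
    flagged = ratios < threshold      # flag groups falling below the chosen threshold
    return pd.DataFrame({
        "selection_rate": rates,
        "ratio_vs_best": ratios,
        "review_needed": flagged,
    })

# Example: re-run this on each month's decisions and investigate any flagged group.
sample = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "selected": [1, 1, 0, 1, 0, 0, 0],
})
print(selection_rate_gap(sample))
```

A flagged group does not prove discrimination on its own, but it is exactly the kind of anomaly that should trigger human review rather than quiet acceptance.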
A culture, not a checklist
Checking compliance boxes will not make your organisation safe. Safety grows from a culture that treats technology as a human system, one that can fail, discriminate, or erode autonomy if left unexamined. That culture starts with curiosity and humility: asking how a model reaches its decisions, who might be disadvantaged by them, and how you’ll know if things go wrong.
AI doesn’t remove your responsibility to act ethically; it amplifies it. The safest tools are the ones that keep that responsibility visible.
Want to find out more about our compliant AI platform, CleanAI? It helps you build or buy transparent AI systems with confidence.



