Why Your AI Supplier Won't Explain
- Kafico Ltd

Opacity in your supply chain isn’t always about the vendor having something to hide. Opacity has many faces: some calculated, some careless, and some very ordinary. When we buy or inherit algorithmic systems, we often find that AI vendors speak the language of the sales deck, and sometimes of algorithmic snake oil.
Yet when DPOs ask how the model actually works, which models are in the pipeline and where they come from, the conversation often stops at “commercial sensitivity” or, worse, confused silence. IP protection has become a ready-made defence, not always cynical but always convenient. The claim that transparency would “compromise the system” usually goes unchallenged, partly because buyers sometimes lack the technical literacy or the confidence to challenge it.
This is how opacity becomes embedded in governance itself. In my previous research work, I discussed several shades of it — intentional, intrinsic, illiterate, even neutral opacity, where accountability is displaced rather than denied. It’s the kind of opacity that seeps in when responsibility for understanding is nobody’s explicit job.
For DPOs, this poses a deep structural problem. How do you assess compliance when the organisation itself doesn’t understand the system it’s using? The instinct is to rely on the vendor’s assurances: that the model meets GDPR standards, that Article 22 doesn’t apply because the decision is “advisory,” that bias has been “tested and mitigated.”
Why Resist Transparency?
- Perceived uniqueness: Admitting the product is stitched together from existing models undermines claims to proprietary IP.
- Contracts: Licences and NDAs forbid naming components.
- Pricing: Transparency reveals cheap or open-source inputs, weakening negotiating leverage.
- Liability: Opacity deflects blame for bias, safety, or data issues.
- Operational reality: The vendor may not even track which models are in the stack.
What More Can We Do?
The UK faces an acute digital skills gap, especially in roles that demand an understanding of algorithms, coding, and data architecture.
The people who operate automated systems are rarely those who built them, and those charged with oversight often lack the technical language to challenge their design or outputs. The result is tokenistic human oversight and a quiet deference to automated or supplier judgment.
That gap matters because the safeguard of “human intervention” only works when the human is competent and empowered. The law assumes a level of comprehension and authority that, in practice, many buyers and operators do not possess. The risk is that assessment and subsequent oversight become procedural, not substantive — a box ticked rather than a dialogue of accountability.
Confidence is the missing piece. “Automation bias” is well documented: staff tend to defer to machine outputs even when they sense something is wrong. It is difficult to challenge an algorithmic decision when the system has been framed as scientific, neutral, or authorised by law.
Closing the skills gap, then, is not simply about training courses. It is about changing how authority is distributed within organisations. Meaningful oversight depends on confidence as much as competence — on the understanding that DPOs and other governance professionals are not overstepping when they ask difficult questions.
A better model would pair DPOs with technically literate staff who can interrogate claims about training data, explainability, and system accuracy. That small investment in literacy can redraw the power balance between authorities and vendors. It can help DPOs move from passive recipients of assurances to active interpreters of risk.




