
AI Laundering: Hiding AI in Supplier Pipelines


If AI washing is about suppliers exaggerating their use of artificial intelligence, AI laundering is the opposite problem. It happens when suppliers deliberately disguise, downplay, or hide the presence of AI in their systems.


Why would they do this? Because AI is increasingly associated with risk, regulation, and scrutiny. By presenting an AI-driven system as simple automation or “just analytics,” suppliers try to bypass difficult questions about accountability, fairness, or compliance and ship their product as quickly as possible.


The Challenge of Defining AI

Part of the problem is that even defining what counts as AI isn’t straightforward. Many people think AI only applies when systems “learn”: for example, when models are trained on data and adapt over time.


But under the EU AI Act, the definition is much broader. It describes an AI system as any machine-based system that operates with some level of autonomy, may or may not adapt after deployment, and generates outputs (such as predictions, recommendations, or decisions) that can influence physical or virtual environments.


This clearly includes what some researchers call “good old-fashioned AI” (GOFAI): rule-based systems, expert systems, and deterministic decision trees. These may look simple compared to modern machine learning, but they still fall under AI regulation if they influence human decisions or automate critical processes.


Why is that? Because the regulation takes a purpose-driven view: it isn’t about how advanced the technology looks, but about its potential for harm, discrimination, or social consequence. Even simple automated or rule-based systems can produce skewed or unfair outcomes.
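To make this concrete, here is a minimal, hypothetical sketch (the rules, postcodes, and function name are invented for illustration, not taken from any real supplier system). It uses no machine learning at all, yet it generates recommendations that influence decisions about people, and its hard-coded rules can still skew outcomes:

    def pre_screen_applicant(postcode: str, years_employed: int) -> str:
        # Deterministic rules written by a developer, not learned from data.
        # Using postcode as a proxy can disadvantage whole neighbourhoods.
        if postcode.startswith(("E1", "B6")):
            return "refer to manual review"
        if years_employed < 2:
            return "decline recommendation"
        return "approve recommendation"

    print(pre_screen_applicant("E15 4QZ", 5))  # -> "refer to manual review"

Nothing here adapts or learns, but because it produces recommendations that shape decisions about individuals, a system built on this kind of logic could well fall within scope of the regulation rather than being “just software.”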


This ambiguity creates an opening for suppliers. They may claim their rules-based system falls outside the scope of AI to avoid compliance obligations, or blur the lines between “just software” and regulated AI. That’s classic AI laundering.


Why AI Laundering Matters for Procurement

For procurement and contract management teams, AI laundering creates serious risks:

  • Transparency risk: You don’t know what technology you’re actually buying or deploying.

  • Compliance risk: Undeclared AI (whether rule-based or machine learning) can expose you to GDPR, discrimination claims, or breaches of the AI Act.

  • Contractual risk: If AI isn’t declared, it may not be covered in your service level agreements (SLAs) or liability clauses.

  • Operational risk: Hidden AI can behave unpredictably, making performance monitoring and assurance more difficult.


Red Flags to Watch Out For

Signs that a supplier may be laundering AI include:

  • Descriptions like “advanced analytics” or “intelligent automation” that carefully avoid the word “AI.”

  • Sudden shifts in terminology between marketing and technical documents.

  • Reluctance to explain whether a system uses rules, machine learning, or a mix.

  • Contracts that leave decision-making mechanisms vague or undefined.


Questions Every Buyer Should Ask

To counter AI laundering, procurement teams should probe directly during tender evaluation, supplier due diligence, and contract negotiations:


  1. Which parts of your system meet the EU AI Act definition of AI - including both rule-based and learning-based components?

  2. What decisions or recommendations are generated by the system, and how do they influence users or processes?

  3. How are these AI elements validated for accuracy, fairness, and reliability - and who is accountable for that validation (supplier or buyer)?

  4. If the AI components change over time (e.g. retraining, rule updates, third-party model swaps), how will these changes be monitored, reported, and reflected in the contract?

  5. What transparency can you provide (e.g. documentation of data sources, model logic, or rule sets) to support procurement due diligence?


AI washing is about overselling; AI laundering is about concealing. Both create risks for procurement professionals. By asking targeted questions and insisting on transparency, buyers can avoid signing contracts that hide critical dependencies.


This is the second blog in our series on AI risks in procurement. In the next post, we’ll look at the bigger picture: the AI bubble burst, and what happens when inflated claims across the market collide with real-world performance.


We’ll also be tackling these issues in detail at our December event, Scrutinising AI Systems Effectively. Join us if you want practical tools for spotting hidden AI risks and protecting your procurement processes.


Emma Kitcher, AI Nerd and Founder of CLEAN AI

 
 
 
