
The False Promise of AI Productivity: When the Tool Becomes the Task


When AI tools first exploded onto the scene, they came wrapped in a big promise: faster, smarter, cheaper. For many organisations, especially in resource-strapped sectors like healthcare and professional services, the appeal was clear. But as real-world pilots begin to report their findings, a more complex picture is emerging: one where promised efficiency gains are often eroded by the effort required to supervise the technology.


✍️ Everest PR: When Generative AI Became a Distraction


At Everest PR, the vision was sound: AI would support ideation, drafting, transcription, and media pitching, allowing staff to focus on more strategic work.

In reality, the result was quite different.


Staff found that using ChatGPT often slowed them down:

  • They had to write careful prompts and structure inputs.

  • They spent time correcting errors and inaccuracies.

  • Each platform update introduced new quirks to learn.

Rather than a time-saver, the tool became a new layer of work and a source of stress.


🏥 General Practice Pilots: AI That Creates, Then Consumes Time


The same pattern has emerged in healthcare. In our experience supporting customers, several GP practices recently piloted an AI tool designed to manage and draft documents. On paper, the benefits were clear: faster correspondence, a lighter admin burden, more time for patients.

In practice, every output still had to be reviewed and corrected.

Staff were often unsure whether to trust the AI, so checking every output became standard practice, and checking takes time. In the end, the human oversight needed to keep the system safe cancelled out any meaningful productivity gain. The pilots were quietly discontinued.


⚖️ When AI Adds Work Instead of Removing It


These examples highlight a fundamental issue often overlooked in AI procurement and implementation:


The tool only adds value if the cost of supervising it is lower than the value it provides.


If a task takes twice as long because a human now has to:

  • Brief the AI carefully,

  • Check its outputs for hallucinations, omissions, or biases,

  • Adjust to platform quirks and updates...

…then AI becomes a net negative, not a force multiplier.
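
To make that trade-off concrete, here is a purely illustrative back-of-envelope calculation (the numbers are invented, not drawn from either pilot above). Suppose a letter takes 20 minutes to draft by hand. With an AI assistant, writing a careful prompt takes 5 minutes, the draft arrives in seconds, and reviewing and correcting it takes another 20 minutes: 25 minutes in total, or 5 minutes worse than doing it yourself, before counting training time or re-learning after each update. Cut the review time to 10 minutes and the same tool saves 5 minutes per letter. The business case lives or dies on that supervision cost.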


What Should Organisations Do Instead?


  1. Pilot with scrutiny – Not every AI tool will work in your environment. Run short, honest trials with clear success criteria.

  2. Factor in the hidden cost of oversight – Human review isn’t optional in high-stakes or high-complexity contexts. Budget for it.

  3. Don’t mistake novelty for value – An AI tool that dazzles in a demo may still fall apart under routine conditions.

  4. Train people, not just models – Productivity only improves if the team using the tool understands when to trust it, and when not to.


In Summary

AI is not magic. It’s software, and like any software, it requires configuration, training, monitoring, and occasionally rejection. The organisations that benefit most from AI will not be those who chase the dream, but those who design around its limitations, measure its real-world impact, and aren’t afraid to walk away if it adds more noise than value.



Emma Kitcher, AI Nerd

 
 
 

