
Deepfakes: The Taylor-Robinson Example and How to Reduce the Risk

Deepfakes are being used to drive financial gain through deception.

Artificial intelligence has unlocked remarkable capabilities, from enhancing video calls to creating convincing virtual actors. But as with all powerful technologies, it has a dark side. Deepfakes – AI-generated images, videos or audio that convincingly imitate real people – are no longer fringe curiosities. They are increasingly being used to misinform, manipulate and, in some cases, profit from deception.


When Expertise Becomes Exploited: The Taylor-Robinson Case

One of the most recent examples involves Professor David Taylor-Robinson, a respected public health expert at the University of Liverpool. In late 2025, he discovered that AI-generated videos circulating on TikTok portrayed a fake version of him talking about a fabricated menopausal symptom called “thermometer leg.”


These doctored videos went further: the synthetic Professor Taylor-Robinson appeared to endorse health products such as probiotics and herbal supplements from a US-based company. One video alone attracted more than 365,000 views before it was removed.


He had never spoken on this topic, had no connection to the supplements, and was unaware the videos existed until a colleague alerted him. Even more concerning, people who knew him well later admitted they might have been taken in by the fake clip had they not recognised it was too good to be true.


This is serious identity misuse and misinformation. And while platforms like TikTok eventually removed the content, the initial spread highlighted weaknesses in content moderation and how easily AI can be weaponised to mislead the public and profit from bogus advice.


Deepfakes for Money: A Growing Exploit

The Taylor-Robinson case is symbolic of a broader trend: deepfakes being used to drive financial gain through deception.


Fake Expert Endorsements

In the Taylor-Robinson case, the deepfakes were used to push viewers toward purchasing supplements, a clear form of monetary exploitation. These videos often redirect viewers to commercial websites, playing on trust in a professional figure and extracting financial benefit from that misplaced trust.


Financial Fraud and Scams

Beyond the health sector, deepfakes have been used to pull off more overt financial scams. For example, there are documented cases (from other deepfake studies and reports) where deepfake audio of a CEO or CFO was used to trick employees into authorising large financial transfers. According to broader analyses of deepfake misuse, these scams have already contributed to billions in fraud losses worldwide, with projections rising sharply as the technology becomes more accessible.


Celebrity and Identity Misuse

Public figures, from entertainers to political leaders, have also been targeted. Deepfake videos portraying celebrities promoting fraudulent schemes, investment scams or bogus crypto offerings have circulated widely online, often driven by actors hoping to profit from clicks, purchases or ad revenue. These incidents illustrate how deepfakes can be weaponised for monetary gain at scale.


Why This Matters

Deepfakes threaten public trust in media and institutions. When a respected academic’s face can be convincingly repurposed to sell products or spread misinformation, the implications go far beyond one individual:


  • Public health risk: People may make decisions about their health based on false advice.

  • Reputational damage: A professional’s credibility can be undermined by fake content.

  • Erosion of trust: If audiences can no longer trust what they see as “real,” society’s baseline for truth is weakened.

  • Regulatory gaps: Platforms and legislators are still scrambling to keep pace with the speed of AI-generated content.


How Decision-Makers Can Reduce Their Risk of Being Deepfaked


Deepfakes aren’t just a problem for professors on TikTok. Anyone in a decision-making role – executives, board members, SIROs, Caldicott Guardians, clinical leaders – is an attractive target. If an attacker can convincingly imitate you, they can:


  • authorise payments,

  • push staff to share sensitive information, or

  • appear to endorse products or policies you’d never support.


The good news: you don’t need fancy tech to reduce the risk. Most of the protection comes from habits and process.


1. Assume your face and voice can be faked

First mindset shift:

“If it’s just my face on a video call or my voice on the phone, that alone doesn’t prove it’s me.”

Make this explicit with your team. Say, out loud, things like:

  • “I will never approve payments or share passwords over Teams or WhatsApp.”

  • “If anything feels off, you must double-check with me through a known route.”


That gives people permission to challenge even something that looks and sounds like you.


2. Use strong verification for sensitive decisions

For anything involving money, access or sensitive data, build in friction:


  • Call-back rule: Staff should always confirm high-risk requests (payments, changing bank details, sending data) by contacting you via a trusted channel – e.g. your internal number or official email found in the address book, not the number or link in the message.

  • No “urgent shortcut” policy: Make it policy that no one, including you, can bypass normal approval steps “because it’s urgent” on the basis of a call or video alone. Deepfake scams live off urgency and authority.

  • Shared challenge system: With key people (PA, finance lead, on-call lead), agree a simple challenge mechanism (a minimal sketch follows this list) – for example:

    • a rotating passphrase known only to you and them, or

    • a private question only you would know how to answer quickly.


    If they’re unsure it’s really you, they challenge. If “you” get annoyed and refuse… that’s a red flag in itself.
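
If you want to go one step further and automate the rotation rather than agree phrases by hand, the same idea can be sketched in a few lines of Python. This is purely illustrative: the shared secret, the wordlist and the weekly rotation period below are hypothetical example choices, and a printed card or a note in a password manager works just as well.

# Illustrative only: a weekly rotating challenge phrase derived from a shared
# secret. The secret and the wordlist below are hypothetical example values.

import datetime
import hashlib
import hmac

SHARED_SECRET = b"replace-with-a-long-random-secret"  # agreed offline, never emailed

WORDLIST = [
    "harbour", "lantern", "pebble", "orchid", "falcon", "meadow",
    "copper", "willow", "summit", "ember", "quartz", "thistle",
]

def current_passphrase(today=None):
    """Return this ISO week's three-word challenge phrase."""
    today = today or datetime.date.today()
    year, week, _ = today.isocalendar()
    digest = hmac.new(SHARED_SECRET, f"{year}-W{week:02d}".encode(), hashlib.sha256).digest()
    return "-".join(WORDLIST[b % len(WORDLIST)] for b in digest[:3])

if __name__ == "__main__":
    # You and your PA or finance lead each run this to check the week's phrase.
    print("This week's challenge phrase:", current_passphrase())

The point is not the specific code: it is that both sides can derive the same phrase without it ever being sent over a channel an attacker might be watching.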


3. Tidy up your digital footprint (without going off-grid)

You can’t – and shouldn’t – disappear from public view. But you can make it harder to weaponise your presence:


  • Be thoughtful about long, high-quality public audio/video (podcasts, webinars, keynotes). Where possible:

    • avoid posting hour-long, clean recordings everywhere;

    • favour shorter clips, mixed audio (panel discussions), or live-only formats where practical.

  • Lock down personal profiles and avoid oversharing:

    • detailed travel plans (“I’m out of the country this week”);

    • who signs off payments;

    • internal nicknames/structures that make impersonation easier.

  • Have a clear, public rule on your website or LinkedIn like:

    “I will never ask you for money, bank details or login codes by DM, email, video call or social media.”

That gives staff and the public a simple test for spotting and ignoring scam content that uses your likeness.


4. Train your team to be politely suspicious

It’s not enough for you to understand deepfakes – your front-line staff need to as well:


  • Include deepfake scenarios in fraud / phishing / cyber training:

    • “You get a Teams call from the CFO asking you to urgently pay a new supplier…”

    • “You see a video of our Medical Director promoting a new crypto-flavoured weight-loss scheme…”

  • Normalise the response you want:

    “I’m going to end this call and ring you back on your usual number to confirm.”

  • Celebrate, don’t punish, people who refuse suspicious requests, even if they turn out to be genuine. That’s how you build a culture where second-checking is safe.


5. Have a response plan if you are deepfaked

Finally, plan for the “when”, not just the “if”:

  • Decide in advance:

    • who investigates and collects evidence,

    • who contacts the platform / police / regulators if needed,

    • how you’ll communicate with staff and the public (“If you saw a video of me saying X, it’s fake – here’s what we’re doing about it.”).

  • Log incidents (a minimal sketch follows this list) so you can:

    • update training,

    • strengthen weak points in process,

    • show regulators you took reasonable steps.
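
If you want that log to be more than a shared spreadsheet, a small structured record is enough to start with. The sketch below is illustrative only; the field names and file location are hypothetical, and your existing incident-reporting tool can capture the same details.

# Illustrative only: a minimal structured record of a suspected deepfake
# incident, appended to a local JSON Lines file. Field names and the file
# path are hypothetical; adapt them to your own incident process.

import datetime
import json
from dataclasses import dataclass, asdict, field

LOG_PATH = "deepfake_incidents.jsonl"  # hypothetical location

@dataclass
class DeepfakeIncident:
    reported_by: str          # who spotted or reported it
    channel: str              # e.g. "TikTok video", "Teams call", "phone"
    impersonated_person: str  # whose face or voice was used
    summary: str              # what the fake content claimed or asked for
    actions_taken: str        # takedown request, police report, staff alert
    reported_at: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )

def log_incident(incident, path=LOG_PATH):
    """Append one incident as a single JSON line for later review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(incident)) + "\n")

if __name__ == "__main__":
    log_incident(DeepfakeIncident(
        reported_by="Finance team",
        channel="Teams call",
        impersonated_person="CFO",
        summary="Urgent request to pay a new supplier; the voice sounded right.",
        actions_taken="Call ended, confirmed via call-back, SIRO and IT notified.",
    ))

Appending one record per incident keeps the history easy to search when you come to update training or show a regulator what steps you took.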


Deepfakes are no longer sci-fi possibilities; they are practical tools for fraudsters, opportunists and misinformation peddlers. The Professor Taylor-Robinson incident shows how easily a respected professional can have their identity hijacked, while high-value deepfake scams demonstrate the financial and organisational risks when trust can be so easily forged.


But it isn’t hopeless. With clear processes, thoughtful digital habits, and a culture where staff feel empowered to double-check anything suspicious, decision-makers can dramatically reduce the likelihood that a synthetic version of them is used to mislead others. The challenge is real, and acting now will always be easier than repairing the damage later.


Emma Kitcher, AI Nerd