When AI Meets Romance Fraud: Why I’m Cautiously Hopeful
- Kafico Ltd
- Sep 8

I’ve always had an interest in documentaries about fraud, particularly romance fraud. They’re fascinating and heartbreaking: stories of trust, loneliness, manipulation, and resilience. So when I came across the Alan Turing Institute’s briefing on AI and romance scams, I was immediately intrigued.
The paper explains how generative AI is changing the fraud landscape. Scammers no longer have to spend hours crafting messages or building fake personas. With tools like large language models, they can generate thousands of flirty, believable introductions in seconds - playing the numbers game. Add in deepfake photos or even fake video calls, and spotting a scam gets much harder.
At first, this feels like the beginning of an unstoppable wave. But there’s a positive nugget in here: the same technology that powers these scams also creates new opportunities to stop them.
Where AI Stumbles
At the moment, AI is very good at getting conversations started: it can mimic warmth and charm convincingly in short bursts. But where it falls down is in keeping the story straight. Over time, the replies become repetitive, inconsistent, or emotionally tone-deaf.
AI simply isn’t good at sustaining long-term, believable relationships.
That gap is important. It means platforms, regulators, and even individuals can use those cracks as detection points.
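To make that concrete, here’s a toy Python sketch (my own illustration, not something from the briefing) that scores how repetitive one side of a conversation has become. The example messages and the 0.5 threshold are invented purely for the demo.

```python
# Toy illustration: flag a conversation whose replies grow repetitive.
# The messages and threshold below are made up for demonstration only.
from difflib import SequenceMatcher

def repetition_score(messages: list[str]) -> float:
    """Average pairwise similarity of a sender's messages (0..1)."""
    if len(messages) < 2:
        return 0.0
    pairs = [
        SequenceMatcher(None, a.lower(), b.lower()).ratio()
        for i, a in enumerate(messages)
        for b in messages[i + 1:]
    ]
    return sum(pairs) / len(pairs)

# Example: a scripted persona recycling near-identical lines.
replies = [
    "My darling, you have such a beautiful soul and I feel so connected to you.",
    "My darling, you have such a beautiful soul, I feel so connected to you!",
    "My darling... you have such a beautiful soul and I feel connected to you.",
]
if repetition_score(replies) > 0.5:  # threshold is illustrative only
    print("Warning: replies look scripted or recycled.")
```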
Spotting Scams Without Reading Every Message
My immediate thought was, of course, the challenge of privacy. These conversations happen in private spaces - dating apps, messaging platforms - and nobody wants Big Brother reading their love notes.
The good news is that detection doesn’t have to mean surveillance. It can focus on patterns instead of content. Think of signals like these (a rough sketch of the first one follows the list):
Accounts sending hundreds of near-identical opening lines.
Profiles that respond instantly at all hours, never showing the rhythms of real life.
Devices linked to multiple fake accounts.
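Here’s what spotting that first signal might look like in practice: grouping near-identical opening lines without ever reading the ongoing conversations. The data model, accounts, and threshold are all hypothetical; a real platform would use far more robust fuzzy matching.

```python
# Toy illustration: cluster near-identical opening lines across accounts.
# Account IDs, messages, and the threshold are invented for this demo.
import re
from collections import defaultdict

def opener_key(text: str) -> str:
    """Crude canonical form: lowercase, strip punctuation, sort the words."""
    words = re.findall(r"[a-z']+", text.lower())
    return " ".join(sorted(words))

# (account_id, first message sent) - hypothetical platform-side records
openers = [
    ("acct_1", "Hey gorgeous, fate brought me to your profile!"),
    ("acct_2", "Fate brought me to your profile, hey gorgeous!"),
    ("acct_3", "Hi! Loved your photo of the hiking trip."),
    ("acct_4", "hey gorgeous - fate brought me to your profile"),
]

clusters = defaultdict(set)
for account, text in openers:
    clusters[opener_key(text)].add(account)

for key, accounts in clusters.items():
    if len(accounts) >= 3:  # in reality, hundreds; 3 keeps the demo small
        print(f"Suspicious opener shared by {sorted(accounts)}: {key!r}")
```

Notice that this only ever looks at the very first message each account sends, plus who sent it - no eavesdropping on the relationship that follows.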
And beyond that, there’s room for collaboration and guardrails:
Protective apps on a user’s phone could warn them if a conversation starts to show “bot-like” signs, a bit like spam filters in email (sketched below, after this list).
Victims or wary users could feed anonymised conversations into detection systems that check for known scam signatures.
If tech companies, banks, and law enforcement share anonymised data on scam patterns, they can catch more fraudsters earlier, without exposing people’s personal conversations.
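As a sketch of the spam-filter idea above: an on-device guardrail could look only at metadata it already has, like when replies arrive and how fast. Everything below - the thresholds, the timestamps, the heuristic itself - is my own illustrative guess, not a tested detector.

```python
# Toy sketch of an on-device guardrail: warn when a chat partner's timing
# looks automated. Thresholds and the sample log are illustrative, not tuned.
from datetime import datetime

def looks_automated(reply_times: list[datetime],
                    reply_delays_s: list[float]) -> bool:
    """Heuristic: instant replies, spread across nearly every hour of the day."""
    always_instant = all(d < 5 for d in reply_delays_s)  # sub-5s, every time
    hours_active = {t.hour for t in reply_times}
    never_sleeps = len(hours_active) > 20                # active round the clock
    return always_instant and never_sleeps

# Hypothetical log: a "partner" who answers in ~2 seconds at 3am, 9am, 4pm...
times = [datetime(2024, 5, 1, h) for h in range(24)]
delays = [2.0] * 24
if looks_automated(times, delays):
    print("Heads up: this contact replies instantly, around the clock.")
```

The point is that none of this requires reading a single word of the conversation - the “tells” live in the behaviour, not the content.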
Why I’m Hopeful
Yes, AI makes scams bigger and faster. But it also (at the moment) creates new “tells” that make them easier to spot. Instead of making romance fraud unstoppable, AI may actually help us build stronger defences.
I’ll keep watching the documentaries, but I’m also watching closely to see how we can turn AI from a fraudster’s weapon into a way to trip them up!



