Scammers Are in Their ‘FraudGPT’ Era - CNN One Thing - Podcast on CNN Podcasts


CNN One Thing

You’ve been overwhelmed with headlines all week – what's worth a closer look? One Thing takes you beyond the headlines and helps make sense of what everyone is talking about. Host David Rind talks to experts, reporters on the front lines and the real people impacted by the news about what they've learned – and why it matters. New episodes every Wednesday and Sunday.


Scammers Are in Their ‘FraudGPT’ Era
CNN One Thing
Jul 13, 2025

A recent diplomatic cable revealed that someone has been using artificial intelligence to impersonate Secretary of State Marco Rubio in a bid to gain access to accounts or information. It comes amid a rise in deepfake scams targeting victims around the world. We look at whether law enforcement is equipped to deal with the problem – and what a different approach might look like.

Guest: David Maimon, SentiLink Head of Fraud Insights/Georgia State University Professor of Criminology

Have a question about the news? Have a story you think we should cover? Call us at 202-240-2895.

Episode Transcript
Reporter
00:00:00
How did you find out?
Secretary of State Marco Rubio
00:00:02
Oh, somebody called me, a senator that called me and said, hey, did you just try to reach me and actually sent me a voice recording? It doesn't sound, it doesn't really sound like me. Maybe if you fell for that call, you know, but maybe there was a better one that I didn't see.
David Rind
00:00:16
When we learned on Tuesday of a diplomatic cable that revealed a brazen plot to impersonate US Secretary of State Marco Rubio, the specifics had a distinct 2025 flair to them. Someone, and it's not clear who at this point, had apparently used AI to generate voicemails and text messages pretending to be Rubio, and then used those to reach out to high-level officials on Signal, including foreign ministers and a member of Congress. U.S. diplomats and foreign leaders around the world have been put on alert. But we've seen voice spoofing like this before.
AI Generated Voice
00:00:50
It's important that you save your vote for the November election.
David Rind
00:00:53
For example, this may sound an awful lot like former President Joe Biden, but it's not.
AI Generated Voice
00:00:59
Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.
David Rind
00:01:06
This robocall urging New Hampshire Democrats not to vote in last year's primary was quickly flagged as AI generated, and law enforcement got involved. The incidents show just how easy it could be to fall for a convincing fake. And even the most tech savvy among us are struggling to separate fact from fiction online these days. So with AI-fueled scams on the rise, what can we do to keep ourselves safe? And do the authorities have the right tools to fight them? From CNN, this is One Thing. I'm David Rind. We're back in a bit.
David Rind
00:01:50
My guest today spends a lot of time on the dark web. I guess I realize how that sounds, but don't worry, it's for work.
David Maimon
00:01:56
My area of interest, I focus on fraud.
David Rind
00:01:59
David Maimon is the Head of Fraud Insights at the identity verification company SentiLink and a Professor of Criminology at Georgia State University.
David Maimon
00:02:11
I oversee hundreds of fraud markets where I essentially collect intelligence and information the criminals put out there, all with the goal of understanding the modus operandi of the different types of fraud they're engaging in.
David Rind
00:02:25
Are you seeing more AI-generated fraud or scam attempts in recent months, recent years?
David Maimon
00:02:32
Yeah, I mean, I can say that we're seeing more gen AI-generated fraud starting in 2023. So for example, compromised bank accounts are often offered for sale on some of the markets we oversee. And we've been seeing that going on for quite a while, since 2019. But then came July of 2023, when gen AI infiltrated the ecosystem, so to speak. We started to see criminals talking about FraudGPT and WormGPT.
David Rind
00:03:03
FraudGPT?!
David Maimon
00:03:04
Yeah. FraudGPT, and so it's the criminal brother of ChatGPT. That tool, at the end of the day, allows you to be more efficient in manufacturing phishing scams, phishing emails. It allows you to be more efficient in manufacturing the scam.
David Rind
00:03:23
So you're saying, in the way that someone might go on ChatGPT to pull up research or have it write some kind of prompt, criminals are using these systems in a similar way to help make their criminal enterprises more efficient?
David Maimon
00:03:40
100%. We're seeing them using some of the jailbroken tools, platforms such as ChatGPT and Grok and others. One of the interesting things we've seen, starting July of 2023, was a dramatic increase in the volume of compromised bank accounts the criminals have offered for sale.
David Rind
00:03:57
Well, but what about the person-to-person kind of scams? Like in the case of this Marco Rubio impersonation attempt, according to this US diplomatic cable, whoever was doing this likely used AI-generated text and voice messages, hoping to get the victim to kind of communicate, come over on Signal and talk. How easy would that be for someone to do? Like, how much audio do you need to generate a convincing fake?
David Maimon
00:04:23
It's a great question. So we talked about ChatGPT and FraudGPT and how these tools infiltrated the system. In addition to the abilities of those tools, we are also seeing a lot of use of gen AI and deepfakes in the context of other types of crime. As you mentioned, the one-on-one, we're seeing that happening a lot in the context of executive impersonation, such as the Rubio case, but there are way more. I would say especially voice cloning, in the context of deepfakes and face swapping, we've seen this technology being used quite often in online romance scams.
Saskya Vandoorne
00:05:00
This AI-generated fake Brad Pitt swindled a 53-year-old French woman named Anne out of $850,000 in a scam that would become a viral sensation.
David Maimon
00:05:13
We've seen a lot of folks from Nigeria, a group called the Yahoo Boys, using deepfakes in order to lure folks to talk to them live and then building rapport with them, with an end goal of the victim sending them money.
David Rind
00:05:30
So somebody thinks they're having a one-on-one conversation over video, but the person on the other side is not who they think, a completely different person.
Laura Coates
00:05:39
A loan the fake Brad Pitt supposedly needed for cancer treatment. The woman even divorced her own husband. This woman says quote, I just wanted to help someone and yes, I've been scammed. That's why I came forward because I'm not the only one in this situation.
David Maimon
00:05:56
And it's kind of mind-boggling to see how the victims have no reservations and express no suspicion about the fact that, at the end of the day, they're not talking to the person they think they are. And so it's mind-boggling how those tools are being used in the context of online romance fraud.
David Rind
00:06:14
Well I mean, that's absolutely horrifying. Um, when does law enforcement get involved? Like, are they actually tracking this stuff in real time?
David Maimon
00:06:22
It's very difficult to track this stuff in real time, simply because, I mean, think about it, you'd have to sort of track every conversation a person has online. So unless someone complains, files a complaint, law enforcement will not get involved in this. Law enforcement agencies around the globe are familiar with this issue, but the focus is not necessarily the deepfake, the focus is more the criminal groups.
David Rind
00:06:50
Well, I mean, if the tech is moving so fast that these criminals can impersonate folks in real time over these video calls, how do people like you who are working to root out this fraud or law enforcement, how did they keep up?
David Maimon
00:07:04
So there's a lot of movement and traction across the industry building tools which will allow victims to detect those deepfakes. There are many tools out there, tools that you can download on your computer or your smartphone, tools which allow you to upload deepfake videos or images, which will at the end of the day allow you to assess whether you're talking to a real person, whether the voice you're getting on the other side of the phone is of a real person or of a deepfake. Now, you know, whether those tools are effective or not is a different story.
David Rind
00:07:41
Right. If some of this detection software is kind of AI-based, it would run into similar problems that these AI tools already do, right? Biases against certain skin tones, or hallucinations, right? Are those some of the issues they're running into?
David Maimon
00:07:58
Yes, I mean, you're right. The tools we're using for detection purposes operate based on machine learning and AI technology, and so it all depends on the data you have, the database you have, and any biases in the database will of course be reflected in the tool's ability to detect the differences. Unfortunately, this is an arms race, and the criminals are always a few steps ahead of us. Having said that, there are many companies who are actively researching the issue and trying to come up with solutions to it.
David Rind
00:08:44
So it sounds like law enforcement is still kind of running behind the technology and these criminal groups to kind of catch up. What would an adequate response look like to you?
David Maimon
00:08:56
Great question. I think folks need to be vigilant. When you're talking to someone live, it's really important to look for signs that tell you whether you are talking to a real person or a deepfake. So for example, folks should look for unnatural eye movement, someone blinking a lot or not blinking at all. Folks should be looking for inconsistencies in facial expression. If you're saying something sad, but then the character on the video is smiling, that's an inconsistency that I think folks should be aware of. Lighting and shadow inconsistencies are also something folks should look for. Lack of synchronization between audio and visual is a huge red flag. I know some of the fraudsters will tell you that they're talking to you from a slow Wi-Fi connection or a location where the Wi-Fi is bad.
David Rind
00:09:52
So that's like an excuse they use to hide some of the stuff?
David Maimon
00:09:55
It's one of the excuses, yeah, one of the excuses, because they are aware of those signs as well, right? I mean, and so they're trying to sort of lure you into complying with them. These are signs that, at the end of the day, the tools are looking for when they're learning the differences between deepfakes and real videos, but humans should be looking for them as well when they're talking with folks.
David Rind
00:10:19
Right. That all makes sense. Those seem like logical tips: take a beat and see if it really makes sense to you. But I guess, how much of a responsibility should fall on the companies in Silicon Valley to help detect this stuff? We've seen tech firms like OpenAI and Meta sign a pledge to kind of work together to detect harmful AI content leading up to the election last year. But should they be doing more? Because, to my mind, signing a pledge or just slapping an AI-generated label on something, it's a start, but it feels like kind of small potatoes if what you're describing is this huge escalation in how realistic this stuff seems to people.
David Maimon
00:10:59
I think we should all be doing more in order to try and detect that, because at the end of the day, you do have a lot of people getting impacted by this type of crime. And the victims, unfortunately, are losing a lot of money. They're losing their livelihood. There's a lot of research suggesting that if you're the victim of this type of fraud, you also experience psychological distress and health issues. I believe that we should all be doing more in order to try and detect this issue, flag it, and then prevent bad actors from using this technology in order to victimize people.
David Rind
00:11:37
It just seems to me that a lot of this stuff, when law enforcement gets involved in prosecuting some of these crimes or scams, it almost always takes place after it's too late, after it has already been done, after some horrible AI-manipulated pornographic image has been put out into the world and the victim is traumatized. And I know you talked about these detection tools that could maybe be used in real time. It just seems like this is so hard to stop before it's too late.
David Maimon
00:12:09
It's very difficult to stop before it's too late, but I think that it's hard to stop before it is too late because of the approach we're taking to dealing with the issue. I think if we take another approach, where instead of trying to arrest people we try to disrupt the operation, maybe we will be more successful. So for example, think about putting a lot of chatbots out there. You know, fighting deepfakes with deepfakes, right? I mean, so on all these platforms, the criminals are using deepfakes in order to engage with people. You upload your own deepfakes, and in a way, when the criminal deepfakes are trying to engage with you, they engage with an actual deepfake, right? I mean, so in that way you will be able to waste their time and maybe nudge the criminals away from the victim.
David Rind
00:13:01
But if we're all just swimming in a sea of deepfakes that we're putting out to counter each other, how is anybody going to know what is actually real?
David Maimon
00:13:08
You're right. I mean, there should be some restraints, but in order to disrupt effectively, I think we need to think outside of the box. And that is one of the ideas we're thinking about.
David Rind
00:13:19
Interesting. I can't say this conversation makes me feel any better about opening up my computer going forward, but it's good tips for all of us to think about. Professor Maimon, thanks so much.
David Maimon
00:13:29
Thank you so much for having me.
David Rind
00:13:40
One Thing is a CNN Podcasts production. This episode was hosted and produced by me, David Rind. Our showrunner is Felicia Patinkin. Matt Dempsey is our production manager. Dan Dzula is our technical director. And Steve Lickteig is the executive producer of CNN Podcasts. We get support from Alex Manasseri, Mark Duffy, Robert Mathers, John Dianora, Leni Steinhardt, Jamus Andrest, Nicole Pesaru and Lisa Namarow. Special thanks to Wendy Brundidge. Just a reminder, we'd like to hear from you. You can leave a rating and a review wherever you listen. Let us know how we're doing. Hopefully it's in the positive direction, but either way, we just wanna hear from ya. We'll be back on Wednesday. I'll talk to you then.