Podcast: Deepfakes and fraud: Coming to a phone near you!
Deepfakes? Aren’t they a bit Mission Impossible? Well, apparently not, as we hear from Dr. Nikolay Gaubitch at US-based network security specialist Pindrop. In the latest Trending Tech Podcast, he says fraudsters have already used the AI-based technology with partial success in a $35 million bank fraud attempt in Dubai. This could become a common story if service providers don’t take preventive measures. And even fraudsters have been affected by Covid-led delays at contact centres. Now that’s a shame, says Jeremy Cowan, editorial director of IoT Now and The Evolving Enterprise! So hackers are bypassing the centres altogether. Plus, the Trending Tech Podcast tells how Google Maps helped catch a mafia boss on the run for 20 years.
Jeremy Cowan 00:05
Hi, and welcome to the latest Trending Tech Podcast. I’m Jeremy Cowan, co-founder of the websites IoT-Now.com, VanillaPlus.com and The Evolving Enterprise (theee.ai). Now, we recently came across a new fraud analysis called “The Voice Intelligence Security Report“. It’s by Pindrop, a US-based computer and network security company. And I am delighted to say we’re joined today by Pindrop’s director of research, Dr. Nikolay Gaubitch, who can tell us later about the security threats they found and how telcos and companies should respond. So, Nikolay, welcome to the Trending Tech Podcast.
Dr Nikolay Gaubitch 00:49
Hi Jeremy, thank you so much. It’s a great pleasure to be here today.
Jeremy Cowan 00:53
Good to have you. First of all, we’re going to have a look at some of the latest tech news, Nikolay, across the piece. What have you come across?
Nikolay Gaubitch 01:01
So, you know, I was looking back at 2021, and really the most interesting pieces for me were two. One was the Anthony Bourdain story, where a documentary was made about him posthumously, and parts of the documentary actually synthesised his voice using so-called deepfake technology. I found that fascinating because, of course, it caused a lot of stir over whether that’s ethical or not. And, you know, it almost feels like a dystopian story. It’s the kind of thing that’s come up in stories like Black Mirror – reviving people through technology – which is what happened here. That, to me, was one of the speech technology highlights of 2021.
And, you know, the second story I had was very much related to that: fraudsters used a cloned voice and were found to be involved in a US$35 million bank heist. So that’s a massive thing. And the fascinating part is that these two articles are linked by the same technology. On the one hand, of course, there’s a lot of conversation about whether it’s ethical, whether we should do this, but it’s rather innocent. And then, on the other hand, you have the same technology being used in a pretty malicious way.
Jeremy Cowan 02:34
Yeah, the thing that struck me about the Anthony Bourdain issue was that they never actually indicated in the programme that this was what they had done. It was only explained afterwards. And I know that one of the reviewers felt particularly misled and was pretty scathing about it. My feeling about that – and I don’t know how you feel, Nikolay – is that if there’s an intention to deceive, as there clearly is in the attempted theft and fraud, then that’s a misuse. Otherwise, it seems like an interesting tool for all sorts of uses.
Nikolay Gaubitch 03:15
That’s right. And in the larger scheme of things, I’m a strong believer that when technology is used to synthesise humans, in whatever way or form, that should be clearly indicated.
Jeremy Cowan 03:30
Yeah. And as the report said about the attempted fraud – which I think happened in Dubai, didn’t it? – it sounded a bit Mission Impossible. I don’t want to seem wise after the event, but what would you say were the key mistakes made by the bank in this case in allowing the fraud, or the partial fraud? And what lessons should we learn from this, and from the voice cloning attack on a UK company that failed in 2019?
Nikolay Gaubitch 04:02
So, you know, I don’t want to start talking about mistakes at this point in time; this is still a very rare occurrence. If this happened on a daily basis and folks hadn’t taken care to protect themselves against attacks like that, then you could start talking about mistakes. Here, really, you’re one of the first targets of this type. What we should do is rather look at what we can learn from this and be extra suspicious. This has been my general theme when talking to the public over the last couple of years in particular, because we’ve seen many phone-related attacks on individuals and banks. And the point is: be extra suspicious. You know, if there is something in your conversation that makes you feel slightly uncomfortable, better to act on the cautious side than not.
Jeremy Cowan 05:03
Fair enough. And particularly when one’s dealing with sums of money as large as $35 million. That’s an astonishing amount of money to put at risk with such a technology.
Nikolay Gaubitch 05:18
Yeah, no. That’s absolutely correct.
Jeremy Cowan 05:20
The other serious news story that I’d like to discuss is slightly outside this area, but still of interest to many of us, I’m sure. It appeared in the UK-based Telegraph Online and was headlined “Smart meter tariffs now massively overpriced” as gas prices tripled. I think the underlying problem is probably pretty well known. I mean, demand for natural gas is outstripping supply for a variety of reasons, as Europe replenishes stocks that dwindled in a hard winter 12 months ago, and particularly following high demand in Asia and China. I know gas bills have climbed 12% in some markets in recent months and look likely to jump another 50% in the next quarter in some markets too. So, in the IoT sector we’re familiar with smart meter tariffs, once seen as the future of energy pricing, but now they’re becoming ineffective as prices have risen across the market.
As the report said, so-called “Time Of Use” smart meter tariffs, which lower the price of electricity when demand is low and increase the cost at times of high usage, were sold as one of the key benefits of the smart meter rollout. Normally, deals like this could save the typical family about UK£200, or almost $300, a year compared with average variable tariffs. But right now gas prices don’t change during off-peak hours. The Telegraph report quoted Joel Stark of the metering and data provider Stark as saying these kinds of benefits formed a huge part of the original business case for smart meters. Those benefits, in turn, justified the exploding costs of the rollout. If they aren’t happening, why are we ploughing on with this massively overpriced solution?
Nikolay, I guess smart metering will prove its value once again, when prices stabilise and fall. I don’t want to criticise smart metering as a technology. It’s a very valid one, but it does seem to be under some pressure at the moment.
Nikolay Gaubitch 07:35
Yeah. And I think we’re going to see these doubts about technology, always. Whenever there is something new, folks are looking for signs of weakness, or for whether something actually works as it should. And it’s difficult for anything, whether it’s human or non-human, to cope with big glitches in the system, right? And the gas prices are another big glitch. So yeah, I’m not sure it’s necessarily a fair judgement at this point in time.
Jeremy Cowan 08:12
Yeah, it’s too soon to make a judgement. I think you’re right. Well, perhaps it’s time to look at our main story for this podcast, which is the Voice Intelligence Security Report. Nikolay, COVID-19 has hurt businesses in many ways, and remote working has impacted companies during the pandemic, sometimes for the better and sometimes for the worse. But have our newly distributed workforces just created new opportunities for communications fraud?
Nikolay Gaubitch 08:46
So, yeah, that’s definitely one line of thinking amongst the public in general. And it is indeed interesting, you know, how companies had to cope with this to make sure that they could continue to ensure the security they otherwise have on site. However, from my perspective, one of the things that was very interesting about this pandemic was that voice and telephony actually became, for a long time, our only means of communicating with people. Face-to-face interactions disappeared for quite some time. And what that led to was that contact centres and call centres were swamped with calls. There was, in some cases, I believe, up to an 800% increase in incoming calls. What then happened, interestingly, is that many fraudsters were deterred from calling call centres because waiting times got longer. And if you’re a fraudster hoping to, you know, socially engineer your way through to a bank account, having to wait for an hour to maybe succeed starts to reduce your return on investment pretty significantly.
And at the same time, of course, during this pandemic there were many other money streams that opened up for fraudsters to attack. So, it was very interesting from our point of view at Pindrop that the fraud rates actually went down. However, having said that, the fraudsters that did continue were much better prepared, and so there were much more high-level, high-value attacks happening.
Jeremy Cowan 10:39
I hadn’t appreciated that. I thought that deepfakes were just image- and video-related, but from reading your recent report there’s also voice synthesis, making a machine sound like somebody, and voice conversion, making a human talker sound like someone else. Can you explain a bit about what deepfakes are in the round, and the risks that fraudsters can exploit with these capabilities, please?
Nikolay Gaubitch 11:07
Sure. Maybe I’ll take a couple of steps back for the benefit of our listeners. You know, before it was called deepfake, the curiosity and science of converting and synthesising voices had been around for decades in many forms. And as you explained, there are two strands to this: one is voice synthesis, making a machine sound like somebody else; the other is voice conversion, changing somebody’s voice to sound like somebody else. The implications of the two are somewhat different. But what happened in the last four or five years is the rise of deep learning technology, and the application of deep learning to the problems of voice synthesis and voice conversion is what resulted in so-called deepfakes.
And, you know, we already talked about two occurrences of these deepfakes last year, and the implications of those. It’s becoming more challenging, with the rise of technology like this, to trust who you are speaking to on the other end, and this holds for voice as well as for video. In some ways, I find voice more tricky. With video deepfakes, we’ve seen that they’re still quite glitchy; they are plausible, but you can see that something is not right there. Whilst with voice, if done properly, you can actually start getting to a point where, for a human, it’s tricky to hear that anything is wrong. And I believe strongly that it’s something we’re going to see more of.
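[Editor’s note: for readers curious about the “make a machine speak” half of what Nikolay describes, the sketch below uses the off-the-shelf pyttsx3 text-to-speech library in Python. It is deliberately nothing like a deepfake – there is no deep learning and no target speaker – but it shows how trivially machine-generated speech can be produced; the voice synthesis and voice conversion Nikolay refers to layer a learned model of a specific person’s voice on top of this kind of basic pipeline. The library choice and settings are our own illustrative assumptions, not anything Pindrop uses or discusses in the podcast.]

```python
# Minimal text-to-speech sketch using pyttsx3 (pip install pyttsx3).
# Plain, OS-driven synthesis, not voice cloning, but it makes a machine "speak".
import pyttsx3

engine = pyttsx3.init()                # uses the platform's built-in speech engine
engine.setProperty("rate", 160)        # approximate speaking rate, words per minute

voices = engine.getProperty("voices")  # voices installed on this machine
if voices:
    engine.setProperty("voice", voices[0].id)  # pick the first available voice

engine.say("Hi Jeremy, thank you so much. It's a great pleasure to be here today.")
engine.runAndWait()                    # block until the audio has been spoken
```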
Jeremy Cowan 13:04
I’m now worried about how you can trust that I am really who I say I am on this call, but we’ll have to move past that. Take it from me! (Laughter) So, is the threat posed by deepfakes and voice synthesis a real, widespread risk right now, or is this still a minority area of concern?
Nikolay Gaubitch 13:28
I believe it’s still in its infancy. So, it’s there, but it still requires quite a lot of skill to pull it off. It’s not a readily available tool that you can download and run. But that’s not to say that it won’t be soon, right? And then, speaking of worrying, you know, being part of a podcast like this can be worrying, because it means our voices are in the public domain, which is what is needed to create a synthetic voice.
Jeremy Cowan 14:05
Aieee!
Nikolay Gaubitch 14:06
(Laughter) Just putting it out there! But I believe it’s a real threat, simply because, looking at how fraudsters behave today, they use whatever means they can, anything that is available to them, to commit fraud or facilitate fraud. And I can see it’s not far from becoming a favourite tool.
Jeremy Cowan 14:31
Yeah. So, looking at the wider telecoms arena, what are the most common types of fraud at the moment? And what can telcos and enterprises do now to better protect themselves and their customers?
Nikolay Gaubitch 14:45
So, the deepfake world is rather advanced compared to what happens in the fraud world today. The telephone is really one of the favourite tools of fraudsters, either to directly commit fraud if they can, or more often to facilitate fraud that does not actually happen on the phone. The benefit of the phone for many fraudsters is that they feel anonymous, right? They make a phone call – your phone doesn’t necessarily have to be connected to your identity any more – and they can talk to somebody in a call centre, usually with hundreds, sometimes thousands of call centre agents, so it feels safe. And then, on the other end, when you speak to a human, you have the opportunity to try and trick them into giving you information they shouldn’t. And that’s partly the reason why fraudsters favour this.
Now, it can be something very small, like a fraudster calling to check an account balance, which seems very innocent and is actually very difficult to detect unless you have the appropriate technology. But what they are doing, effectively, is confirming that the information they have about a bank account is correct, and that it’s worthwhile actually doing something further with that bank account. These are the types of activities that you often see.
I can give you a story I like very much about a real fraudster we came across, and he kind of summarises the behaviour. That’s a guy we dubbed Postman Pat. So, Postman Pat would call a bank and order a replacement bank card and a replacement PIN to the address on file, which is a rather simple and innocent type of activity you can do over the phone. Then he would intercept the physical mail to pick up that card, because he is effectively the only person who knows that there is a new card coming. So, this gives you an example of how the phone can be used as a tool.
Jeremy Cowan 17:01
And it can be very simple, or it can be, as you’ve shown already, extremely complex.
Nikolay Gaubitch 17:07
Exactly.
Jeremy Cowan 17:08
I don’t want this to be all doom and gloom. So, in the best-case scenario, how often are fraudsters actually being caught? I mean, for example, what percentage of attempted frauds are identified as fraud attempts? And do you know what percentage lead to identification or prosecution of the fraudsters?
Nikolay Gaubitch 17:29
There are so many questions in that question. (Laughter) I’m going to try and disentangle that.
Jeremy Cowan 17:34
Unpack that.
Nikolay Gaubitch 17:36
To start with, identifying fraud over the phone without the help of technology is very difficult. If you imagine a call centre that has tens of thousands, sometimes hundreds of thousands, of calls per day, being able to quickly identify whether something is risky or not is virtually impossible. In particular, the call centre agent’s objective is not to listen out for and try to detect fraudsters; they’re trying to give good customer service to us callers. And that’s a fortunate thing, because the last thing we want is to be treated as fraudsters every time we call a bank – but that’s the benefit for the fraudster. And you asked earlier, you know, what can companies and businesses do? Technology gives you this ability; it gives you the ability to actually detect fraud within a very short time into the call. Usually we take about 15 seconds to give you a risk score that tells you there’s something risky about this call.
Jeremy Cowan 18:38
Wow, that is incredible.
Nikolay Gaubitch 18:39
And using that, you’re able to actually detect about 85 to 90% of the fraud attempts coming into a call centre. Going from very close to zero to 85-90% is a pretty huge leap, using artificial intelligence-driven technology.
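[Editor’s note: Pindrop’s scoring system is proprietary and not described in detail here, so purely to illustrate the idea of acting on an early-call risk score, here is a hypothetical sketch. The score_call() function, the 0.8 threshold and the 15-second audio window are illustrative assumptions, not Pindrop’s product or API.]

```python
# Hypothetical sketch: route a call based on a risk score computed from the
# first few seconds of audio. Nothing here reflects any real vendor's system.
from dataclasses import dataclass

@dataclass
class CallAudio:
    call_id: str
    samples: bytes  # roughly the first 15 seconds of caller audio

def score_call(audio: CallAudio) -> float:
    """Placeholder risk model: return a score in [0, 1], where 1 is highest risk."""
    # A real system would analyse device, behaviour and voice characteristics here.
    return 0.0

def route_call(audio: CallAudio, threshold: float = 0.8) -> str:
    """Decide how to handle the call once the risk score is available."""
    risk = score_call(audio)
    if risk >= threshold:
        return "escalate"   # e.g. step-up verification or review by a fraud team
    return "continue"       # treat as a normal customer call

if __name__ == "__main__":
    print(route_call(CallAudio(call_id="demo-001", samples=b"")))
```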
Jeremy Cowan 19:01
Is it possible to say with any confidence how often fraudsters are identified or prosecuted?
Nikolay Gaubitch 19:09
That’s difficult; I don’t have any numbers. And personally I’m not involved, and Pindrop are not really involved, in that side of things. As I said, we give a score. But one interesting thing to think about when it comes to prosecuting fraudsters is that they don’t necessarily have to be located in any one particular place. Fraud is a global activity, so you can have a fraudster on the other side of the globe attacking a bank in the UK. Who would prosecute? How would you find them? So, I think it’s a rather tricky business.
Jeremy Cowan 19:49
Yeah, it does sound as though we need more integration, a forum in which the global banks and global communications companies can get together with companies such as yours to identify ways of deterring this. Even if it’s not always catching people, deterrence is the first line, I guess.
Nikolay Gaubitch 20:09
Absolutely. Yeah, I’m a big proponent of collaboration, because fraudsters collaborate. Fraudsters share information on a global level, and so should we who try to catch them.
Jeremy Cowan 20:24
That’s a scary prospect. Well, this has been really an eye-opener for me, because I know so little about fraud and so little about how it’s been developing in recent years. So, thank you very much for sharing that.
We’ve just got time before we close, Nikolay, for What The Tech, which is our chance to talk about some of the things that amused or amazed us, and I loved your story about Anthony Bourdain’s video. The thing that I really liked was the story from Reuters, shown on CNN and a number of other platforms, headlined “Google Maps helped Italian police capture mafia fugitive in Spain”. Apparently, Italian police have now caught a top mafia fugitive who had been on the run for no less than 20 years, and they did it thanks to Google Maps.
Gioacchino Gammino – I don’t know if I’ve pronounced his name correctly, but I don’t suppose he’s going to come after me for that – is a 61-year-old Sicilian mafioso. He escaped from jail in Rome in 2002 and was sentenced in 2003, in his absence, to life imprisonment for an earlier murder. After a lengthy investigation, Gammino was tracked to a town close to Madrid. But it was only when a man fitting his description was seen standing in front of a fruit shop in a Google Maps Street View image that investigators knew they were looking in the right place. So Gammino is now in custody in Spain, and the authorities hope to bring him back to Italy by the end of February. A quick tip of the hat to Google for their role in this. And I say also a quick chapeau from me to Reuters, who ended their report by saying they had been “unable to locate a representative of Gammino to comment”. (Laughter) That made me smile. So, there’s no end of uses for Street View, Nikolay, and perhaps not always in directions that we might have thought.
Nikolay Gaubitch 22:40
That’s right. I mean, this is a fascinating story of accidental use of technology, let’s say. (Laughter)
Jeremy Cowan 22:48
I love it. Sadly, that is all we have time for today. We’re going to be back with another Trending Tech Podcast very soon. Don’t forget to like the podcast wherever you found us, and please share it with friends. It won’t cost you a thing, and it really helps boost our ranking. So, thank you if you can. But that’s it. I’ve been Jeremy Cowan, talking to Dr. Nikolay Gaubitch of Pindrop. Thanks for all your input, Nikolay.
Nikolay Gaubitch 23:15
Thanks so much, Jeremy. It’s been great.
Jeremy Cowan 23:17
Great to have you here. And ladies and gentlemen, thank you too. Please join us again soon for the next Trending Tech Podcast. Bye for now.