Pindrop: utilising AI to tackle voice fraud threat

Pindrop's Director of Research, Dr. Nikolay Gaubitch on voice fraud threat and how organisations can turn to AI to help tackle the problem

Even in a world with a plethora of digital communication channels, voice remains one of the most important (and natural) ways for people to connect with others. An unfathomable number of calls are made every day, from sales and marketing activity to customer service, and people simply catching up with friends and family. But as with all forms of useful technology, the telephony channel is continually targeted by fraudsters looking to exploit the system. 

Dr. Nikolay Gaubitch is director of research at Pindrop, a company whose origins trace back to a trip co-founder Vijay Balasubramaniyan took to India, where he tried to order a new suit from a local tailor. His bank immediately flagged the international transaction as suspicious and called him to verify the purchase, but Vijay had no way to prove his identity over the phone, so the bank cancelled his order. The experience led him to establish Pindrop in 2011, in order to find a better way for people to authenticate over the phone.

With the telephony channel now firmly at the fore of the fraud landscape, Gaubitch provides an engaging overview of the ins and outs of voice fraud - and why businesses should be taking it extremely seriously.

Why does voice fraud fall under the radar in many organisations?

In an increasingly remote working environment, call centres have become an important channel for organisations to connect with their customers. Yet even in the digital age, securing the telephony channel has rarely been a top priority for businesses.

With the majority of business carried out online, organisations have long secured their digital channels, helped by the plethora of options available in the market. The telephone channel, however, rarely receives the same level of protection or regulation.

What tactics and techniques do fraudsters typically use to target organisations?

Most commonly, fraudsters rely on social engineering techniques. They often pose as their victims with the objective of obtaining the information required to perform malicious attacks. This information is typically gathered online, over the telephone, or, in its rawest form, from a rubbish bin. Fraudsters then use the telephony channel to impersonate a legitimate customer, verify the gathered information, or trick an agent into carrying out fraudulent transactions.

Taking this a step further, some fraudsters carry out what we call intercept attacks, where they are on the phone to both an organisation's call centre and the victim at the same time. This technique lets fraudsters gather the relevant data in real time and authenticate as the customer through the traditional method of knowledge-based authentication (KBA), where the caller must provide information such as their mother's maiden name or month of birth.

Can organisations call on technology to help them to detect fraud?

Absolutely! When we talk about using technology to combat fraud it’s useful to look at two sides of the coin – fraud detection and authentication. 

When combating voice fraud, it’s vital to look at both stopping the fraudsters and ensuring good customer experience isn’t compromised. Stopping fraudsters in a timely manner without impacting the experiences of genuine callers is not something humans can accomplish alone. This is where technology, and more specifically artificial intelligence (AI) and machine learning (ML), can play a vital role.

No human can be expected to monitor for signs of fraud across the hundreds of calls they may take in a day. Instead, organisations can implement an anti-fraud solution that runs on AI and machine learning in their call centres. 

When it comes to detecting fraud, the technology can passively analyse audio, voice, behaviour, and metadata from every call with the aim to detect any subtle signs that indicate a potential fraudster. 
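As a purely illustrative sketch of this idea (not Pindrop's actual system), the per-signal analyses can be thought of as producing risk scores that are combined into a single call-level score; the signal names, weights, and threshold below are assumptions for the sake of the example:

```python
# Hypothetical multi-signal fraud scoring -- signal names, weights, and
# the threshold are illustrative assumptions, not a real product's logic.

# Per-signal risk scores in [0, 1], e.g. produced by upstream ML models
# analysing a call's audio artefacts, voice, caller behaviour and metadata.
WEIGHTS = {"audio": 0.3, "voice": 0.3, "behaviour": 0.2, "metadata": 0.2}

def call_risk(signals: dict[str, float]) -> float:
    """Weighted combination of per-signal risk scores for one call."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def flag_call(signals: dict[str, float], threshold: float = 0.7) -> bool:
    """Flag the call for review when the combined risk exceeds the threshold."""
    return call_risk(signals) >= threshold
```

Because the scoring runs passively on every call, no agent action is required; only calls whose combined score crosses the threshold need human attention.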

For authentication, the technology can be used in addition to or instead of the traditional KBA I mentioned earlier. Such technology can determine the caller’s identity quickly and seamlessly by creating unique multi-factor credentials based on the device, voice, and behaviour of the customer. This gives the call agent peace of mind and the confidence that they are speaking to a legitimate customer. The key benefit here is that the call agent can service the customer faster and in a more personalised way, rather than treating them as a potential fraudster. 

Fraud detection and customer authentication complement each other. If a fraudster attempts to trick the authentication system, the fraud detection element will step in, and vice versa. 
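To illustrate how the two sides complement each other (again a hypothetical sketch, not an actual product API), a call could be routed based on both an authentication confidence score and a fraud risk score, with each side able to override the other:

```python
# Hypothetical call-routing logic -- the thresholds and decision labels
# are illustrative assumptions, not a real product's behaviour.

def route_call(auth_score: float, fraud_risk: float) -> str:
    """Decide how to handle a call given multi-factor authentication
    confidence and fraud-detection risk, both in [0, 1]."""
    if fraud_risk >= 0.8:
        return "block"       # strong fraud signal overrides authentication
    if auth_score >= 0.9 and fraud_risk < 0.3:
        return "fast-track"  # confident match, low risk: skip KBA questions
    return "step-up"         # otherwise fall back to extra verification (e.g. KBA)
```

The point of the structure is that a fraudster who fools the authentication factors still faces the fraud-risk check, and a genuine caller flagged by one noisy signal is stepped up rather than blocked outright.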
