AI-powered biometrics helping fight the rising tide of fraud

AI-powered tools such as behavioural biometrics prevent fraud by detecting high-risk scenarios and helping institutions make better decisions

Already a familiar sight in technology from smartphones to laptops, biometric capabilities are increasingly embedded in everyday life.

With fraud on the rise, biometric technology has become increasingly popular in recent years, with the global biometric market's revenue projected to hit US$83bn by 2027.

Banks worldwide are expected to spend an additional US$31bn on AI embedded in existing systems by 2025 to reduce fraud, according to an IDC report, which also said fraud management featured strongly as a priority for banking executives.

AI-powered biometric verification helping combat identity fraud

Mitek brings the future to business with patented solutions and intuitive technologies that bridge the physical and digital worlds. The company’s leadership in identity verification, including facial biometrics, image capture technology and ID card verification enables customers to confidently onboard users, verify identities within seconds and strengthen security against cybercrimes. 

As Chris Briggs, Global SVP of Identity at Mitek, explains, AI-powered behavioural biometrics prevent fraud by detecting high-risk scenarios and helping institutions make better decisions: “For example, if a customer who logs in twice each month suddenly starts logging in more frequently or if a client who always types their password in copies and pastes the password from a different location, those pattern anomalies signal that these logins carry additional risk.”
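The kind of pattern anomalies Briggs describes can be sketched as simple risk signals. The following is a hypothetical, rule-based illustration (not Mitek's actual model); the function name, the 3x frequency threshold and the paste-detection flag are all assumptions for the example.

```python
# Hypothetical rule-based login-risk signals: flag pattern anomalies such
# as a spike in login frequency, or a password pasted rather than typed.

def login_risk_signals(logins_this_month, baseline_per_month, password_pasted):
    """Return a list of risk signals for a login attempt."""
    signals = []
    # A customer who normally logs in twice a month suddenly logging in
    # far more often is a pattern anomaly.
    if logins_this_month > 3 * baseline_per_month:
        signals.append("unusual login frequency")
    # Copy-pasting a password the customer always types is another anomaly.
    if password_pasted:
        signals.append("password pasted instead of typed")
    return signals

print(login_risk_signals(logins_this_month=9, baseline_per_month=2,
                         password_pasted=True))
# prints: ['unusual login frequency', 'password pasted instead of typed']
```

In a real system these signals would feed a risk score rather than a hard block, so that a flagged login triggers step-up authentication instead of locking a legitimate customer out.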

Research released by ID R&D, a provider of AI-based voice and face biometrics and liveness detection technologies, found that humans have greater difficulty identifying images of biometric spoofing attacks compared to computers performing the same task, with machines as much as 10x faster than humans at detecting fake faces.

“Furthermore, AI-powered liveness detection is better than humans at recognising what is real and what is not. It ensures the integrity of a biometric match by distinguishing both identity and liveness, through AI. For example, fraudsters are no longer able to bypass screening processes with photos of a printed image. Also, fraudsters cannot work around liveness, mitigating the threat of identity fraud.”

Biometric authentication and behavioural biometrics

With Verizon’s 2022 Data Breach Investigations Report finding that stolen password credentials were involved in 61% of all company data breaches last year, password technology is clearly no longer sufficient.

Enter biometric technology, where facial recognition or voice verification is used to verify a customer’s identity. “The software measures the capture to create a baseline data point template, or the ‘lock’, that will be the determining data point for future uses,” comments Briggs. “This means that only the matching biometrics provided – whether physiological or behavioural characteristics – will confirm a person's identity and unlock the service or account.”
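The enrolment-and-match flow Briggs describes can be sketched as follows. This is a minimal illustration under assumed details: the template is treated as a numeric feature vector, matching uses cosine similarity, and the 0.95 threshold and the vectors themselves are made up for the example.

```python
import math

# Hypothetical biometric enrolment and matching: the capture is reduced to
# a numeric template (the "lock"), and a future capture unlocks the account
# only if it is similar enough to that baseline.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(template, capture, threshold=0.95):
    """True only if the new capture matches the enrolled template."""
    return cosine_similarity(template, capture) >= threshold

enrolled = [0.12, 0.80, 0.55, 0.31]   # baseline template from enrolment
genuine  = [0.13, 0.79, 0.56, 0.30]   # a later capture of the same person
imposter = [0.90, 0.10, 0.20, 0.75]   # a different person's capture

print(verify(enrolled, genuine))   # prints: True
print(verify(enrolled, imposter))  # prints: False
```

The threshold is the key tuning knob: raising it cuts false accepts (fraudsters getting in) at the cost of more false rejects (genuine customers being challenged).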

As Adam Desmond, Sales Director EMEA at OCR Labs, explains, biometric authentication is leading the charge in the growing fight against identity fraud: “Banks are already using AI-powered facial biometrics in conjunction with liveness detection to verify faces and documents. 

“But voice technology offers the next level up in powerful and convenient biometrics, with a critical role to play in improving anti-fraud defences. In fact, when combined with face biometrics, voice is one hundred times more powerful than face alone. In our experience, the combination of both voice and face biometrics makes the verification process almost impenetrable by fraudsters.”
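One way to see why layering voice on top of face strengthens verification: if the two checks fail independently, their false-accept rates multiply. The rates below are made-up figures for illustration only, chosen so the arithmetic echoes the hundredfold improvement Desmond cites; real rates vary by vendor and deployment.

```python
# Back-of-envelope illustration (assumed, illustrative rates): if face and
# voice checks fail independently, a fraudster must fool both, so the
# combined false-accept rate is the product of the individual rates.

far_face = 1e-4    # hypothetical face-only false-accept rate
far_voice = 1e-2   # hypothetical voice-only false-accept rate

far_combined = far_face * far_voice  # both checks must be fooled
improvement = far_face / far_combined

print(f"combined FAR: {far_combined:.0e}")                        # 1e-06
print(f"{improvement:.0f}x fewer false accepts than face alone")  # 100x
```

The independence assumption is optimistic (a high-quality deepfake may attack both modalities), but the multiplicative effect is why multimodal biometrics are markedly harder to defeat than any single factor.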

Going a step further, behavioural biometrics can detect unusual patterns of behaviour to improve security, using what Briggs describes as a ‘behavioural signature’.

“Behavioural biometrics uses customers’ digital breadcrumb trails, as well as how customers approach online logins, to effectively create a behavioural signature that fraudsters are hard-pressed to emulate. Implementing behavioural pattern analysis into continuous verification frameworks adds an additional layer of security that is difficult even for the most sophisticated fraudsters to crack.”
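Continuous verification against a behavioural signature can be sketched as comparing a session's measurements with the customer's historical profile. This is a hypothetical, simplified example: the two features (typing interval, mouse speed), the z-score approach and the 2.0 cut-off are all assumptions; production systems model far more signals.

```python
import statistics

# Hypothetical continuous-verification check: score how far a session's
# behavioural features deviate from the customer's historical profile.

def deviation_score(history, observed):
    """Mean absolute z-score of observed features vs. the user's history."""
    zs = []
    for feature, value in observed.items():
        mean = statistics.mean(history[feature])
        stdev = statistics.stdev(history[feature])
        zs.append(abs(value - mean) / stdev)
    return sum(zs) / len(zs)

history = {
    "typing_interval_ms": [180, 175, 185, 178, 182],
    "mouse_speed_px_s":   [410, 395, 420, 405, 400],
}
normal_session  = {"typing_interval_ms": 181, "mouse_speed_px_s": 408}
suspect_session = {"typing_interval_ms": 95,  "mouse_speed_px_s": 640}

print(deviation_score(history, normal_session) < 2.0)   # True: matches signature
print(deviation_score(history, suspect_session) > 2.0)  # True: flag for step-up auth
```

Because the score is computed continuously across a session rather than once at login, a fraudster who passes the initial check but behaves differently afterwards can still be challenged.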

AI bias in identity verification

Biometric technology itself is not inherently biased – it is the design of biometric technology that can introduce discrimination.

But as Briggs explains, to tackle AI bias in identity verification, we first need a way to evaluate biometric bias: “There is currently no standardised, third-party measurement for evaluating demographic bias in biometric technologies.

“The industry needs a way to evaluate the equity and inclusion of biometric technologies. This would give service providers a way to ensure that their solution is equitable, regardless of whether it was built in-house or based on third-party technology from a vendor. This benchmark would provide the public with the information they need to select a service provider that’s more equitable.”

Determining ‘what is right’ goes beyond creating accuracy benchmarks – creating ethical guidelines is essential. 

“AI ethical guidelines would solidify the rights and freedoms of individuals using or subject to data-driven biometric technologies,” Briggs explains. “Until we define what is and is not an ethical use of biometric technology, there is no way to understand what is ‘right’.”

The rise of transparency

Ricardo Amper, CEO of digital identity company Incode, told Cyber Magazine that the potential of AI-powered biometrics hinges above all on trust.

“But trust is crucial for its success; the impact that biometrics can have can be substantially limited if there is a lack of trust in how the technology is used,” he explained. “Users need to be sure that their data is being used for the purpose it was given and nothing else. This trust is hard to come by, but if gained, then there will be no limit to the impact biometrics can have on our lives.”

As Briggs explains, the rise of transparency will be an important component of maintaining consumer trust in biometrics: “Consumers are becoming increasingly aware of exactly who and/or what technologies they are interacting with and how the data they are providing is being used. Over the next year, we can expect the leading technology innovators to lead by example – setting rules of engagement and standards for collecting and managing data.

“Separately,” Briggs concludes, “to reduce the risk of new cyberthreats, regulators should begin to define a clear private-public framework to address these problems within the private sector. Consumers should believe the regulators have their backs, but fully understand they themselves are responsible for all aspects of their data and identities.”
