More than half of banking fraud now involves AI, with banks using the same tools to fight back – but unlike the criminals, they’re hampered by ethical considerations.
Research from financial crime prevention platform Feedzai revealed that 92% of financial institutions are seeing fraudsters use generative AI.
More than four in ten financial professionals reported that deepfakes are being used in fraudulent schemes, and 56% said they had experienced AI-powered social engineering attacks.
Six in ten said that voice cloning was a major concern, with a similar number citing SMS and phishing scams using AI.
“Today’s scams don’t come with typos and obvious red flags – they come with perfect grammar, realistic cloned voices, and videos of people who’ve never existed,” said Anusha Parisutham, Feedzai senior director of product and AI.
“We’re seeing scam techniques that feel genuinely human because they’re being engineered by AI with that intention. But now, financial institutions also have to deploy advanced AI technologies to fight fire with fire to combat scams.”
Financial institutions are fighting back with AI of their own: nine in ten already use AI-powered solutions to detect and prevent fraud, and two-thirds have integrated AI within the past two years.
And it’s working: four in ten said AI had helped them cut fraud losses by between 40% and 60%, and 43% reported a 40-60% improvement in efficiency.
A similar number are using AI to expedite fraud investigations and detect new tactics in real time, with half using it for scam detection, 39% for transaction fraud, and three in ten to combat money laundering. Most said they expected AI-driven behavioral analytics to make an impact on fraud prevention.
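The report doesn’t describe how these behavioral-analytics models are built, but the basic idea is to learn a customer’s normal transaction behavior and flag deviations. A minimal sketch of that idea, using scikit-learn’s IsolationForest on entirely hypothetical transaction features (not Feedzai’s actual approach):

```python
# Minimal sketch of behavioral analytics for fraud detection.
# Illustrative only: feature names and data are hypothetical assumptions,
# not the models described in the report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-transaction features: amount, hour of day,
# days since last transaction, transactions in the past hour.
normal = np.column_stack([
    rng.normal(80, 30, 1000),   # typical amounts
    rng.normal(14, 4, 1000),    # daytime activity
    rng.exponential(2, 1000),   # regular cadence
    rng.poisson(1, 1000),       # low burst rate
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score a new transaction: -1 = anomalous (flag for review), 1 = normal.
suspicious = np.array([[5000.0, 3.0, 0.01, 20.0]])  # large amount, 3am, burst
print(model.predict(suspicious))  # likely [-1]
```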
However, unlike fraudsters, financial institutions have to take data management and ethical considerations into account. They must comply with strict privacy regulations such as GDPR and CCPA, while guarding against biased data through diverse datasets, rigorous bias testing, and human oversight.
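To make “rigorous bias testing” concrete, one common check is comparing a fraud model’s false positive rate across customer segments. A minimal sketch under assumptions of ours (the segment labels, toy data, and the 1.25x disparity threshold are all hypothetical, not from the report):

```python
# Hypothetical sketch of a simple bias check: compare false positive
# rates of a fraud model across customer segments. Segment names, data,
# and the 1.25x disparity threshold are illustrative assumptions.
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of genuinely legitimate transactions that were flagged as fraud."""
    legit = y_true == 0
    return float(np.mean(y_pred[legit] == 1)) if legit.any() else 0.0

# Toy labels (0 = legitimate, 1 = fraud) and model predictions per segment.
segments = {
    "segment_a": (np.array([0, 0, 0, 1, 0]), np.array([0, 1, 0, 1, 0])),
    "segment_b": (np.array([0, 0, 0, 0, 1]), np.array([1, 1, 0, 1, 1])),
}

fprs = {name: false_positive_rate(y, p) for name, (y, p) in segments.items()}
print(fprs)

# Route to human oversight if one segment is flagged far more often.
if max(fprs.values()) > 1.25 * max(min(fprs.values()), 1e-9):
    print("Disparity detected -- escalate for review / retraining.")
```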
As a result, nearly nine in ten banks cited data management as their biggest hurdle, with fragmented data sources and regulatory constraints slowing their adoption of AI. Smaller institutions struggled the most.
A similar number said they prioritize explainability and transparency in their AI systems – critical in regulated areas like AML, where institutions are required to justify their decisions to regulators.
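The article doesn’t say how institutions implement that explainability; one common pattern is to favor models whose individual decisions can be decomposed into per-feature contributions and reported to a regulator. A minimal sketch using an inherently transparent logistic regression (the feature names and training data are hypothetical):

```python
# Hypothetical sketch of explainable fraud scoring: a logistic regression
# whose per-feature contributions to each flag can be justified.
# Feature names and training data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["amount_zscore", "new_payee", "foreign_ip", "night_time"]
X = np.array([
    [0.1, 0, 0, 0],
    [0.2, 0, 1, 0],
    [3.5, 1, 1, 1],
    [2.8, 1, 0, 1],
    [0.3, 0, 0, 1],
    [3.1, 1, 1, 0],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = fraud

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the log-odds
# is simply coefficient * feature value -- easy to report.
tx = np.array([3.2, 1, 1, 1])
contributions = model.coef_[0] * tx
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f}")
print("fraud probability:", model.predict_proba(tx.reshape(1, -1))[0, 1])
```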
“In some ways, AI is like a car. When automakers design a car, they don’t just think about horsepower. They also consider safety features such as seatbelts, airbags, and anti-lock brakes that will keep drivers and passengers safe. The same is true for AI. Models that aren’t designed with trust at the forefront can lead to significant problems for users,” said Pedro Bizarro, co-founder and chief science officer at Feedzai.
“By ensuring that AI decisions are transparent, robust, unbiased, secure, and tested (TRUST), businesses will accelerate innovation and reinforce customer confidence.”