This story was originally published on Banking Dive.
In the face of increasing generative AI-based deepfake attacks, banks must fight fire with fire and invest more in artificial intelligence themselves, Federal Reserve Gov. Michael Barr said Thursday.
Banks must evolve their use of AI to thwart deepfake attacks through facial recognition, voice analysis and behavioral biometrics, he said during a New York Fed event. Technical solutions, he added, can detect subtle inconsistencies in audio or video that point to AI generation and that human observers may miss.
With deepfakes, cybercriminals use generative AI to copy a person’s voice or image and then use that likeness to commit fraud. One in 10 companies has experienced a deepfake attack, a 2024 business.com survey found.
“In the past, a skilled forger could pass a bad check by replicating a person’s signature. Now, advances in AI can do much more damage by replicating a person’s entire identity,” Barr said of deepfakes, which have the “potential to supercharge identity fraud.”
While banks often use voice detection as a tool for identity verification, that technology could become vulnerable to generative AI tools, Barr said.
“If this technology becomes cheaper and more broadly available to criminals — and fraud detection technology does not keep pace — we are all vulnerable to a deepfake attack,” he said.
Cybercrime is an “asymmetrical” game, Barr said, in which fraudsters can cast a wide net at little cost and profit by ensnaring only a small number of victims.
Banks, on the other hand, must “undergo a rigorous review and testing process to mount effective cyber defenses and will thus be slower in developing their defenses.” Even when they prevent many attacks, a single failure can still cost them, he said.
“As we consider this issue from a policy perspective, we need to take steps to make attacks less likely by raising the cost of the attack to the cybercriminals and lowering the costs of defense to financial institutions and law enforcement,” he said.
Banks, Barr said, can employ advanced analytics to flag suspicious activity for further review, and can invest in human controls by keeping staff trained on emerging risks.
The onus is not just on banks: Customers and regulators also have to do their part in preventing such schemes from taking hold, Barr said.