Deepfakes and the AI arms race in bank cybersecurity
By Governor Michael S. Barr
At the Federal Reserve Bank of New York, New York, April 17, 2025

In the past, a skilled forger could pass a bad check by replicating a person’s signature. Now, advances in AI can do much more damage by replicating a person’s entire identity. This technology, known as deepfakes, has the potential to supercharge identity fraud. I’ve recently spoken about the importance of recognizing both the benefits and the risks of generative AI (Gen AI). Today, I’d like to focus more on the darker side of the technology: specifically, how Gen AI has the potential to enable deepfake technology, and what we should be doing now to defend against this risk in finance.
Escalating threat of Gen AI-facilitated cybercrime
Cybercrime is on the rise, and cybercriminals are increasingly turning to Gen AI to facilitate their crimes. Criminal tactics are becoming more sophisticated and available to a broader range of criminals. Estimates of direct and indirect costs of cyber incidents range from 1 to 10 percent of global GDP. Deepfake attacks have seen a twentyfold increase over the last three years.
Cybercrime with deepfakes involves the same cat and mouse game common to sophisticated criminal activity. Both cybercriminals and financial institutions are constantly trying to outdo each other. Criminals develop new attack methods, and companies respond with better defenses. Here, the same technological innovations that enable the bad actors can also help those fighting cybercrime.
However, there is an asymmetry: fraudsters can cast a wide net, trying many approaches against a large number of victims, and they need only a small number to succeed. Their marginal cost is generally low, and individual failures matter little. Companies, by contrast, must undergo a rigorous review and testing process to mount effective cyber defenses and will thus be slower to develop them. For them, a single failure is very costly. As we consider this issue from a policy perspective, we need to take steps to make attacks less likely by raising the cost of an attack to cybercriminals and lowering the cost of defense to financial institutions and law enforcement.
Anatomy of a deepfake
Deepfake attacks are those in which an attacker uses Gen AI to create a doppelganger with a person’s voice or image and uses this doppelganger to interact with individuals or institutions to commit fraud. Deepfake technology is a particularly pernicious vehicle for cybercrime. The process begins with voice synthesis: Gen AI models can synthesize the speech of a victim not only in words, but also in phrase patterns, tone, and inflection. With just a short audio sample, for example, criminals assisted by Gen AI can impersonate a close relative in a crisis situation, or a high-value bank client seeking to complete a transaction at their bank.
Criminals can also use Gen AI-generated videos to create believable depictions of individuals. For video, Generative Adversarial Networks (GANs) are the core technology behind most deepfake systems. GANs consist of two competing models, a generator and a discriminator, that improve each other through competition: the generator tries to produce fakes the discriminator cannot tell from real footage, while the discriminator learns to catch them. The result is fake images and videos that are increasingly realistic and hard to distinguish from genuine ones.
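To make that mechanism concrete, here is a minimal, illustrative sketch of the adversarial training loop in Python (using PyTorch). The network sizes, data, and training details are placeholder assumptions for exposition, not the architecture of any actual deepfake system.

```python
# Minimal sketch of the GAN training loop described above (PyTorch).
# Shapes, layer sizes, and the data source are illustrative placeholders.
import torch
import torch.nn as nn

latent_dim, image_dim = 100, 64 * 64  # illustrative sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),          # produces a fake "image"
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),               # probability the input is real
)

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, image_dim)               # stand-in for real training images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # The discriminator learns to separate real from fake.
    d_opt.zero_grad()
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # The generator learns to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```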
Deepfake technology can also be augmented by other AI tools; for instance, criminals can use AI to extract and organize extensive multimodal personal data on a target, helping them pass identity verification checks while impersonating their victims. Attackers can also turn to “dark web” tools, such as jailbroken versions of popular large language models with the guardrails removed, to learn the deepfake trade and improve their attacks.
Deepfakes in action
I expect that many of you can recall examples of how deepfakes of politicians and prominent business executives have fooled the public and spread disinformation. Deepfakes are also being used to commit payment fraud. In one case in 2024, a sophisticated deepfake of the chief financial officer for British engineering and architectural firm Arup was reportedly deployed in a video meeting and convinced an Arup financial employee to transfer $25 million to thieves.
In another case, an attacker used a highly convincing audio deepfake of the chief executive of Ferrari, down to mimicking his southern Italian accent. The recipient of the attack, another Ferrari executive, tested the caller with a personal question only the chief executive would know, which thankfully exposed the fraud.
And these institutions and individuals are not alone: a 2024 survey found that over 10 percent of companies had experienced deepfake fraud attempts, yet few had taken steps to mitigate the risks.
Particularly since COVID, we conduct much of our professional and personal lives over video. When we see a realistic, interactive video image of a loved one in trouble, we are disposed to trust it and do what we can to help. Identity verification at banks often relies on voice recognition, which may become vulnerable to Gen AI tools. If this technology becomes cheaper and more broadly available to criminals, and fraud detection technology does not keep pace, we are all vulnerable to a deepfake attack. These attacks can impose significant financial costs on the victims of the crime and can also impose costs on society, eroding trust in communications and in institutions.
Defending against deepfakes
So what should we do? As I mentioned above, we should take steps to lessen the impact of attacks by making successful breaches less likely, while making each attack more resource-intensive for the attacker.
Let me start with ways to make successful breaches less likely. A key step is to recognize the importance of strong, resilient financial institutions in preventing attacks. Banks are frontline defenders against deepfake-enabled fraud due to their direct involvement with financial transactions and customer data. To verify payors, banks maintain identity verification processes, including multi-factor authentication and account monitoring practices.
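As one concrete illustration of such a control, the sketch below shows a time-based one-time-password (TOTP) check, a common second factor, using the open-source pyotp library; the enrollment step and the way the secret is handled here are hypothetical simplifications.

```python
# Illustrative sketch of time-based one-time-password (TOTP) checking,
# one common form of multi-factor authentication. Enrollment and storage
# are hypothetical simplifications.
import pyotp

# At enrollment, a per-customer secret is generated and provisioned to the
# customer's authenticator app (typically via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def verify_second_factor(submitted_code: str) -> bool:
    # valid_window=1 tolerates small clock drift between client and server.
    return totp.verify(submitted_code, valid_window=1)

# Example: the code the customer's authenticator app would show right now.
print(verify_second_factor(totp.now()))   # True
print(verify_second_factor("000000"))     # almost certainly False
```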
To the extent deepfakes increase, bank identity verification processes should evolve in kind to include AI-powered advances such as facial recognition, voice analysis, and behavioral biometrics to detect potential deepfakes. Other techniques focus on assessing the probability that AI has been used in audio or video based on underlying metadata and then flagging the identity or transaction for further review using other verification. These technical solutions can detect subtle inconsistencies in video and audio that human observers may miss.
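The sketch below illustrates the flag-and-escalate pattern just described, combining a detector score with metadata checks and routing high-risk interactions to a second verification channel. The signals, weights, and threshold are hypothetical assumptions, not any particular vendor’s detector.

```python
# Hypothetical sketch of flagging likely AI-generated media for further review.
# The scoring function and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MediaSignals:
    missing_capture_metadata: bool   # no device/camera provenance fields
    reencoded_multiple_times: bool   # compression history inconsistent with a live call
    detector_score: float            # 0..1 output of an audio/video deepfake classifier

def synthetic_media_risk(signals: MediaSignals) -> float:
    """Combine weak signals into a single 0..1 risk estimate (illustrative weights)."""
    score = 0.6 * signals.detector_score
    score += 0.2 if signals.missing_capture_metadata else 0.0
    score += 0.2 if signals.reencoded_multiple_times else 0.0
    return min(score, 1.0)

def route_verification(signals: MediaSignals, threshold: float = 0.5) -> str:
    # Above the threshold, fall back to a second, independent verification channel.
    if synthetic_media_risk(signals) >= threshold:
        return "step-up verification: call back on a number on file"
    return "proceed with standard checks"

print(route_verification(MediaSignals(True, True, 0.7)))    # escalates
print(route_verification(MediaSignals(False, False, 0.1)))  # standard path
```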
Banks have two points of control over the transaction—confirming not only the sender’s identity, but also the legitimacy of the recipient address. They can scrutinize the recipients of large or unusual transactions, employing advanced analytics to flag suspicious patterns that could indicate fraudulent activities, and perform additional reviews before authorizing a payment to a recipient that raises flags. Banks also invest in their human controls by maintaining up-to-date training for staff on the emerging risks and incorporating the necessary security measures to mitigate the damages from breaches when they occur. And they are engaging with other financial institutions to help define the threat and identify appropriate controls and mitigants.
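As a simple illustration of the kind of analytic mentioned above, the sketch below scores a payment against a customer’s history and holds it for review when it looks anomalous; the fields and thresholds are hypothetical.

```python
# Illustrative sketch of the recipient-side control described above: score a
# payment against the customer's history and hold it for review if anomalous.
# Thresholds and fields are hypothetical assumptions.
from statistics import mean, pstdev

def payment_needs_review(amount: float,
                         recipient: str,
                         past_amounts: list[float],
                         known_recipients: set[str]) -> bool:
    new_recipient = recipient not in known_recipients
    if len(past_amounts) < 5:
        # Too little history to model behavior: review anything large to a new payee.
        return new_recipient and amount > 10_000
    mu, sigma = mean(past_amounts), pstdev(past_amounts) or 1.0
    z = (amount - mu) / sigma            # how unusual is this amount for the customer?
    return (new_recipient and z > 2.0) or z > 4.0

history = [120.0, 80.0, 250.0, 60.0, 300.0, 90.0]
payees = {"acct:utilities", "acct:landlord"}
print(payment_needs_review(25_000.0, "acct:unknown-offshore", history, payees))  # True -> hold
print(payment_needs_review(150.0, "acct:landlord", history, payees))             # False
```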
Customers should do their part, enabling multi-factor authentication on their accounts and verifying unusual requests through a separate channel, even if the person making the request seems genuine. They should seek out education for themselves and their loved ones to help them detect and prevent fraud before it occurs. And customers should value strong security practices at their financial institutions, including practices that add some friction to the user experience. The customers who are the highest-value targets for criminals are often those with the largest digital presence, and thus the most susceptible to deepfakes. They are also the customers who may prefer the most frictionless user experience, which makes detecting deepfakes harder. When it comes to protecting our money, we ought to expect and appreciate a little friction.
Regulators can help to reinforce the importance of cyber defenses in safe and sound banking through appropriate updates to guidance and regulation. As with all rules, we should be mindful of the impacts on smaller institutions and help ensure that rules are right-sized for the risk. In addition, we can work with core providers to understand the extent to which they are incorporating AI advancements in their products and services to help smaller banks defend against deepfakes and other emerging risks from the technology. Last, we can also highlight research and development, including by cybersecurity startups, on tools to combat deepfakes and Gen AI-based fraud.
Regulators should consider how we could leverage AI technologies ourselves, including to enhance our ability to monitor and detect patterns of fraudulent activity at regulated institutions in real time. This could help provide early warnings to affected institutions and broader industry participants, as well as to protect our own systems.
In addition to preventing attacks, we should also explore ways of making attacks more costly. These may include coordination with domestic and global law enforcement, internationally consistent laws against cybercrime, and continued improvement in sharing threat intelligence and insights in real time. The official sector and banks should continue efforts to improve fraud data sharing within the financial sector and help institutions respond more quickly to emerging Gen AI-driven threats. This will make it far harder for fraudsters to operate undetected, increasing the complexity and cost of their activities. But the sharing is only as good as the data, and banks must do their part. We should help ensure that banks and other regulated institutions meet their duties to report cyber incidents in a timely way, and regulators should do the same.
Another way to disrupt the economics of cybercrime is by increasing penalties for attempting to use Gen AI to commit fraud and increasing investment in cybercrime enforcement. This includes targeting the upstream organizations that benefit from illegal action and strengthening anti-money-laundering laws to disrupt illicit fund flows and freeze assets related to cybercrime. The fear of severe legal consequences could help to deter bad actors from pursuing AI-driven fraud schemes in the first place.
Conclusion
Deepfakes are only one of many new techniques to facilitate cyberattacks, but they feel particularly salient because they are so personal. And they are on the rise.
We will need financial institutions to adapt, collaborate, and innovate in the face of these emerging threats.