March 05, 2024

AI Presents Both Risks and Rewards

Author

Jordan Bennett

Senior Director, Network Risk Management

Nacha

Artificial Intelligence (AI) leapt into the public’s lives last year with ChatGPT and has been ever-present in the conversations of risk professionals since. AI is improving rapidly and is far more capable than it was only a year ago. Nacha’s Risk Management Advisory Group (RMAG) members met recently and discussed what AI means for financial institutions, the banking industry, and everyday consumers and businesses.

Fraud is more convincing and targeted with AI

Fraudsters are quick to adopt the new technology and tools available to them. They use deepfakes, a technology enabled by AI, to create realistic audio and/or video of celebrities and of regular people. The technology uses audio or video clips available online or taken from short conversations fraudsters record when you answer one of their phone calls. Deepfake audio and video are used to enhance common social engineering schemes and lower our guard. A recent news story featured an executive who sent millions of dollars to a fraudster at what he believed was the request of his company president. This “president” made the request via video conferencing. Only it wasn’t the president, but a deepfake created from publicly available video.

Before you say “this couldn’t happen to me,” consider what’s on your social media profile. What content have you created professionally? Have you or representatives from your company created podcasts, public presentations, or even advertising that features audio and video? If so, your likeness or the likeness of individuals at your company can be used to create a deepfake.

AI is being used to scan large sets of data from social media and other sources to look for connections and target individuals. Activities, locations, and other information on social media can be used to better target a victim and make a scheme more believable. Knowing that your CEO is giving a presentation at a large conference or that a family member is out of the country on vacation is useful information to fraudsters. These are perfect opportunities to target connected colleagues or family members with urgent and believable requests for money. 

The proper response is not to shut down all public appearances and social media; that’s neither realistic nor effective for promoting ourselves or our companies. We should be aware that these tools are available to fraudsters, who are using them to enhance social engineering schemes like business email compromise (BEC), payroll impersonation, vendor impersonation, and old-fashioned spoofing or phishing. Financial institutions, businesses, and consumers should continue to use the same controls and techniques that have proven effective in combating other forms of social engineering.

  • Understand that these attacks can start with email messages, phone calls, faxes, video conferences, or letters in the mail. Don’t assume the fraud risk posed by AI is purely a cybersecurity issue.
  • Educate and train employees to recognize, question, and independently authenticate changes in payment instructions, requests for secrecy, pressure to take action quickly, and any change of payment method (e.g., ACH to wire). 
  • Authenticate information using a different communication channel than the one on which the original message was received.
  • Review accounts daily. 
  • Initiate payments using dual controls, especially for payments over a threshold set by your organization (see the sketch after this list).
  • Be careful what is posted on social media and other public sites and do not share nonpublic business information on social media. The fraudster will use this information to target you. 
  • To make impersonation and spoofing harder, consider registering domains that closely resemble the company’s actual domain.
  • Do not use the “reply” option when authenticating emails for payment requests. Instead, use the “forward” option and type in the correct email address or select it from a known address book. Even better: Confirm instructions using another communication method. For example, call the person at a number you know to be correct. Make vendor payment forms available only via secure means or to known entities.
  • Require that changes to payment account information be made or confirmed only by site administrators, and use methods like verification codes sent to existing contacts.
  • Do not ignore calls from a financial institution questioning the legitimacy of a payment.
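
As promised above, here is a minimal sketch of the dual control bullet: a payment over a set threshold cannot be released until a second, different employee approves it. This is an illustration only; the Python class, the field names, and the $10,000 threshold are assumptions for the example, not a Nacha specification.

    # Minimal dual-control sketch. The threshold is an assumption for this
    # example; each organization sets its own.
    DUAL_CONTROL_THRESHOLD = 10_000.00

    class Payment:
        def __init__(self, amount: float, initiated_by: str):
            self.amount = amount
            self.initiated_by = initiated_by
            self.approved_by = None

        def approve(self, approver: str) -> None:
            # Dual control: the approver must differ from the initiator.
            if approver == self.initiated_by:
                raise PermissionError("initiator cannot approve their own payment")
            self.approved_by = approver

        def can_release(self) -> bool:
            # Payments under the threshold release on initiation; larger
            # payments wait for a second set of eyes.
            if self.amount < DUAL_CONTROL_THRESHOLD:
                return True
            return self.approved_by is not None

    # Example: a $25,000 payment stays held until someone else approves it.
    payment = Payment(25_000.00, initiated_by="a.clerk")
    assert not payment.can_release()
    payment.approve("b.supervisor")
    assert payment.can_release()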

We, the good guys, can use AI too

Financial institutions and other organizations in the payments flow, with access to payments data and history, can use AI to help us and our customers. The goal is to identify fraudulent transactions and keep fraudsters from accessing money they’ve received by convincing a victim to push funds to their accounts. Modern financial institutions can’t hire enough employees to review every transaction in real time, but they can use AI to detect unusual payment patterns and flag specific transactions for their employees to review. The financial institution can flag a transaction, investigate, and contact the customer before the customer is aware that they’ve been a victim of fraud. A few examples illustrate what AI can look for (a simple rules-based sketch follows the list):

  • Large transactions. Monitor for large transactions that are not consistent with the account. ODFIs can monitor transactions sent by their Originators and see entries set to leave their customers’ accounts. RDFIs can monitor transactions received by Receiver accounts; they don’t want to give mules or fraudsters access to funds at their institutions.
  • Account type mismatches. Why is a consumer account receiving a large corporate transaction? This can even be detected by ODFIs if they instruct AI to look at what other Originators have sent to a particular Receiver.
  • Velocity monitoring. Is the transaction velocity consistent with the history on this account or similar accounts? Is the account receiving multiple new payroll transactions?
  • New relationships. Does your customer have previous transactions with this counterparty? Are the transactions consistent in timing and value with previous transactions? ODFIs: Has your customer sent money to this account previously? Is this a large transaction as defined by your institution? RDFIs: Has your customer received money from this account previously? Is this a large transaction as defined by your institution?
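
To make these checks concrete, here is the minimal rules-based sketch promised above, in Python, that flags an entry for employee review. It is an illustration under stated assumptions: the record layout, the field names, the SEC-code shorthand (PPD for consumer, CCD for corporate), and every threshold are hypothetical, and a production system would tune such rules, or learn them, against real account history.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from statistics import mean

    # Hypothetical transaction record; field names are illustrative.
    @dataclass
    class Transaction:
        counterparty: str    # identifier for the other party to the entry
        amount: float        # dollar amount
        sec_code: str        # e.g., "PPD" (consumer) or "CCD" (corporate)
        timestamp: datetime

    @dataclass
    class Account:
        account_type: str    # "consumer" or "business"
        history: list = field(default_factory=list)  # prior Transactions

    def review_reasons(account: Account, txn: Transaction,
                       large_multiple: float = 5.0,
                       window: timedelta = timedelta(days=1),
                       velocity_limit: int = 10) -> list:
        """Return the reasons, if any, this entry deserves employee review."""
        reasons = []

        # 1. Large transaction: amount far above this account's history.
        if account.history:
            avg = mean(t.amount for t in account.history)
            if txn.amount > large_multiple * avg:
                reasons.append(f"amount ${txn.amount:,.2f} exceeds "
                               f"{large_multiple}x the account average")

        # 2. Account type mismatch: corporate entry to a consumer account.
        if account.account_type == "consumer" and txn.sec_code == "CCD":
            reasons.append("corporate (CCD) entry posting to a consumer account")

        # 3. Velocity: too many entries inside the monitoring window.
        recent = [t for t in account.history
                  if txn.timestamp - t.timestamp <= window]
        if len(recent) >= velocity_limit:
            reasons.append(f"{len(recent)} entries in the last {window}")

        # 4. New relationship: first entry involving this counterparty.
        if all(t.counterparty != txn.counterparty for t in account.history):
            reasons.append("first transaction with this counterparty")

        return reasons

An ODFI would run such checks against entries its Originators send; an RDFI, against entries its Receivers take in. In practice, a trained model would replace or weight these hand-set thresholds, but the flagged entry still routes to an employee for review, as described above.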

AI is here to stay

AI is now part of our lives. It is up to us to recognize how fraudsters are using this technology and to harness its power for good. We can expect better deepfakes and more precise victim targeting. It is up to us as financial institutions to educate our employees and customers on how to identify common schemes and use effective controls to protect ourselves. It’s also up to us to employ technology in new ways to detect and prevent credit push fraud and to aid in the recovery of funds for victims.