
Banks Must Guard Against Scammers Using AI

  • Update Time: Saturday, July 6, 2024

—Nironjan Roy—

Fraudsters of diverse shades are using Artificial Intelligence (AI) to serve their narrow personal or group interests. Scammers have already used ChatGPT and other AI tools to create imitations of targets' voices and identities.

In recent years, criminals have used AI-based software to impersonate banks' senior executives and push fraudulent wire transfers through. Scammers are using AI so skilfully that even people on high alert cannot keep themselves from falling victim to their schemes.

According to a recent media report, fraud losses through bank transfers and payments increased from USD 2 billion in 2019 to USD 18 billion in 2023, while investment-related fraud grew from USD 1.6 billion in 2019 to USD 48 billion in 2024. Losses originating on social media, including WhatsApp and other apps, are also quite high.

Another alarming report shows that fraudsters using AI have been stealing more money from people of all ages. According to that report, people reported losing USD 9 billion to scams in 2022 and USD 10 billion in 2023. Such a drastic increase in fraud losses across every channel appears to be the result of scammers' use of AI.

Banks and fraud-prevention officials have become highly concerned about the enormous rise in fraud losses and are trying hard to find ways to reverse this trend. They are particularly worried that scammers using AI can avoid the malicious links, poor wording and grammatical errors that are common indicators of a fraud attempt. Criminals are now using fake IDs and other credentials, built with AI-generated photographs and graphics, to open bank accounts.

Previously, scammers either guessed passwords and user IDs or stole information about targeted customers. AI has made the scammers' job easier; their approaches to financial crime are now more sophisticated than the tools many banks apply to prevent it.

They can quickly cross-reference stolen data and test reused passwords across platforms. Criminals can even use AI systems to write code that automates various stages of an attack. If criminals collect a target's email address and a commonly used password from data-breach dumps, AI tools can quickly check whether the same credentials work on bank accounts, social media, online payment services and so on, and design the scam accordingly.
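To illustrate why a reused, breached password is so dangerous, the short defensive sketch below (in Python, offered purely as an illustration, not as any scammer's or bank's actual tooling) checks a password against the public Have I Been Pwned range API; only the first five characters of the password's SHA-1 hash ever leave the machine.

    import hashlib
    import urllib.request

    def password_breach_count(password: str) -> int:
        """Return how many times a password appears in known data breaches."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # k-anonymity: only the 5-character hash prefix is sent to the service.
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("utf-8")
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        # Any count above zero means the password is exactly what
        # credential-stuffing tools try first; it should be changed.
        print(password_breach_count("password123"))

A password that appears even once in such a list should never be reused for online banking.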

In the opinion of cybersecurity experts, scammers use AI to gather details about a target from social media, other communication channels and even data breaches. AI enables criminals to adjust their strategy in real time, generating personalised messages that make the approach appear trustworthy and persuade targets to send money or provide sensitive information, which the fraudsters then use in further fraud attempts.

In a typical fake job scam, the scammers first select a person and begin social engineering. If the approach works, they send the target an offer letter together with a cheque meant to cover the cost of downloading some job-related applications onto a computer. The target is also told that the cheque may take a few days to clear, but that the download requires instant payment because the application is purchased from a third-party vendor. Since the target supposedly needs to start work immediately, he or she is advised to pay right away rather than wait for the cheque to clear.

Many banks have already started using large language models to validate payments, which can help fight fraud. In addition, some banks have taken measures to educate their customers: customers must not share personal information and must not make any payment unless they are sure about the receiving person or entity.
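As a simplified, hypothetical illustration of the kind of signals such automated payment validation might combine (the names, keywords and thresholds below are invented, not any bank's actual system), a single outgoing payment can be given a crude risk score before it is released:

    from dataclasses import dataclass

    # Pressure language is a hallmark of scam instructions.
    URGENCY_WORDS = {"urgent", "immediately", "gift card", "wire now"}

    @dataclass
    class Payment:
        payee: str
        amount: float
        memo: str
        payee_is_new: bool

    def risk_score(p: Payment, typical_amount: float) -> int:
        """Return a crude 0-3 risk score for one outgoing payment."""
        score = 0
        if p.payee_is_new:
            score += 1                      # first payment to this payee
        if p.amount > 3 * typical_amount:
            score += 1                      # far above the customer's norm
        if any(w in p.memo.lower() for w in URGENCY_WORDS):
            score += 1                      # urgency wording in the memo
        return score

    payment = Payment("New Vendor Ltd", 5000.0, "URGENT - pay immediately", True)
    if risk_score(payment, typical_amount=400.0) >= 2:
        print("Hold payment for manual review")

Real systems layer model-based scoring on top of rules like these, but the principle of holding unusual payments for review is the same.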

In particular, extra care must be exercised when making payments through debit cards, e-transfers, various payment apps and wire transfers. A credit card may be the preferred payment method because it typically comes with fraud protection. In our country this fraud-protection option is not yet in practice, so Bangladesh Bank should require all credit-card-issuing entities to include the feature, which would not only protect customers but also make local practice consistent with internationally accepted standards.

In the wake of rising AI-driven financial fraud, banks are now considering additional controls, including ways to detect AI itself, so that customer money can be protected. Banks now monitor how customers enter their credentials and even whether they tend to use their left or right hand when swiping a debit or credit card.

As additional measures, some banks have started monitoring whether a password is copied and pasted, whether a voice verification sounds too perfect, or whether text is too evenly spaced or too grammatically correct. If the log-in behaviour does not match the customer's usual pattern, or whenever these indicators are present, the account is immediately red-flagged and additional verification is required.
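A minimal, hypothetical sketch of this kind of behavioural check (the fields, profile values and thresholds below are invented for illustration only) scores a single login event against the customer's stored profile and flags it when too many signals deviate:

    from dataclasses import dataclass

    @dataclass
    class LoginEvent:
        typing_seconds: float      # time spent typing the password
        password_pasted: bool      # paste detected in the password field
        hour_of_day: int           # local hour of the login attempt

    @dataclass
    class CustomerProfile:
        avg_typing_seconds: float
        usual_hours: set

    def is_suspicious(event: LoginEvent, profile: CustomerProfile) -> bool:
        """Flag a login that deviates from the customer's usual behaviour."""
        signals = 0
        if event.password_pasted:
            signals += 1                                   # customers usually type it
        if event.typing_seconds < 0.5 * profile.avg_typing_seconds:
            signals += 1                                   # suspiciously fast entry
        if event.hour_of_day not in profile.usual_hours:
            signals += 1                                   # unusual time of day
        return signals >= 2

    profile = CustomerProfile(avg_typing_seconds=6.0, usual_hours={8, 9, 20, 21})
    event = LoginEvent(typing_seconds=1.2, password_pasted=True, hour_of_day=3)
    print(is_suspicious(event, profile))   # True -> require extra verification

A flagged login does not block the customer outright; it simply triggers the additional verification described above.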

It is now a reality that almost as soon as AI made its debut in the technological world, the technology began to be abused to commit financial crime. Since banks and customers in the developed world have already fallen victim, losing billions of dollars, the same is quite likely to happen in our country too. We should keep in mind that criminals are everywhere. The banking sector must therefore exercise extra caution and put the highest level of controls in place.

_____________________________________

The writer is a certified anti-money laundering specialist and banker, Toronto, Canada. Email: [email protected]

 
