James Mirfin, global head of risk and identity solutions at Visa, told CNBC that the payment giant is using AI and machine learning to combat fraud.
Between October 2022 and September 2023, the business stopped $40 billion worth of fraudulent activity—nearly twice as much as it did the previous year.
According to Visa’s Mirfin, fraudsters use artificial intelligence (AI) to generate primary account numbers (PANs) and test them continuously. The PAN is the card number that appears on payment cards; it is typically 16 digits long, though it can be up to 19 digits.
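A PAN is not an arbitrary digit string: under the card numbering standard, its final digit is a checksum computed with the Luhn algorithm, which is one reason fraudsters generate candidate numbers programmatically rather than guessing at random. A minimal sketch of Luhn validation (the numbers below are standard dummy test values, not real cards):

```python
def luhn_valid(pan: str) -> bool:
    """Check a PAN's Luhn checksum (the ISO/IEC 7812 check digit)."""
    total = 0
    # Walk the digits right to left, doubling every second one.
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits of the product
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True: a well-known dummy test PAN
print(luhn_valid("4111111111111112"))  # False: last digit corrupted
```

Passing the checksum only means a number is well-formed, not that it belongs to a live account, which is why attackers still have to test candidates against live authorisation systems.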
Cybercriminals use AI bots to repeatedly attempt online transactions, cycling through combinations of primary account numbers, card verification values (CVVs), and expiration dates until they receive an approval response.
According to Visa, this technique, known as an enumeration attack, causes $1.1 billion in fraud losses annually, a sizeable portion of all fraud-related losses worldwide.
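The attack pattern above can be made concrete with a toy defensive heuristic: a single source cycling through many distinct card combinations in a short window looks nothing like a legitimate shopper. The window, threshold, and function names below are illustrative assumptions for this sketch, not Visa's actual detection logic:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60       # illustrative window, not a real Visa parameter
MAX_DISTINCT_COMBOS = 5   # illustrative threshold

# source identifier -> deque of (timestamp, (pan, cvv, expiry)) attempts
_attempts = defaultdict(deque)

def looks_like_enumeration(source, pan, cvv, expiry, now=None):
    """Flag a source that tests too many distinct card combos per window."""
    now = time.time() if now is None else now
    window = _attempts[source]
    window.append((now, (pan, cvv, expiry)))
    # Evict attempts that have fallen out of the sliding window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    distinct = {combo for _, combo in window}
    return len(distinct) > MAX_DISTINCT_COMBOS
```

A real system would combine many such velocity signals with model-based scoring; this single counter only illustrates why enumeration traffic is detectable at all.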
“We evaluate more than 500 distinct attributes around [each] transaction and generate a risk score; an AI model carries out that task. We process about 300 billion transactions a year,” Mirfin said.
For card-not-present transactions, where a purchase is made remotely without the physical card being used at a reader or terminal, each transaction is given a real-time risk score that aids in the detection and prevention of enumeration attacks.
“Every single one of those [transactions] has been processed by AI. It’s looking at a range of different attributes and we’re evaluating every single transaction,” Mirfin said.
If a novel type of fraud emerges, Mirfin said, the model will identify it, block it, and flag such transactions as high risk, giving Visa’s clients the option to decline authorisation.
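The score-then-decide flow described above can be sketched as a tiny pipeline: derive signals from a transaction, combine them into a risk score, and let the authorising client apply a threshold. Everything below (signal names, weights, threshold) is invented for illustration; Visa's actual model evaluates more than 500 attributes.

```python
# Toy weights standing in for a trained model; all values are illustrative.
RISK_WEIGHTS = {
    "card_not_present": 0.25,
    "new_merchant_for_card": 0.125,
    "unusual_amount": 0.25,
    "rapid_repeat_attempts": 0.5,
}
DECLINE_THRESHOLD = 0.6  # illustrative cut-off chosen by the client

def risk_score(txn):
    """Sum the weights of the risk signals this transaction triggers."""
    return min(1.0, sum(w for k, w in RISK_WEIGHTS.items() if txn.get(k)))

def authorise(txn):
    """Client-side decision: decline anything scored above the threshold."""
    return risk_score(txn) <= DECLINE_THRESHOLD

suspicious = {"card_not_present": True, "rapid_repeat_attempts": True}
print(risk_score(suspicious))   # 0.75
print(authorise(suspicious))    # False: score exceeds the threshold
```

A production model would learn these weights from labelled transaction history rather than hand-assigning them; the point here is only the shape of the pipeline: score in real time, decide at authorisation.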
To combat scammers who use social engineering and other tactics to obtain tokens unlawfully and carry out fraudulent transactions, Visa also uses AI to assess the fraud risk of token provisioning requests.
The company has spent $10 billion over the past five years on technology that improves network security and lowers fraud.
According to Mirfin, cybercriminals are using deepfakes, voice cloning, and other cutting-edge technologies like generative AI to trick consumers.
He said that AI was being used in “pig butchering, romance scams, and investment scams.”
Pig butchering is a fraud technique where scammers establish trusting connections with their victims in order to dupe them into investing or trading cryptocurrencies on phoney platforms.
Criminals today are not simply picking up a phone and making calls, Mirfin added; whether it is social engineering, deepfakes, or voice cloning, they are using artificial intelligence in some form.
With the use of generative AI technologies like ChatGPT, con artists may create more convincing phishing messages to trick victims.
According to U.S.-based identity and access management company Okta, cybercriminals can use generative AI to clone a voice in less than three seconds. This can be used to trick family members into believing a loved one is in trouble or trick bank employees into transferring money out of a victim’s account.
According to Okta, generative AI technologies have also been used to fabricate deepfakes of celebrities in order to trick admirers.
In the company’s biannual threats report, Paul Fabara, chief risk and client services officer at Visa, said that with the use of generative AI and other emerging technologies, “scams are more convincing than ever, leading to unprecedented losses for consumers.”
According to a report from Deloitte’s Center for Financial Services, cybercriminals using generative AI can commit fraud far more cheaply, targeting numerous victims at once with the same or fewer resources.
“Incidents like this will likely proliferate in the years ahead as bad actors find and deploy increasingly sophisticated, yet affordable, generative AI to defraud banks and their customers,” the report said, estimating that generative AI could increase fraud losses in the U.S. from $12.3 billion in 2023 to $40 billion by 2027.
Earlier this year, an employee of a Hong Kong-based company transferred $25 million to a fraudster who had used a deepfake to impersonate the firm’s chief financial officer and order the transfer.
A similar incident occurred in Shanxi province this year, according to Chinese official media, where a fraudster used a deepfake of her supervisor in a video conversation to trick an employee into sending 1.86 million yuan ($262,000).
(Adapted from TradingViews.com)