A March 21, 2024 article entitled “AI Is Making Payment Fraud Better, Faster and Easier” reported that “Artificial intelligence technologies such as generative AI are not helping fraudsters create new and innovative types of scams. They are doing just fine relying on the traditional scams, but the advent of AI is helping them scale up attacks and snare more victims, according to fraud researchers at Visa.” The article included these comments from Paul Fabara (Chief Risk and Client Services Officer at Visa):

Organized threat actors continue to target the most vulnerable point in the payments ecosystem – humans. And they’re using AI to make their scams more convincing, leading to “unprecedented losses” for victims.

The article also included these comments:

Fraudsters can use AI to automate the process of identifying vulnerabilities in a system to make it easier for threat actors to launch targeted attacks, carry out large-scale social engineering attacks and generate convincing phishing emails on a massive scale by analyzing and mimicking human behavior. Generative AI tools also can generate realistic speech capable of mimicking human emotions and logic, which threat actors can exploit to impersonate financial institutions and obtain one-time passwords or execute phishing campaigns to steal payment account credentials.

AI deepfakes are a growing concern. Criminals recently used a deepfake video to impersonate company executives and trick an employee into transferring $25.6 million to several accounts held by the group. Researchers say that, using AI, hackers need just a three-second audio sample and 10 minutes to clone a voice. A month after that research became public, a Vice reporter demonstrated how an unauthorized person used a cloned voice to access a consumer’s bank account.

What can we do to help things get better, and not worse?

First published at