Deepfake fraud attempts increased 31-fold in 2023, a 3,000% rise year-on-year.
That’s according to a new report by Onfido, an ID verification unicorn based in London. The company attributes the surge to the growing availability of cheap and simple online tools and generative AI.
Face-swapping apps are the most common example. The most basic versions crudely paste one face on top of another to create a “cheapfake.” More sophisticated systems use AI to morph and blend a source face onto a target, but these require greater resources and skills.
The simpler software, meanwhile, is easy to run and cheap, or even free. Fraudsters can then deploy an array of forgeries simultaneously across multiple attacks.
These cheapfakes aim to penetrate facial verification systems, conduct fraudulent transactions, or access sensitive business information. They may be crude, but only one needs to succeed.
By emphasising quantity over quality, the fraudsters chase the maximum reward for the minimum effort.
Research suggests that this is their preferred approach. Onfido found that “easy,” less sophisticated fraud accounted for 80.3% of all attacks in 2023, up 7.4% on the previous year.
Despite the rise of deepfake fraud, Onfido insists that biometric verification is an effective deterrent. As evidence, the company points to its latest research, which found that biometric checks attracted only a third as many fraudulent attempts as document checks.
The criminals, however, are becoming more creative at attacking these defences. As GenAI tools become more common, malicious actors are increasingly producing fake documents, spoofing biometric defences, and hijacking camera signals.
“Fraudsters are pioneers, always seeking opportunities and continually evolving their tactics,” Vincent Guillevic, the head of Onfido’s fraud lab, told TNW.
To stop them, Onfido recommends “liveness” biometric verification tech. These systems verify the user by determining that they’re genuinely present at that moment — rather than a deepfake, photo, recording, or a masked person.
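As a rough illustration, a naive presence check could require that a detected face actually moves between frames, since a printed photo held up to the camera produces almost no pixel change. The sketch below assumes opencv-python and a webcam at index 0; the motion threshold and helper names are illustrative, not Onfido's method.

```python
# Minimal sketch of a naive "liveness" presence check, assuming
# opencv-python and a webcam at index 0. The motion threshold and
# helper names are illustrative only, not Onfido's actual method.
import cv2
import numpy as np

# Haar cascade face detector bundled with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def capture_gray_frames(count=30):
    """Grab a short burst of grayscale frames from the default camera."""
    cap = cv2.VideoCapture(0)
    frames = []
    while len(frames) < count:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return frames

def naive_liveness_check(frames, motion_threshold=2.0):
    """Pass only if a face is present and shows frame-to-frame motion.

    A static printed photo tends to produce near-zero pixel change
    inside the face region, so low motion is treated as a failure.
    """
    if len(frames) < 2:
        return False
    faces = detector.detectMultiScale(frames[0], scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    diffs = [
        cv2.absdiff(a[y:y + h, x:x + w], b[y:y + h, x:x + w]).mean()
        for a, b in zip(frames, frames[1:])
    ]
    return float(np.mean(diffs)) > motion_threshold

if __name__ == "__main__":
    print("live" if naive_liveness_check(capture_gray_frames()) else "suspect")
```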
At present, fraudsters typically attempt to spoof liveness checks with a very basic method: submitting a video of a video displayed on a screen. This approach accounts for over 80% of such attacks.
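Detecting such replays often leans on the artefacts that screens introduce, such as moiré patterns and pixel grids, which concentrate energy at high spatial frequencies. The toy heuristic below assumes NumPy and uses an arbitrary cutoff and threshold; it is a sketch of the general idea, not anything from Onfido's report.

```python
# Toy heuristic for flagging possible screen-replay frames: footage of a
# screen often carries periodic moire/pixel-grid artefacts that show up
# as unusually high energy at high spatial frequencies. The cutoff and
# threshold below are arbitrary, illustrative values.
import numpy as np

def high_frequency_ratio(gray_frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency square of
    half-width `cutoff` (as a fraction of each image dimension)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_frame.astype(float))))
    h, w = spectrum.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    low = spectrum[h // 2 - ch: h // 2 + ch, w // 2 - cw: w // 2 + cw].sum()
    return float((total - low) / total)

def looks_like_screen_replay(gray_frame: np.ndarray, threshold: float = 0.35) -> bool:
    """Flag frames whose high-frequency energy share is suspiciously large."""
    return high_frequency_ratio(gray_frame) > threshold
```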
In the future, however, the technology will give fraudsters far more sophisticated options.
“The developments we’re likely to see with deepfakes and quantum computing will make fakes indistinguishable to the human eye,” Guillevic said.
In response, Guillevic expects businesses to adopt more automated solutions. He also sees a crucial role for non-visual signals that work in the background, such as device intelligence, geolocation, and repeat-fraud detection.
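As an illustration only, such background signals could be folded into a simple weighted risk score. The signal names, weights, and threshold below are hypothetical assumptions, not Onfido's scoring model.

```python
# Hypothetical sketch of combining non-visual fraud signals into a
# single risk score. Signal names, weights, and the threshold are
# illustrative assumptions, not Onfido's scoring model.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    device_is_emulator: bool      # device intelligence
    geolocation_mismatch: bool    # IP country vs. document country
    camera_feed_injected: bool    # virtual/hijacked camera detected
    prior_fraud_attempts: int     # repeat-fraud history for this device/ID

def risk_score(s: SessionSignals) -> float:
    """Weighted sum of background signals; higher means riskier."""
    score = 0.0
    score += 0.3 if s.device_is_emulator else 0.0
    score += 0.2 if s.geolocation_mismatch else 0.0
    score += 0.4 if s.camera_feed_injected else 0.0
    score += min(0.3, 0.1 * s.prior_fraud_attempts)
    return score

def should_escalate(s: SessionSignals, threshold: float = 0.5) -> bool:
    """Route the session to manual review or step-up verification."""
    return risk_score(s) >= threshold

# Example: a geolocation mismatch plus an injected camera feed scores
# 0.2 + 0.4 = 0.6, above the 0.5 threshold, so the session is escalated.
print(should_escalate(SessionSignals(False, True, True, 0)))  # True
```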
Undoubtedly, the fraudsters will develop counterattacks. Both sides will have to upgrade their weapons on the AI versus AI battleground.