Generative AI – fraud friend or foe?

05 April 2023

Artificial intelligence plays a large role in fraud detection in various ways. Account detection makes it easier to determine the legitimacy of a user account and streamlines the verification process. Device analytics collect data to track and, ultimately, enhance the performance of devices, while network analytics monitor network traffic and data – identifying and preventing security threats while simultaneously optimising network performance.

With ransomware booming and business-crippling cyber-attacks hitting the headlines daily, the automation that AI brings will be a vital tool for those trying to fight this type of cybercrime, as well as other frauds. Unfortunately, it is also invaluable for the criminals enacting these crimes. This means that if the fraud fighters do not keep pace, this type of crime is sure to rise drastically in the coming years.

How AI equips the bad actor

Many organisations are training their staff, their front line of defence, to be on high alert for suspicious emails. These days, most employees know to be wary of communications addressed to ‘dear sirs’, with obvious spelling and grammatical errors and hyperlinks to questionable sites. With the vast majority of malware being delivered by email and it also being the easiest route in for social engineering, it makes sense to educate employees in this way. However, since the pandemic, suspicious emails directed at individuals, known as spear phishing, have become far less obviously suspect and far more targeted, tailored and frequent.

The cyber and fraud landscape is changing at a rapid pace, and criminals leveraging the power of offensive AI add a further layer of complexity: tools that create emails indistinguishable from genuine communications, along with deepfakes, are becoming more widely available.

Deepfake technology can alter or clone voices in real time, resulting in the artificial simulation of a person’s voice. So, the ‘chief financial officer’ can request an urgent payment from a finance team member over the phone, or gather valuable intelligence, all without arousing suspicion. Similarly, banking verification measures using voice authentication can be duped using such software. Deepfakes are not restricted to voice spoofing; they can be sophisticated enough to use video technology, helping to make impersonation attempts appear as credible as possible.

AI is also scaling up one of the biggest threats to businesses: ransomware. Historically, a targeted and tailored attack was human-driven and could not be carried out at scale. AI allows large parts of the process to be automated – systems monitored, code changed and new domains registered – all without time-consuming human intervention.

The availability of malicious technology to perform attacks is not restricted to cybercrime ‘specialists’. The likes of ransomware-as-a-service and vishing-as-a-service are business models that provide paid-for access to targeted ransomware or vishing tools.

Businesses are right to be bolstering their defences against all of the above, as without increasingly rigorous protection and prevention, there is a far higher chance of an organisation hitting headlines following a ransomware attack or mandate fraud.

AI fraud defensive

Cybersecurity teams are increasingly relying on defensive AI, where machine learning identifies what is ‘normal’ for a business 24/7. Where abnormalities are detected, this is a red flag for potential malicious activity. The technology is used to identify abnormalities rapidly and autonomously at the earliest stage of an attack, when the position is most easily salvageable. Bad actors will often compromise a network and then wait for whatever opportunity they want to seize. It is at this point of compromise that AI defences come into their own, protecting the security of data and assets. Human defences alone are insufficient to mitigate the risks of the types of attacks discussed.
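The core idea behind this kind of baselining can be illustrated with a deliberately minimal sketch. This is not any vendor’s product or a production detector – real defensive AI systems model many signals at once – but it shows the principle of learning a ‘normal’ range from historical activity (here, hypothetical hourly login counts) and flagging sharp deviations:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn what 'normal' looks like from historical activity counts."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Red-flag activity that deviates sharply from the learned baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hourly login counts observed over a typical period (hypothetical data)
history = [42, 39, 45, 41, 38, 44, 40, 43, 37, 46, 41, 39, 44, 42]
baseline = build_baseline(history)

print(is_anomalous(43, baseline))   # within the normal range
print(is_anomalous(400, baseline))  # a sudden spike – a red flag
```

In practice the ‘baseline’ is a far richer statistical model, continuously retrained, but the logic is the same: the system does not need to recognise a specific attack, only that behaviour has departed from the business’s own normal.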

AI on side

For businesses, the rise in the AI fraud offensive means upping the prevention game by, as the criminals have, investing in next-generation AI technology to fight the increased risk of fraud. Key actions for businesses are:

  • hire skilled AI cybersecurity professionals;
  • implement AI intelligence operations;
  • invest significantly in IT audits;
  • provide and enforce diligent fraud and cybersecurity training;
  • enhance cyber-hygiene and condense sprawling legacy systems;
  • complete penetration testing; and
  • invest in defensive AI.