Use of AI in Fraud - Preventing or Perpetrating?

Author: Nicola Sharp, 7 September 2023
5 min read

AI is widely used in the financial sector to detect fraud. For example, it is routinely used by banks to analyse credit applications. Machine learning models review vast quantities of data to detect any red flags or unusual activity with greater speed and accuracy than any manual human review.

But in its current forms, AI can only go so far; it has its limitations.

First, its accuracy is not always guaranteed, and some human quality assurance is needed to discern false signals. That’s because there are relatively few frauds relative to genuine transactions in financial data sets, so there is limited data with which to train the models.[1]

Secondly, the type of fraud that is aimed at banks and their clients varies widely, so any system that is learning to detect the signs of fraud needs to identify a vast number of different signals. It will need a range of functionality, including (i) integrated algorithms, (ii) applied behavioural analytics, (iii) the creation of models from massive datasets and (iv) adaptive analytics.[2] So it’s likely that only the biggest businesses, with sufficient resources, will be able to roll out these systems internally.[3]


How Fraudsters Are Committing Fraud Using AI

While AI has many benevolent uses, it has also been used as a tool to commit fraud. Incidents of fraud are on the rise, and artificial intelligence is one of the developments to blame.[4]

That’s due to AI’s ability to mimic individuals, which fraudsters have used to their advantage. AI can absorb personal data, such as emails, photos, videos and voice recordings, to impersonate real humans and gain access to private information.

The kinds of financial fraud that are made easier and more frequent by AI’s use of this information include:

  • Unauthorised transactions on credit cards.
  • Phishing scams.
  • Authorised push payment (APP) scams.
  • Identity theft.[5]

Phishing is a particularly prevalent problem, and it often leads to APP fraud. Criminals dupe people into disclosing sensitive data such as login credentials, account details, identity documents or credit card details. AI has allowed a level of automation in these phishing attacks, which are often used for identity fraud or identity theft, making them more frequent and more targeted.

The recent case of Philipp v Barclays Bank UK PLC [2023] UKSC 25, which went all the way to the Supreme Court, involved “a particularly egregious example” of this type of scam. Dr Philipp was contacted by a fraudster who claimed to be working for the Financial Conduct Authority in conjunction with the National Crime Agency. In a series of telephone calls, Dr and Mrs Philipp were led to believe that their money needed to be moved from their bank and investment schemes to “safe accounts”. The couple lost £700,000 to this fraud. Read our full article on the Supreme Court judgment here.

And they are not the only ones. UK Finance’s Annual Fraud Report, published in May 2023, reported that over £1.2 billion was stolen by criminals through authorised and unauthorised fraud in 2022.[6]

It is suggested that fraudsters are using AI algorithms to analyse large volumes of social media data and other online information to gather personal details about potential targets, which makes the schemes all the more believable to their victims.


Can AI Commit Civil or Criminal Fraud?


So is AI legally responsible for these frauds?

At the moment, ‘AI’ is a bit of a misnomer, as it’s not really ‘intelligent’. The programmes that are widely available do not simulate human thinking. Instead, current AI programmes are really machine learning, which learns from data without being explicitly reprogrammed, combined with forms of automation.

But as the pace of AI development increases, we may see AI gain the ability to replicate human thought and make autonomous decisions. If that development succeeds, an interesting tension arises around liability for fraudulent acts perpetrated by AI.

Most actions in civil fraud require an investigation into the perpetrator’s state of mind. They must have the requisite intent or dishonesty to be liable. For example, to establish a claim of deceit, the claimant must show that the defendant (i) made a representation, (ii) which was false, (iii) which was made dishonestly, and (iv) which was intended to be relied on and was in fact relied on, and (v) that the claimant suffered damage as a result.

An example from the criminal side of fraud is seen in the UK Fraud Act 2006. A person commits the crime of fraud by false representation if they (i) make a false representation, (ii) dishonestly, (iii) knowing that the representation is, or might be, untrue or misleading, and (iv) intend to make a gain for themselves or another, or to cause loss to another or expose another to a risk of loss.

As these examples demonstrate, the intent and knowledge of the defendant are critical to establishing their culpability.


How do you establish the state of mind of artificial intelligence (AI)?

On the one hand, it could be said to be the developer’s state of mind that is required. But as the technology develops, and makes decisions for itself, the technology’s behaviour might differ from the developer’s intent.[7]

While it’s an interesting idea to consider, in practice this issue is a moot point. As it’s not a legal person, AI is not able in law to commit crimes or torts.

But will we see the Government legislate to intervene here? Or will the courts find creative applications of the law to attribute culpability directly to AI?

In recent years we have seen the courts adapt to new technology in the cryptocurrency space. For example:

  • The courts have embraced creative ways to serve documents, through non-fungible tokens (NFTs).[8]
  • Judges have defined crypto-assets as "sufficiently permanent or stable to be treated as property".[9]
  • The court has ordered delivery-up of crypto-assets by establishing that a crypto-wallet provider was the constructive trustee of stolen funds.[10]


Will we see new creative applications of the law to find AI liable for civil frauds?

At the moment, the UK government is cautious about introducing legislation to avoid stifling innovation.[11]

But the Government’s White Paper does recognise the potential disparity between AI’s developing actions and what it was originally programmed to do: “AI systems can operate with a high level of autonomy, making decisions about how to achieve a certain goal or outcome in a way that has not been explicitly programmed or foreseen. Establishing clear, appropriate lines of ownership and accountability is essential for creating business certainty while ensuring regulatory compliance.”

When it comes to legal liability, the White Paper briefly addresses the issue, noting that under current legal frameworks, liability across the AI supply chain may not be allocated in a way that is fair and effective.

There is no definitive action plan to mitigate this risk, but the White Paper says that the Government “would consider proportionate interventions to address such issues which could otherwise undermine our pro-innovation approach to AI regulation.”

While significant changes to the law are unlikely to be imminent, the legal framework around AI will need to develop to keep pace with the changes in AI technologies. We will watch out for any developments.


Sources:

  1. S. Benson Edwin Raj and A. Annie Portia, ‘Analysis on credit card fraud detection methods’, 2011 International Conference on Computer, Communication and Electrical Technology (ICCCET), March 2011.
  2. Musaab Mohammad Alhaddad, ‘Artificial Intelligence in the Banking Industry: A Review on Fraud Detection, Credit Management, and Document Processing’.
  3. Musaab Mohammad Alhaddad, ‘Artificial Intelligence in the Banking Industry: A Review on Fraud Detection, Credit Management, and Document Processing’.
  4. ‘Increasing Incidents of Fraud in the UK’.
  5. Musaab Mohammad Alhaddad, ‘Artificial Intelligence in the Banking Industry: A Review on Fraud Detection, Credit Management, and Document Processing’.
  6. UK Finance, Annual Fraud Report, May 2023.
  7. Hal Ashton, ‘Definitions of intent suitable for algorithms’.
  8. Osbourne v Persons Unknown Category A [2023] EWHC 39 (KB).
  9. UK Jurisdiction Taskforce of the Law Society’s LawTech Delivery Panel (UKJT), ‘Legal statement on cryptoassets and smart contracts’, November 2019.
  10. Jones v Persons Unknown [2022] EWHC 2543 (Comm).
  11. Policy Paper, ‘A pro-innovation approach to AI regulation’, published March 2023 and updated August 2023.


Nicola Sharp

Partner

nicola.sharp@rahmanravelli.co.uk
+44 (0)203 910 4567


Nicola is known for her fraud, civil recovery, arbitration and business crime expertise, her experience of leading the largest financial disputes and multinational investigations, and her skills in devising preventative measures and conducting internal investigations for corporates.
