Author: Nicola Sharp
7 September 2023
5 min read
AI is widely used in the financial sector to detect fraud. For example, it is routinely used by banks to analyse credit applications. Machine learning models review vast quantities of data to detect any red flags or unusual activity with greater speed and accuracy than any manual human review.
But in its current forms, AI can only go so far: it has clear limitations.
Its accuracy is not always guaranteed, and some human quality assurance is needed to weed out false signals. That is because fraudulent transactions are rare relative to genuine ones in financial datasets, so there are relatively few positive examples with which to train the models (a problem known as class imbalance).
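To see why so few fraud examples cause trouble, consider the following minimal Python sketch. The figures are entirely hypothetical, chosen only to illustrate the point: on heavily imbalanced data, a model that flags nothing at all still scores near-perfect accuracy, which is why headline accuracy alone cannot be trusted and human review of a model's signals still matters.

```python
# Illustrative only: why raw accuracy misleads on imbalanced fraud data.
# All figures below are hypothetical, not drawn from any real dataset.
legitimate = 9_980
fraudulent = 20
total = legitimate + fraudulent

# A "model" that never flags anything gets every legitimate
# transaction right, so it still looks highly accurate, because
# fraud is such a tiny fraction of all transactions.
correct = legitimate
accuracy = correct / total
print(f"Accuracy of flagging nothing: {accuracy:.1%}")  # 99.8%

# But it catches zero frauds: recall on the fraud class is 0%.
frauds_caught = 0
recall = frauds_caught / fraudulent
print(f"Fraud recall: {recall:.0%}")  # 0%
```

In practice this is why fraud-detection models are judged on fraud-class precision and recall rather than overall accuracy, and why flagged transactions are typically escalated to human reviewers.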
Secondly, the type of fraud aimed at banks and their clients varies widely, so any system learning to detect the signs of fraud needs to identify a vast range of different signals. It will need broad functionality, including (i) integrated algorithms, (ii) applied behavioural analytics, (iii) model-building from massive datasets and (iv) adaptive analytics. It is therefore likely that only the biggest businesses with sufficient resources will be able to roll out these systems internally.
While AI has many benevolent uses, it has also become a tool for committing fraud. With incidents of fraud on the rise, artificial intelligence is one of the developments being blamed.
That’s due to AI’s ability to mimic and imitate individuals, which fraudsters have used to their advantage. AI can absorb personal data, such as emails, photos, videos, and voice recordings, to impersonate real humans and gain access to private information.
The kinds of financial fraud made easier and more frequent by AI's use of this information include phishing, a particularly prevalent problem which often leads to authorised push payment (APP) fraud. Criminals dupe people into disclosing sensitive data such as credentials, account details, identity information or credit card details. AI has brought a level of automation to these phishing attacks, which are often used for identity fraud or identity theft, making them both more frequent and more targeted.
The recent case of Philipp v Barclays Bank UK plc [2023] UKSC 25, which went all the way to the Supreme Court, concerned what was described as "a particularly egregious example" of this type of scam. Dr Philipp was contacted by a fraudster who claimed to be working for the Financial Conduct Authority in conjunction with the National Crime Agency. In a series of telephone calls, Dr and Mrs Philipp were led to believe that their money needed to be moved from their bank and investment schemes to "safe accounts". The couple lost £700,000 to this fraud. Read our full article on the Supreme Court judgment here.
And they are not the only ones. UK Finance's Annual Fraud Report, published in May 2023, reported that over £1.2 billion was stolen by criminals through authorised and unauthorised fraud in 2022.
It is suggested that fraudsters are using AI algorithms to analyse large volumes of social media data and other online information to gather personal details about potential targets, which makes the schemes all the more believable to their victims.
At the moment the term 'AI' is a bit of a misnomer, as the technology is not really 'intelligent'. The programmes that are widely available do not simulate human thinking. Instead, current AI programs are really forms of machine learning, which learns from data without being explicitly reprogrammed, and automation.
But as the pace of AI development increases, we may see the ability of AI to replicate human thought and make autonomous decisions. If that development succeeds, then there’s an interesting tension around liability for fraudulent acts perpetrated by AI.
Most actions in civil fraud require an investigation into the perpetrator's state of mind: they must have the requisite intent or dishonesty to be liable. For example, to establish a claim in deceit, the claimant must show that the defendant (i) made a representation, (ii) which was false, (iii) dishonestly made, (iv) intended to be relied on and actually relied on by the claimant, and (v) that the claimant suffered damage as a result.
An example from the criminal side of fraud is found in the UK Fraud Act 2006. A person commits fraud by false representation if they (i) make a false representation, (ii) dishonestly, (iii) knowing that the representation is, or might be, untrue or misleading, and (iv) intend to make a gain for themselves or to cause loss, or a risk of loss, to another.
As these examples demonstrate, the intent and knowledge of the defendant are critical to establishing their culpability.
One view is that it is the developer's state of mind that should be examined. But as the technology develops and begins to make decisions for itself, its behaviour might diverge from the developer's intent.
While it's an interesting idea to consider, in practice the issue is currently moot: as AI is not a legal person, it cannot in law commit crimes or torts.
But will we see the Government legislate to intervene here? Or will the courts find creative applications of the law to attribute culpability directly to AI?
In recent years we have seen the courts adapt to new technology in the cryptocurrency space.
At the moment, the UK government is cautious about introducing legislation to avoid stifling innovation.
But the Government’s White Paper does recognise the potential disparity between AI’s developing actions and what it was originally programmed to do: “AI systems can operate with a high level of autonomy, making decisions about how to achieve a certain goal or outcome in a way that has not been explicitly programmed or foreseen. Establishing clear, appropriate lines of ownership and accountability is essential for creating business certainty while ensuring regulatory compliance.”
When it comes to legal liability, the White Paper briefly addresses this issue, noting that under the current legal frameworks, liability across the supply chain may be allocated in a way that is not fair and effective.
There is no definitive action plan to mitigate this risk, but the White Paper says that the Government “would consider proportionate interventions to address such issues which could otherwise undermine our pro-innovation approach to AI regulation.”
While significant changes to the law are unlikely to be imminent, the legal framework around AI will need to develop to keep pace with the changes in AI technologies. We will watch out for any developments.
Nicola is known for her fraud, civil recovery, arbitration and business crime expertise, her experience of leading the largest financial disputes and multinational investigations and her skills in devising preventative measures and conducting internal investigations for corporates.