Editorial: The use of artificial intelligence in fighting financial crime, for better or worse?

Chris Stears (BPP University Law School London Holborn, London, UK)
Joshua Deeks (Edmund Group, London, UK)

Journal of Money Laundering Control

ISSN: 1368-5201

Article publication date: 18 April 2023

Issue publication date: 18 April 2023


Citation

Stears, C. and Deeks, J. (2023), "Editorial: The use of artificial intelligence in fighting financial crime, for better or worse?", Journal of Money Laundering Control, Vol. 26 No. 3, pp. 433-435. https://doi.org/10.1108/JMLC-05-2023-174

Publisher: Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited


The general use of artificial intelligence is on the rise

The general adoption of artificial intelligence (“AI”) has been underway for far longer than most would assume. In 2023, AI is often benchmarked against sophisticated tools such as ChatGPT, capable of taking a simple user input and producing a complex, well-informed response, even designing entire websites and generating full code scripts. But AI has long been present in more basic forms, such as Amazon’s Alexa and Apple’s Siri. Indeed, it is estimated that 97% of mobile users already use AI-powered voice search and voice action tools [1]. It is not hard to understand why AI has risen so dramatically in prominence in the past six months, given that OpenAI’s ChatGPT, an advanced autoregressive large language model, crossed 100 million users in January 2023 and now sees 13 million individual users daily [2]. But the jury remains divided on how advances in the capabilities of AI could change the landscape of professional and financial services.

AI in the financial services sector

The financial services sector has seen a remarkable transformation over the past decade, and firms have been investing in AI throughout their technology stacks with differing levels of intensity and in different parts of their operations.

For example, JP Morgan Chase deploys AI in fraud detection and prevention and has developed a chatbot known as COiN to serve its customers faster and more efficiently. And it is not just the banks: the vendors that serve them have also done a good job so far of integrating AI into their own products. A good example is Palantir, which deploys AI throughout its ontology core, most notably in its “dynamic layer”, in which a client can create data models, run advanced simulations against them and test automated AI-based decisions.

As AI use cases increase, so too has regulatory interest. Notably, in the autumn of last year, the UK regulators published a joint Discussion Paper [3] exploring whether (and if so, how) the potential benefits of AI, as well as the novel challenges and risks it poses, can be managed within the existing regulatory framework, or whether a new approach is in fact needed. It is worth briefly trailing some of the observations that might be made on AI in this regulatory context, specifically through a financial crime lens.

The case for AI in fighting financial crime

When it comes to identifying potential money laundering activity, AI can prove invaluable due to its ability to process vast amounts of data quickly and accurately. AI-driven analytics can provide firms with deep insights into customer behaviour that would not be possible with traditional methods (or would at least be very time-consuming and complex by comparison). AI-based systems can be trained to detect anomalous behaviour simultaneously at the portfolio, segment and individual levels and alert the financial institution, allowing it to act before any money is laundered.
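By way of illustration only, the sketch below shows what such anomaly detection might look like in its simplest form, using scikit-learn's IsolationForest on a handful of invented transaction features. The feature names, values and contamination rate are assumptions made for the example and are not drawn from any particular firm's monitoring system.

```python
# Illustrative sketch only: flag anomalous transactions with an unsupervised model.
# Feature names, values and the contamination rate are invented for this example.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction-level features for a single customer segment.
transactions = pd.DataFrame({
    "amount": [120.0, 95.5, 80.0, 15000.0, 110.0, 135.0],
    "hour_of_day": [10, 14, 9, 3, 11, 16],
    "days_since_last_txn": [1, 2, 1, 0, 3, 1],
})

# Fit an isolation forest; 'contamination' encodes the expected anomaly rate.
model = IsolationForest(contamination=0.1, random_state=42)
transactions["anomaly"] = model.fit_predict(transactions)  # -1 = anomalous

# Route flagged rows to an analyst queue rather than blocking automatically.
alerts = transactions[transactions["anomaly"] == -1]
print(alerts)
```

In practice, a firm would run comparable models over portfolio- and segment-level aggregates as well, with flagged activity routed to investigators rather than acted on automatically.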

Furthermore, AI has been found to be more effective at detecting fraudulent activities than traditional approaches such as manual audits or human oversight [4]. AI algorithms are able to identify patterns that humans might otherwise miss. These algorithms, when deployed well, can have a much lower error rate than some fraud monitoring rulesets, which at some institutions have been found to be arbitrary and prone to very high false positive rates.
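To make the false positive comparison concrete, the sketch below contrasts a fixed-threshold rule with a simple trained classifier on synthetic labelled data. The features, thresholds and the rule used to generate the toy "fraud" labels are all invented for illustration; no claim is made that any particular institution's ruleset or model behaves this way.

```python
# Illustrative comparison of a fixed-threshold rule against a trained model,
# measured by false positive rate (FPR) on synthetic labelled data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
amount = rng.lognormal(mean=6, sigma=1.2, size=n)            # transaction value
velocity = rng.poisson(lam=3, size=n)                         # txns in last 24h
is_fraud = ((amount > 2000) & (velocity >= 5)).astype(int)    # toy ground truth

X = np.column_stack([amount, velocity])
X_train, X_test, y_train, y_test = train_test_split(X, is_fraud, random_state=0)

# Naive rule: alert on any transaction above a fixed amount.
rule_alerts = X_test[:, 0] > 1500
rule_fpr = (rule_alerts & (y_test == 0)).sum() / (y_test == 0).sum()

# Trained classifier using the same features.
clf = GradientBoostingClassifier().fit(X_train, y_train)
model_alerts = clf.predict(X_test).astype(bool)
model_fpr = (model_alerts & (y_test == 0)).sum() / (y_test == 0).sum()

print(f"rule FPR:  {rule_fpr:.3f}")
print(f"model FPR: {model_fpr:.3f}")
```

Because the classifier learns the joint pattern (value and velocity together) rather than alerting on a single threshold, it generates far fewer alerts on legitimate transactions in this toy setting.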

It would be foolish, though, to think that the use of AI has incontrovertibly proven itself valuable and accessible to all financial firms in the detection and prevention of money laundering; the use of AI, and even the decision to embark on its use, comes with material risk. There is a risk that, fundamentally, the data used to create the AI models is of poor quality, or that there is insufficient data to train the model. As the FCA’s Jessica Rusu noted in a speech on building better foundations in AI [5], referring to the regulators’ recent survey results [6], “[…] a lack of AI explainability was considered the key risk for firms”. Respondents to the survey, concerned about this risk, “[…] highlighted the need for clear metrics to assess what constitutes a successful outcome for a [machine learning] application” and suggested “setting a framework and baseline metrics to measure the impact of [machine learning] applications” [7].
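One way a firm might begin to respond to these data-quality and measurement concerns is to record simple baseline checks before any model is trained. The sketch below is illustrative only: the column names, thresholds and the notion of a "confirmed_suspicious" label are assumptions made for the example rather than any regulatory standard.

```python
# Illustrative pre-training checks addressing two of the risks noted above:
# insufficient training volume and poorly measured outcomes.
# Thresholds and column names are invented for this example.
import pandas as pd

MIN_ROWS = 50_000            # assumed minimum volume for a stable model
MAX_MISSING_FRACTION = 0.05  # assumed tolerance for missing values
MIN_POSITIVE_RATE = 0.001    # require some confirmed-suspicious examples

def training_data_checks(df: pd.DataFrame, label_col: str = "confirmed_suspicious") -> dict:
    """Return simple baseline metrics a firm might record before training."""
    max_missing = df.isna().mean().max()
    positive_rate = df[label_col].mean()
    return {
        "rows": len(df),
        "enough_rows": len(df) >= MIN_ROWS,
        "max_missing_fraction": max_missing,
        "acceptable_missingness": max_missing <= MAX_MISSING_FRACTION,
        "positive_rate": positive_rate,
        "enough_positives": positive_rate >= MIN_POSITIVE_RATE,
    }

# Tiny synthetic sample purely to show the output shape; in practice the checks
# would run over the firm's historical alert data and be logged with the model version.
sample = pd.DataFrame({
    "amount": [100.0, 250.0, None, 9000.0],
    "confirmed_suspicious": [0, 0, 0, 1],
})
print(training_data_checks(sample))
```

Recording such baselines alongside each model release is one modest way of giving auditors and regulators the "clear metrics" the survey respondents called for.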

Furthermore, the algorithm could be inherently biased or produce false positives that are hard to dissect retroactively. This also raises an important question around auditability: how might the firm be expected to explain to auditors and regulators exactly how and why its AI model is producing the results that it is? In a space particularly in need of strong governance, this is a significant matter for consideration.

It is notable that, throughout the regulators’ recent publications in this area, there is a clear emphasis on looking to the Senior Managers and Certification Regime (SM&CR) to respond to (some of) the governance, risk and, ultimately, accountability concerns arising in connection with the adoption of AI. And the stakes are particularly high in the context of financial crime prevention; not least given the inherent risks to the firm, its consumers and the market(s) in which it operates, but also given the various offences on the statute books capable (on paper at least) of holding individuals to account for financial crime control failings. Still, the FCA holds the SM&CR up as “the right framework to respond quickly to innovations, including AI”, noting that “it creates an incentive to collect data to better measure the impact of the technology” [8]. Indeed, the regulator is presently consulting on whether to introduce a Prescribed Responsibility for AI to the Senior Managers Regime and is also, thankfully, inviting comment on the need for further and better regulatory guidance within, for example, DEPP 6.2, on what may constitute reasonable steps in the AI context and how, if at all, these steps differ from the reasonable steps that Senior Management Function holders are generally required to take.

It will be particularly interesting to follow developments in this area as the SM&CR matures as an individual accountability regime and, moreover, as AI-driven financial crime control solutions are increasingly deployed within and by financial services firms. Indeed, with the regulators’ AI consultation having closed on 10 February, there will be an opportunity to return to this subject in short order, to examine this specific intersection between technological innovation and the regulatory framework on accountability when (not if) the AI gets it wrong.

Finally, it is important to recognise that AI also poses ethical concerns that may even run counter to perceived product and service benefits or improvements to operational efficiency. AI systems have been criticised for their potential to infringe on people’s privacy and for the potential for such tools to be misused or abused. Thus, firms must be vigilant in their use of AI and ensure they are following all necessary regulations when using such technology. And it should be noted that the same AI capabilities that give firms strength can also give the criminals who seek to attack them more complex and varied ways to infiltrate firms and undermine integrity. AI, therefore, can also be seen as a proverbial “wolf at the gate”: playing the role of both “wolf” and “gate”, simultaneously.

Notes

3. FCA and PRA, ‘Artificial Intelligence and Machine Learning: Discussion Paper’, DP22/6 (11 October 2022).

4. See, for example, the results of the FCA and Bank of England’s second survey on the state of machine learning in UK financial services, which reported on a number of AI in anti-fraud and AML use cases: FCA and BoE, ‘Machine learning in UK financial services’ (11 October 2022), at 5.3.

5. Rusu, J., FCA Chief Data, Information and Intelligence Officer, ‘Building better foundations in AI’, speech at The Alan Turing Institute’s Framework for Responsible Adoption of Artificial Intelligence in the Financial Services Industry (FAIR) event (24 January 2023).

6. See fn 4, at 4.3.

7. Ibid., at 3.3.

8. See fn 5.
