Improving financial crime fighting with artificial intelligence
Teams that fight financial crime within financial institutions face a daunting task. While accurately monitoring and assessing billions of transactions, they are also expected to maintain a good customer experience for clients and comply with mounting and ever-changing regulatory requirements.
In meeting these demands, financial crime teams encounter a range of operational issues, from staffing shortfalls and inefficient processes to workflow bottlenecks and technology shortcomings. To address these and other challenges, financial institutions should adopt artificial intelligence (AI), and machine learning in particular, say Ruben Velstra and Michel Witte from financial services consultancy Delta Capita.
“Automation presents several opportunities for financial crime functions,” Velstra kicks off. “Take anti-money laundering as an example. Solutions powered by artificial intelligence can improve productivity by identifying high-risk situations that require human intervention, and detect hidden risks across siloed processes.”
Witte offers another example: “With the use of artificial intelligence, solutions can monitor customers’ transactions across their full life cycle and turn that activity into behavioural information. This enables financial crime teams to access customer ‘profiles’ at lightning speed, rather than building them manually.”
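To make this concrete, here is a minimal sketch of how transaction history might be aggregated into a simple behavioural profile. The column names, features, and data are illustrative assumptions for this sketch, not Delta Capita’s approach:

```python
import pandas as pd

# Illustrative transaction data; all column names and values are
# assumptions made for this sketch.
transactions = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2", "C2", "C2"],
    "timestamp": pd.to_datetime([
        "2023-01-05", "2023-02-17", "2023-01-09", "2023-01-12", "2023-03-30",
    ]),
    "amount": [120.0, 80.0, 15000.0, 9800.0, 22000.0],
    "is_cross_border": [False, False, True, True, True],
})

# Aggregate each customer's history into a simple behavioural profile:
# activity volume, typical amounts, and share of cross-border payments.
profiles = transactions.groupby("customer_id").agg(
    n_transactions=("amount", "size"),
    mean_amount=("amount", "mean"),
    max_amount=("amount", "max"),
    cross_border_ratio=("is_cross_border", "mean"),
)

print(profiles)
```

A real system would compute far richer features over rolling time windows, but the principle is the same: raw transactions become a compact profile an analyst or model can compare against new activity.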
This is easier said than done, though. In practice, artificial intelligence adoption is still in its infancy. “It really is still a niche, driven mainly by the real technology enthusiasts,” says Witte. “That’s mainly down to the challenges teams encounter along the way.”
According to the duo, the biggest difficulties fall into three groups: skills; data quality and availability; and transparency and understanding.
Skills
“For AI adoption, financial institutions tend to focus on hiring data science and internal ratings-based model specialists who are not always experienced in using AI,” Witte notes. “But AI skills should no longer be the exclusive domain of number-crunchers and data wizards. A growing number of people can use analytics tools without mastering complex techniques. Algorithms are increasingly generated automatically, which changes the required expertise.”
To address this, companies need to build skills among their staff that focus on understanding and interpreting results. When it comes to recruiting to fill those gaps more rapidly, finding people who can use the technology effectively can be very difficult. It is therefore essential that firms put mechanisms in place to supplement their existing talent, as high-performing project teams must combine technical and financial crime expertise to operate effectively.
Velstra adds, “Handling alerts or signals from AI systems also requires a different perspective from operational analysts. Rule-based systems mostly have black-and-white decision-making processes. Using AI to analyse client behaviour requires a more proactive, risk-based, and client-centric approach, and more professional judgement.”
Data quality and availability
In computer science, garbage in, garbage out (GIGO) is the principle that flawed or nonsensical input data produces low-quality output. As such, even if institutions have market-leading AI facilities, they must secure a constant stream of credible data to feed into them. But data standards often fall short, according to Witte.
“Financial institutions struggle to keep all client information up to date,” he continues. “And client data is often duplicated in internal systems or stored in silos. For example, if information is structured around siloed product lines, it can be difficult to analyse the data automatically and holistically.”
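As an illustration of the duplication problem, the sketch below flags likely duplicate client records across two siloed systems using a crude normalised matching key. All field names, values, and normalisation rules here are assumptions made for this example:

```python
import pandas as pd

# Illustrative client records pulled from two siloed product systems;
# the schema and values are hypothetical.
clients = pd.DataFrame({
    "source_system": ["mortgages", "payments", "payments"],
    "name": ["ACME Trading B.V.", "Acme Trading BV", "Globex Ltd"],
    "date_of_incorporation": ["1998-04-01", "1998-04-01", "2005-09-12"],
})

# Normalise the name into a crude matching key: lower-case, strip
# punctuation and common legal-form suffixes, trim whitespace.
key = (clients["name"].str.lower()
       .str.replace(r"[^\w\s]", "", regex=True)
       .str.replace(r"\b(bv|ltd|inc|gmbh)\b", "", regex=True)
       .str.strip())
clients["match_key"] = key + "|" + clients["date_of_incorporation"]

# Records sharing a key across systems are likely duplicates to reconcile.
print(clients[clients.duplicated("match_key", keep=False)])
```

Production-grade entity resolution would use fuzzier matching and more attributes, but even a simple key like this surfaces the cross-silo duplication Witte describes.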
Transparency and understanding
Velstra meanwhile asserts that institutions must build their capacity for interpreting data and making it transparent. Having quality data processed efficiently by machines is useless if the results cannot be understood and interpreted by the people within a business. Without this understanding, companies may also end up falling foul of regulatory changes, placing them in a difficult position.
Going on, Velstra states, “The use of AI in financial crime is still controversial, especially among regulators. This makes it important to emphasise that data and analytics models are used ethically. Institutions should be aware that data can contain bias; that they will need to be able to reconstruct automated decision-making for auditors; and that appropriate manual safeguards and rigorous testing are needed in the meantime.”
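One way to make automated decision-making reconstructable for auditors is to log, for every alert, the exact model inputs, model version, score, and threshold applied. The sketch below is a hypothetical illustration of that idea, not a reference implementation; the function name, fields, and values are all assumptions:

```python
import json
from datetime import datetime, timezone

def log_decision(alert_id, model_version, features, score, threshold):
    """Record everything needed to reconstruct one automated decision.

    A minimal sketch: a real system would write to an append-only,
    access-controlled store rather than a local JSON-lines file.
    """
    record = {
        "alert_id": alert_id,
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,    # exact model inputs
        "score": score,          # raw model output
        "threshold": threshold,  # decision rule applied
        "decision": "escalate" if score >= threshold else "dismiss",
    }
    with open("decision_log.jsonl", "a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

# Hypothetical example call; all values are illustrative.
log_decision("A-1042", "tm-model-2.3",
             {"amount": 22000, "cross_border": True}, 0.91, 0.8)
```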
Pointing to one recent example, Velstra cites a German bank that inadvertently blocked hundreds of customers’ accounts after tightening its automated controls. The ensuing reputational backlash highlights the risks of not addressing the points above adequately, and underlines the importance of aligning with internal stakeholders, such as compliance and audit, on the goals of using AI and embedding these in company policies.
How data and technology can help
The first step to tackling all these challenges is recognising data as both a problem and an opportunity, according to Velstra and Witte. Underlying issues should not be addressed from a financial crime perspective alone, but with a much broader scope.
“Commercial initiatives, for example, can benefit from client-centric data structures,” Witte expands. “Institutions should seamlessly update information by connecting to official public sources, such as Chambers of Commerce, and regularly ask clients to validate their data.”
Beyond that, Velstra warns that institutions should still “think carefully about the decision to make or buy AI technology”. While it might be tempting to build an in-house AI system and cut out the middleman, not every institution is in a position to assemble the specialised teams such a project demands. In that case, “effective, dedicated third-party players are available” and should be used.
Continuing, Velstra explains, “Financial institutions that trust their in-house capabilities must strike the right balance between continuous experimentation and frequently bringing relevant AI use cases into production.”
Showcase examples include: using AI in transaction monitoring to triage or prioritise alerts from rule-based scenarios; anomaly detection, which generates alerts for specific risks that existing rules cannot easily detect; and increasing name-matching efficiency in sanctions screening.
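To give a flavour of the second example, the sketch below fits an unsupervised isolation forest to synthetic transaction features and flags outliers as candidate alerts. The features, data, and contamination rate are illustrative assumptions, not a production design:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic per-transaction features (amount, hour of day); in a real
# system these would come from behavioural profiles like those above.
normal = rng.normal(loc=[100.0, 14.0], scale=[30.0, 3.0], size=(500, 2))
unusual = np.array([[9500.0, 3.0]])  # a large amount at an odd hour
X = np.vstack([normal, unusual])

# Fit an unsupervised model of "normal" behaviour; contamination is the
# assumed share of anomalies and would be tuned in practice.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)

flags = model.predict(X)  # -1 marks anomalies, 1 marks inliers
print(X[flags == -1])     # candidate alerts for analyst review
```

The appeal of this approach is that no rule has to be written in advance: the model learns what typical activity looks like and surfaces deviations for human review.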
“As we’ve highlighted, artificial intelligence is not easy to adopt,” Witte concludes. “But being aware of the challenges and developing a solid strategy will enable a plethora of possibilities. The time is ripe to start reaping the benefits.”