Imagine an algorithm that reviews thousands of financial transactions every second and flags the fraudulent ones.

Recent advances in artificial intelligence have made this possible, an important development for banks that are overwhelmed by massive volumes of daily transactions and face a growing challenge in combating financial crime: money laundering, terrorist financing and corruption.

However, AI is not flawless.

A report by The Next Web notes that companies using artificial intelligence to uncover crime are dealing with new challenges, such as algorithmic bias, the problem that occurs when an AI algorithm systematically harms a group of a particular gender, race or religion.

In recent years, poorly controlled algorithmic bias has damaged the reputations of companies that use it.

It is extremely important for companies to always be alert to the presence of such bias.

For example, in 2019 the algorithm behind Apple's credit card was found to be biased against women, sparking a public relations backlash against the company.

And in 2018, Amazon was forced to shut down an AI-powered recruitment tool that likewise showed bias against women.

Banks face similar challenges. Here is how they can use artificial intelligence to fight financial crime while avoiding the risks of algorithmic bias.

Catching criminals

Combating financial crime involves monitoring many transactions.

For example, ABN AMRO, the third-largest bank in the Netherlands, has around 3,400 employees who check and monitor transactions.

Traditional surveillance relies on strict rule-based systems and misses many emerging financial threats such as terrorist financing, illegal wildlife trade and healthcare fraud.

This is the main area in which AI algorithms can help.

Artificial intelligence algorithms can be trained to detect suspicious transactions, including those that deviate from a customer's normal behavior.

The data science team in ABN AMRO's Innovation and Design unit, headed by Malo van den Berg, built models that help find the unknown in financial transactions.

The team has been very successful at finding fraudulent transactions while minimizing false positives.

Unlike the fixed rules that banks currently apply to detect fraud, these algorithms can adapt to customers' changing habits and detect new threats that emerge as financial patterns gradually shift.
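
To make that contrast concrete, here is a minimal sketch of a fixed rule versus a baseline that adapts to each customer's own history. It is purely illustrative, not ABN AMRO's actual system; the threshold values and transaction figures are assumptions.

```python
import statistics

def fixed_rule_flag(amount, limit=10_000):
    """Static rule: flag any transaction above a hard-coded limit."""
    return amount > limit

def adaptive_flag(amount, customer_history, z_threshold=3.0):
    """Adaptive rule: flag a transaction only if it deviates strongly
    from this customer's own spending pattern (a simple z-score test)."""
    mean = statistics.mean(customer_history)
    stdev = statistics.stdev(customer_history)
    if stdev == 0:
        return amount != mean
    return abs((amount - mean) / stdev) > z_threshold

# A customer who routinely moves large sums trips the fixed rule on
# every payment, while the adaptive baseline reacts only to deviations.
history = [9_500, 11_200, 10_800, 9_900, 10_400]
print(fixed_rule_flag(12_000))           # True: a noisy alert
print(adaptive_flag(12_000, history))    # False: within this customer's norm
print(adaptive_flag(55_000, history))    # True: a genuine deviation
```

A production baseline would of course be far richer than a single z-score, but the adaptive principle is the same.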

"We are seeing patterns and things that we haven't seen before," Van den Berg says. "If our AI flags a transaction as a deviation from the customer's natural pattern, we look for the reason in the available information and verify it. If the investigation does not provide clarity about the payment, we can make inquiries with the client."

The Dutch bank uses unsupervised machine learning, a branch of artificial intelligence that can sift through massive amounts of unlabeled data and find related patterns that hint at which transactions are safe and which are suspicious. Unsupervised machine learning can help create dynamic systems for uncovering financial crime.
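
As an illustration of the unsupervised approach, the sketch below fits an isolation forest, a standard unsupervised anomaly detector, to unlabeled transaction features and then scores new payments. The features and data are invented for the example; this is not the bank's model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Unlabeled historical transactions: [amount, hour_of_day, tx_count_24h].
# No fraud labels are used anywhere; structure comes from the data alone.
history = np.column_stack([
    rng.lognormal(4.0, 0.5, 5_000),   # typical payment amounts
    rng.normal(14, 3, 5_000) % 24,    # activity centred on daytime hours
    rng.poisson(3, 5_000),            # usual transaction frequency
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(history)

# Score unseen payments: predict() returns -1 for isolated (anomalous)
# points and 1 for points that resemble the bulk of the data.
candidates = np.array([
    [60.0, 15.0, 2],      # ordinary daytime purchase
    [9_500.0, 3.0, 40],   # very large amount at 3 a.m. in a burst
])
print(model.predict(candidates))            # e.g. [ 1 -1 ]
print(model.decision_function(candidates))  # lower = more anomalous
```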

But as in other branches of artificial intelligence, unsupervised machine learning models can develop hidden biases that cause unwanted harm if not handled properly.

Removing unwanted biases

Data science and analytics teams at banks must strike the right balance so that their AI algorithms can detect fraudulent transactions without infringing on anyone's rights.

AI system developers make sure to avoid including problematic variables such as gender and race in their models.

The problem, however, is that other attributes can act as proxies for these sensitive variables, and AI scientists must ensure that such proxy factors do not influence the decision-making of their algorithms.

For example, in the case of Amazon's infamous recruitment algorithm, gender was not an explicit input to hiring decisions, yet the algorithm learned to associate negative outcomes with feminine names and terms such as "women's chess club."
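
One common way to test for such proxies, sketched below with synthetic data, is to check whether a model's input features can predict the sensitive attribute at all: if a simple auditor classifier beats chance, the feature set is leaking that information. The data and feature names here are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)

# Synthetic example: 'sensitive' is the attribute that must not drive
# decisions; ideally the model's features carry no signal about it.
n = 2_000
sensitive = rng.integers(0, 2, n)
clean_feature = rng.normal(size=n)                  # unrelated to the group
proxy_feature = sensitive + rng.normal(0, 0.5, n)   # leaks the group
X = np.column_stack([clean_feature, proxy_feature])

# If this auditor beats chance (about 0.5 accuracy for balanced groups),
# the features encode the sensitive attribute and need remediation.
auditor = LogisticRegression()
scores = cross_val_score(auditor, X, sensitive, cv=5)
print(f"Sensitive attribute predictable with accuracy {scores.mean():.2f}")
```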

"For example, when AI techniques are used to identify clients suspected of criminal activity, it must first be demonstrated that this AI treats all clients fairly with regard to sensitive characteristics (such as where they were born)," Van den Berg says.

Lars Haringa, a data scientist on Van den Berg's team, explains: "Not only does the data scientist building the AI model need to demonstrate the model's performance, but also to ethically justify its impact. This means that before the model goes into production, the data scientist must ensure compliance with regard to privacy, fairness and bias. An example is ensuring that employees do not develop biases as a result of using AI systems, by building statistical safeguards that guarantee employees are offered unbiased choices by the AI tools."
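
A simple form of such a statistical safeguard, again only a sketch over assumed monitoring data, is to check whether the system flags different groups of customers at materially different rates, a demographic-parity style test:

```python
import numpy as np

def flag_rate_disparity(flags, groups):
    """Return per-group alert rates and the max/min ratio between them.
    A ratio far above 1.0 means one group is flagged disproportionately
    often, and the model should be reviewed before further use."""
    rates = {g: flags[groups == g].mean() for g in np.unique(groups)}
    values = list(rates.values())
    return rates, max(values) / min(values)

# Assumed audit data: binary alert decisions plus a sensitive grouping
# that is used only for auditing, never as a model input.
flags = np.array([1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0])
groups = np.array(["A"] * 6 + ["B"] * 6)

rates, ratio = flag_rate_disparity(flags, groups)
print(rates)   # {'A': 0.333..., 'B': 0.333...}
print(ratio)   # 1.0 -> both groups flagged at the same rate
```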

Balanced cooperation

One of the challenges faced by companies that use AI algorithms is deciding how much detail to disclose about their AI systems.

On the one hand, companies want to take full advantage of collaborative work on algorithms and technology; on the other, they want to prevent malicious actors from manipulating those systems.

They also have a legal duty to protect customer data.

Banking, like many other sectors, is being reinvented and redefined by AI.

And because financial criminals have become more sophisticated in their methods and tactics, bankers will need all the help they can get to protect their customers and their reputations. Cooperation across the sector on smart crime-fighting technologies that respect the rights of all customers can be one of the best allies of bankers around the world.