AI Helps, But Humans Still Key to Spotting Financial Risks

By Iva Karen

KUALA LUMPUR, June 17: While technologies such as artificial intelligence (AI) and robotic process automation (RPA) are streamlining compliance processes, human judgment ultimately remains essential, particularly in detecting red flags and managing complex risk scenarios, says Raymon Ram.

Automation is no doubt a powerful enabler, but it cannot substitute for professional judgment, said Raymon, who has years of experience in financial fraud detection. 

He explained that AI excels in flagging anomalies, analysing transactional behaviours, and surfacing hidden connections across data networks. 

“However, the interpretation of those red flags, considering geopolitical context, transaction purpose, or customer credibility, often requires nuanced human insight. Not all alerts carry the same weight. 

“A spike in transaction volume could be legitimate business growth or it could signal layering activity in money laundering. Making that distinction requires a trained eye,” Raymon said. 

Raymon, a certified anti-money laundering specialist, emphasised that the optimal model is a human-in-the-loop approach: automation handles initial data processing, pattern detection, and alert generation, while experienced compliance officers review findings, apply context, and make final determinations.

“This hybrid framework ensures that ethical standards, legal obligations, and real-world complexity are factored into every critical compliance decision, preserving both regulatory integrity and reputational trust.

“Ultimately, automation should amplify human expertise, not replace it,” Raymon said. 
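The division of labour Raymon describes — machines flag, people decide — can be sketched in a short, hypothetical Python fragment. The volume-spike rule, the field names, and the officer's decision logic here are illustrative assumptions, not anything from an actual compliance system.

```python
def auto_screen(transactions, flag_rule):
    """Automated stage: apply a detection rule to every transaction
    and emit alerts. No final decisions are made at this stage."""
    return [t for t in transactions if flag_rule(t)]

def review(alerts, officer_decision):
    """Human stage: a compliance officer's judgment (modelled here as
    a callable) gives the final disposition for each machine alert."""
    return {t["id"]: officer_decision(t) for t in alerts}

# Illustrative usage with a made-up volume-spike rule.
txns = [
    {"id": "t1", "amount": 120},
    {"id": "t2", "amount": 9500},
]
alerts = auto_screen(txns, lambda t: t["amount"] > 5000)
decisions = review(alerts, lambda t: "escalate" if t["amount"] > 9000 else "close")
```

The point of the structure is that `review` sits between every alert and any outcome: automation narrows the field, but a human disposition is always the last step.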

Commenting on how AI and machine learning (ML) are currently being used to enhance due diligence processes in financial institutions, he contended that they are used to automate the ingestion and analysis of vast, disparate data sources—including transactional records, watchlists, legal filings, adverse media, and beneficial ownership registries (if available).

“In enhanced due diligence (EDD), AI models map out complex financial relationships and detect hidden risk linkages, such as indirect ties to sanctioned entities. Natural language processing (NLP) enables rapid screening of adverse media and regulatory disclosures, helping compliance teams identify red flags with greater precision and speed. 

“ML further improves transaction monitoring by distinguishing between normal and anomalous behaviours in near real-time, significantly reducing false positives,” he said. 
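One of the capabilities Raymon mentions — mapping complex relationships to surface indirect ties to sanctioned entities — amounts to searching an ownership or relationship graph. A minimal sketch, assuming a simple adjacency-list graph and an arbitrary hop limit (both are illustrative choices, not a real EDD system's design):

```python
from collections import deque

def indirect_ties(graph, start, sanctioned, max_hops=3):
    """Breadth-first search over an ownership/relationship graph,
    returning sanctioned entities reachable from `start` within
    `max_hops` links, together with the number of hops."""
    seen = {start}
    queue = deque([(start, 0)])
    hits = []
    while queue:
        node, depth = queue.popleft()
        if node in sanctioned and node != start:
            hits.append((node, depth))
        if depth < max_hops:
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return hits

# A customer "A" is two links away from sanctioned entity "C".
ownership = {"A": ["B"], "B": ["C"], "C": ["D"]}
```

In practice the linkage data is far messier (aliases, partial beneficial-ownership records), which is where the ML and NLP layers Raymon describes come in; the graph traversal itself is the easy part.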


Touching on the challenges and risks of integrating AI into due diligence workflows, Raymon said that on data privacy, AI systems rely on aggregating vast amounts of sensitive customer data. 

Financial institutions must ensure strict adherence to privacy frameworks such as PDPA or GDPR, local banking secrecy laws, and cross-border data transfer restrictions. Any breach in handling or misuse of personal data could result in regulatory sanctions and reputational damage.

“Many AI models, particularly deep learning systems, operate as ‘black boxes’, making decisions that are difficult to interpret or audit. Regulators are increasingly concerned about the opacity of such systems, especially when they lead to risk-based exclusions or automated decision-making. 

“Compliance teams must be able to explain the rationale behind high-risk classifications and maintain documentation for audit trails,” he said.

Explaining automated risk scoring, Raymon pointed out that in terms of speed, it is exponentially faster than manual assessment, as it can evaluate thousands of data points—transaction histories, geolocation patterns, KYC metadata—in seconds. 

“This scalability is crucial for institutions managing high customer volumes and complex risk exposures. Meanwhile, in terms of accuracy, automation offers a more consistent application of scoring criteria: unlike manual assessments, which may vary between analysts, algorithms apply logic uniformly. 

“Machine learning models can detect hidden correlations or emerging risks that manual processes might overlook, such as unusual transactional behaviours across networks. However, accuracy is highly dependent on data quality and proper calibration. Incomplete or biased input data can skew outcomes and lead to misclassifications,” he added.
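The consistency point — identical criteria applied identically to every customer — is easiest to see in a toy weighted-factor scorer. The factor names, weights, and band thresholds below are entirely hypothetical placeholders; real models are calibrated against data, which is exactly why Raymon's caveat about data quality matters.

```python
# Hypothetical factor weights; a production model would calibrate
# these against historical data rather than hard-code them.
WEIGHTS = {
    "high_risk_country": 30,
    "pep_match": 40,
    "cash_intensive": 15,
    "adverse_media": 25,
}

def risk_score(factors):
    """Sum the weights of whichever risk factors apply, capped at 100.
    The same criteria run for every customer, so scoring cannot
    drift between analysts."""
    return min(100, sum(WEIGHTS[f] for f in factors if f in WEIGHTS))

def risk_band(score):
    """Map a numeric score to a review band (thresholds illustrative)."""
    return "high" if score >= 70 else "medium" if score >= 40 else "low"
```

A scorer like this is only as good as its inputs: if the adverse-media or PEP feeds are incomplete or biased, the uniformity works against you, misclassifying the same customers the same way every time.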