AI in Risk Scoring Brings Big Potential — But Accuracy, Bias Still Major Hurdles

The right Artificial Intelligence (AI) tools may be bringing speedier results in risk assessments, but human intelligence is still needed to ensure accuracy.

By Iva Karen

KUALA LUMPUR, June 27: Artificial Intelligence (AI) is changing how companies assess risk, with many moving towards real-time systems that can quickly detect suspicious activities or unreliable business partners.

Fraud examiner and entrepreneur Raymon Ram has warned that, despite the benefits, there are serious challenges that require attention.

He said more organisations, especially in sectors like banking and procurement, are using AI to spot risks faster. Instead of checking risks once a year or once a quarter, companies are shifting to “always-on” models that work continuously.

“These AI tools can scan both structured and unstructured data to flag possible issues in real time. It’s a big improvement, but it also brings new risks if not handled carefully. The first challenge is accuracy: if AI models are not properly tuned, they can produce too many false alerts or, worse, miss real threats.

“Too many false positives can overwhelm compliance teams, and if the system misses a serious risk, it can cause real damage. That’s why human checks and regular testing are still very important. The second issue is bias. AI systems learn from past data, and that data may contain unfair patterns or prejudices.

“If you’re not careful, the AI might treat certain people or groups unfairly without you even realising it. Companies must check their systems regularly to make sure they’re fair to everyone,” he added.
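Raymon’s point about tuning can be illustrated with a small sketch. The function, data, and threshold values below are hypothetical, but they show the trade-off he describes: set the alert threshold too low and compliance teams drown in false positives; set it too high and real threats slip through.

```python
# Hypothetical sketch: how an alert threshold trades false positives
# against missed threats in a risk-scoring model.

def evaluate_threshold(scores_labels, threshold):
    """Count false positives and missed threats at a given alert threshold.

    scores_labels: list of (risk_score, is_actual_threat) pairs.
    """
    false_positives = sum(1 for s, t in scores_labels if s >= threshold and not t)
    missed_threats = sum(1 for s, t in scores_labels if s < threshold and t)
    return false_positives, missed_threats

# Toy scored transactions: (model risk score, actually fraudulent?)
scored = [(0.95, True), (0.80, False), (0.70, True),
          (0.60, False), (0.40, False), (0.30, True)]

# A low threshold floods the team with false alerts;
# a high one quietly misses real threats.
for t in (0.2, 0.5, 0.9):
    fp, missed = evaluate_threshold(scored, t)
    print(f"threshold={t}: false_positives={fp}, missed_threats={missed}")
```

On this toy data, a threshold of 0.2 raises three false alerts but misses nothing, while 0.9 raises none but misses two real threats, which is why regular testing of the threshold matters.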
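The regular fairness checks he recommends can be as simple as comparing how often the model flags members of different groups. The sketch below is illustrative, not a standard API; the 80% “disparate impact” ratio used as a benchmark is a common rule of thumb, and the data is made up.

```python
# Hypothetical sketch: a simple fairness audit comparing how often a
# risk model flags members of different groups.

def flag_rates(records):
    """records: list of (group, was_flagged) pairs -> flag rate per group."""
    totals, flagged = {}, {}
    for group, was_flagged in records:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + int(was_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group flag rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Toy audit log: (group label, did the model flag this case?)
records = [("A", True), ("A", False), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", False), ("B", False)]

rates = flag_rates(records)       # group A: 0.25, group B: 0.50
ratio = disparate_impact(rates)   # 0.5, well below the 0.8 rule of thumb
print(rates, ratio)
```

A ratio far below 1.0, as here, would prompt a closer look at whether the training data carried the unfair patterns Raymon warns about.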

The third major concern is regulatory acceptance.

Regulators in Malaysia and in other countries remain wary of AI systems they cannot fully understand. “Authorities don’t like black-box systems — if a company can’t explain how its AI makes decisions, it could face legal or reputational problems,” Raymon said.

“Clear documentation and human oversight are now must-haves. These guidelines are telling us loud and clear: AI in compliance must be ethical, explainable, and defensible.”
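One way to avoid the black-box problem he describes is to score risk with transparent, documented rules that record a reason for every point added. The sketch below is a hypothetical illustration; the rule names, weights, and transaction fields are invented.

```python
# Hypothetical sketch: a transparent, rule-weighted risk score that
# records a reason for every point it adds, so each decision can be
# explained to a regulator. Rules and weights are illustrative.

RULES = [
    ("high_value_transfer", lambda tx: tx["amount"] > 50_000, 40),
    ("new_counterparty",    lambda tx: tx["counterparty_age_days"] < 30, 30),
    ("sanctioned_region",   lambda tx: tx["region"] in {"X", "Y"}, 50),
]

def score_with_reasons(tx):
    """Return a risk score plus a human-readable reason for each rule hit."""
    score, reasons = 0, []
    for name, rule, weight in RULES:
        if rule(tx):
            score += weight
            reasons.append(f"{name} (+{weight})")
    return score, reasons

tx = {"amount": 75_000, "counterparty_age_days": 10, "region": "Z"}
score, reasons = score_with_reasons(tx)
print(score, reasons)
```

Every flagged case carries its own audit trail, which is the kind of documentation and explainability the guidelines demand.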
–WE