Ensuring Ethical AI in Fraud Detection: FNA’s Perspective

By Federico Musciotto (Data Science Manager) & Matteo Neri (Data Science Manager)


As the pace of digitisation accelerates, so too does the scale and complexity of fraud. In countries across the world, the emergence of real-time payments and digital onboarding has enabled new forms of abuse: scams, mule networks, and social engineering attacks that outpace traditional fraud prevention methods. Faced with this challenge, several central banks and payment system operators are now moving beyond isolated institutional defences. They are building fraud portals—shared platforms designed to enable collaboration across public and private stakeholders in the detection, tracing, and resolution of fraud.

These initiatives are not theoretical. Malaysia’s National Scam Response Centre connects 48 financial institutions in a single response environment. Across Asia and the Middle East, national authorities are increasingly exploring fund tracing platforms to improve ecosystem-wide fraud response. In each of these examples, technology plays a critical role. And within that technology stack, artificial intelligence is becoming central to how fraud is detected, prioritised, and prevented.

But with this power comes a new responsibility: to ensure that the AI used in these national infrastructures operates ethically, transparently, and in the public interest. This imperative is underscored by BIS Innovation Hub Insight #63 [1], which highlights the importance of embedding explainability, privacy by design, and human-in-the-loop validation in fraud detection AI to build trust and operational resilience across financial ecosystems. It is reinforced by the Monetary Authority of Singapore's FEAT Principles [2], which provide a foundational governance framework for the Fair, Ethical, Accountable, and Transparent use of AI in financial services, especially when decisions affect individuals at scale.

Why AI is Necessary

AI is not used in these systems simply because it is fashionable or efficient. It is used because the scale and speed of fraud activity now exceed what any human team can analyse manually. Every day, payment systems process millions of transactions. Even a small percentage of false positives can overwhelm investigative capacity, while false negatives allow scams to go unchallenged. AI is necessary to filter what matters from what does not.

FNA’s fraud portal deployments integrate machine learning in two core areas. First, in the scoring of transactions for fraud risk pre-settlement, allowing suspicious payments to be delayed or reviewed. Second, in the scoring of accounts based on behavioural and network indicators, helping institutions detect money mules. In both cases, the models must not only be effective—they must also be fair, interpretable, and responsive to new evidence.

 

FNA’s Approach to Ethical AI

Ensuring the ethical use of AI in public infrastructure is not a one-time exercise. It is a continuous commitment that shapes system design, client engagement, and model governance. FNA’s approach rests on four main principles.

The first is explainability. In every deployment, FNA ensures that transaction and account scores can be understood by human investigators. While not all models are fully explainable, efforts are made to provide clear indicators, such as the number of counterparties, the speed of fund movement, or matches with known scam typologies, that can be examined and challenged. Improving the transparency and explainability of AI outputs remains a key focus of ongoing development.
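One common way to deliver this kind of explainability is to attach human-readable reason codes to a score. The sketch below is a minimal, assumed implementation: the feature names, thresholds, and code strings are placeholders for illustration, not FNA's actual reason-code scheme.

```python
def explain_account(features: dict[str, float]) -> list[str]:
    """Map feature values to investigator-facing reason codes.

    Thresholds are illustrative placeholders; in practice they would
    be tuned and governed per deployment.
    """
    reasons = []
    if features.get("n_counterparties", 0) >= 15:
        reasons.append("HIGH_FAN: unusually many counterparties")
    if features.get("median_dwell_minutes", float("inf")) <= 30:
        reasons.append("FAST_PASS_THROUGH: funds forwarded within 30 minutes")
    if features.get("matches_scam_typology", 0):
        reasons.append("KNOWN_TYPOLOGY: pattern matches a documented scam type")
    return reasons or ["NO_INDICATORS: score driven by model-internal features"]
```

Reason codes like these give investigators something concrete to examine and challenge, even when the underlying score comes from a less interpretable model.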

Second is feedback. Fraud models are only as good as the data they learn from. FNA integrates investigation outcomes from financial institutions and law enforcement directly into the model retraining process. When a flagged transaction is later confirmed to be benign, that information is used to reduce future false positives. When a new scam pattern is uncovered, the model adjusts accordingly. This creates a dynamic learning cycle grounded in operational experience.
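The feedback cycle described above can be sketched as a label-update step ahead of retraining. This is a simplified assumption about how outcomes flow back into training data; the outcome strings and label encoding are hypothetical.

```python
def apply_feedback(labels: dict[str, int], outcomes: dict[str, str]) -> dict[str, int]:
    """Fold investigation outcomes back into training labels.

    'confirmed_fraud' becomes a positive label and 'confirmed_benign'
    a negative one; open or inconclusive cases leave labels untouched.
    """
    updated = dict(labels)
    for txn_id, outcome in outcomes.items():
        if outcome == "confirmed_fraud":
            updated[txn_id] = 1
        elif outcome == "confirmed_benign":
            updated[txn_id] = 0  # reduces future false positives
    return updated
```

Run routinely before each retraining pass, a step like this is what turns one-off investigation results into the dynamic learning cycle the paragraph describes.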

Third is privacy. Many national platforms face legitimate concerns about data sovereignty and cross-institutional data sharing. FNA addresses this through a distributed system design. Each participant retains control over their own data. Scoring models operate centrally using non-sensitive data and, where necessary, locally to process personally identifiable information (PII). Where needed, cryptographic techniques further reduce the amount of information exchanged. This ensures that the benefits of collaboration are realised without compromising legal or ethical boundaries.
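One simple cryptographic technique in this spirit is keyed pseudonymisation: an institution replaces raw account identifiers with an HMAC before sharing, so the central platform can link activity of the same account across reports without ever seeing the raw identifier. This is an illustrative sketch of the general idea, not a description of FNA's specific design, which may use stronger techniques such as tokenisation services or private set intersection.

```python
import hashlib
import hmac

def pseudonymise(account_id: str, institution_key: bytes) -> str:
    """Keyed pseudonym for an account identifier.

    The same account always maps to the same pseudonym under one key,
    enabling cross-report linkage, while the raw ID stays local.
    """
    return hmac.new(institution_key, account_id.encode("utf-8"), hashlib.sha256).hexdigest()
```

Because the key never leaves the institution, the central platform cannot reverse pseudonyms back to real identifiers, keeping PII processing local as the distributed design requires.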

Finally, FNA supports structured oversight. In every deployment, clients are encouraged to establish governance mechanisms around the use of AI. This includes threshold setting, escalation rules, audit logs, and the ability to override model decisions. The platform includes features that can support these processes and enable accountability.

 

Looking Ahead

As fraud threats continue to evolve, the need for real-time, cross-institutional detection will only increase. AI will remain at the centre of that response. However, to retain public trust, it must operate with care. The future of ethical AI in national fraud systems should not be left to informal norms or internal policies. It should be formalised through shared standards, public sector leadership, and transparent evaluation.

FNA recommends several steps. First, that model explainability be treated as a core requirement, not an optional feature. Second, that public-private systems commit to routine feedback loops between model developers and end users. Third, that regulators define a minimum ethical framework for AI in fraud detection, including fairness, auditability, and privacy standards. And finally, that governments invest in independent validation of cross-institutional AI models, ensuring they reflect the priorities of the full ecosystem, not just the operator.

In the end, technology is not neutral. It reflects the choices we embed into it. At FNA, we believe that choices about fraud detection must be grounded not only in technical accuracy but in social responsibility. The promise of AI is real, but it must be guided by principles that match the scale of its impact. That is the work ahead, and it is work that must be done together.

 

References

[1] BIS Innovation Hub. Insight 63: Operationalising AI to fight payment fraud – the need for transparency and trust. Bank for International Settlements, 2024.
https://www.bis.org/fsi/publ/insights63.pdf 

[2] Monetary Authority of Singapore. Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector, February 2019.
https://www.mas.gov.sg/-/media/mas/news-and-publications/monographs-and-information-papers/feat-principles-updated-7-feb-19.pdf

