Money Mules: Understanding How Tookitaki Addresses the Top AML Threat
Money muling is a tactic used by money launderers to move illegal funds through the financial system. Essentially, a money mule is someone who transfers money between different accounts or jurisdictions on behalf of someone else. This is often done using false identities or other methods of obfuscation to conceal the true origins of the funds.
Money muling is a significant problem in the world of money laundering, as it is a relatively easy way for criminals to move funds around without being detected. It is estimated that billions of dollars are laundered each year through money muling, and this practice is becoming increasingly common as more and more financial transactions are conducted online.
In this article, we will take a closer look at the problem of money muling in money laundering and explore how Tookitaki's platform is addressing this threat to help financial institutions stay compliant with anti-money laundering (AML) regulations.
Understanding Money Muling
Definition and explanation of money muling
Money muling is the act of moving illegally obtained money through financial systems with the intention of hiding its illicit origin. It is a crucial stage of money laundering that allows criminals to conceal the origin of their illegal proceeds and prevent detection by law enforcement authorities. Money muling involves the use of individuals to move illicit funds, making it difficult for authorities to trace the source of the money.
Types of money muling
There are two main types of money muling: unwitting and witting. Unwitting money muling involves the use of individuals who are unaware that they are involved in criminal activity. Criminals deceive these individuals into transferring money on their behalf, usually by offering them payment or other incentives. Witting money muling, on the other hand, involves the use of individuals who are aware that they are involved in criminal activity. These individuals are usually recruited by criminal organizations and receive a share of the proceeds.
Methods used by money mules
Money mules use various methods to transfer illicit funds, including cash deposits, wire transfers, and cryptocurrency transactions. Criminals often use multiple methods to move the money to make it harder for authorities to track. Some money mules may also use their personal bank accounts or create shell companies to move the money, further complicating the money trail.
Understanding the methods and types of money muling is essential in developing effective strategies to prevent and detect money laundering activities. By identifying the patterns and behaviours of money mules, financial institutions and law enforcement agencies can better protect their customers and communities from the harms of money laundering.
The Impact of Money Muling on Money Laundering
Money muling is an effective way for money launderers to avoid detection and move their illegally obtained funds through the financial system. The process involves using third parties, often unwitting individuals, to transfer money from one account to another, disguising the origin of the funds.
Money muling is commonly used in various types of money laundering and related crimes, including fraud, drug trafficking, and terrorist financing. By recruiting individuals to transfer funds, money launderers can avoid triggering the red flags that would alert financial institutions to suspicious activity.
In recent years, there have been numerous cases of money laundering involving money muling. One high-profile example is the HSBC money laundering scandal, in which the bank was found to have facilitated the movement of over $800 million in drug money through its US operations. Money mules were used to move the funds across borders and into the banking system.
Other examples of money laundering involving money muling include the Liberty Reserve case, in which an underground digital currency exchange was found to have laundered over $6 billion, and the Bangladesh Bank cyber heist, in which hackers used money mules to move stolen funds.
These cases highlight the importance of identifying and preventing money muling as part of an effective anti-money laundering strategy.
Addressing the Threat of Money Muling with Tookitaki's Platform
Tookitaki is a leading provider of AML compliance solutions that help businesses detect and prevent money laundering. The platform uses advanced technologies and a community-based approach to identify suspicious activities and transactions, including those related to money muling.
Tookitaki's Anti-Money Laundering Suite (AMLS) is a comprehensive, end-to-end AML compliance platform designed to assist financial institutions in detecting, preventing and managing financial crimes. The platform is built on a foundation of "collective intelligence", operationalized to enable partner financial institutions to uncover money trails that aren't discoverable by today's standards. It uses machine learning and big data analytics to provide a comprehensive approach to detecting and preventing financial crime, allowing financial institutions to identify suspicious activity more quickly and efficiently.
The Anti-Financial Crime (AFC) Ecosystem by Tookitaki complements this by giving financial institutions a community-driven approach to anti-financial crime. Our Typology Repository, a key part of the AFC Ecosystem, is continuously updated with the latest money laundering and terrorist financing techniques, including the use of money mules. By accessing this information and best practices, financial institutions can enhance their compliance efforts and make it harder for criminals to evade detection.
Final Thoughts
As the threat of money muling continues to evolve, financial institutions need to stay up to date with the latest trends and developments in AML compliance. By partnering with a trusted AML compliance solutions provider like Tookitaki, businesses can ensure that they are well-equipped to address the ever-changing threat of money muling.
If you're interested in learning more about Tookitaki's platform and how it can help address the threat of money muling in your business, we encourage you to get in touch with us to book a demo.
Experience the most intelligent AML and fraud prevention platform
Our Thought Leadership Guides
Eliminating AI Hallucinations in Financial Crime Detection: A Governance-First Approach
Introduction: When AI Makes It Up — The High Stakes of “Hallucinations” in AML
This is the third instalment in our series, Governance-First AI Strategy: The Future of Financial Crime Detection.
- In Part 1, we explored the governance crisis created by compliance-heavy frameworks.
- In Part 2, we highlighted how Singapore’s AI Verify program is pioneering independent validation as the new standard.
In this post, we turn to one of the most urgent challenges in AI-driven compliance: AI hallucinations.
Imagine an AML analyst starting their day, greeted by a queue of urgent alerts. One, flagged as “high risk,” is generated by the newest AI tool. But as the analyst investigates, it becomes clear that some transactions cited by the AI never actually happened. The explanation, while plausible, is fabricated: a textbook case of AI hallucination.
Time is wasted. Trust in the AI system is shaken. And worse, while chasing a phantom, a genuine criminal scheme may slip through.
As artificial intelligence becomes the core engine for financial crime detection, the problem of hallucinations, outputs not grounded in real data or facts, poses a serious threat to compliance, regulatory trust, and operational efficiency.
What Are AI Hallucinations and Why Are They So Risky in Finance?
AI hallucinations occur when a model produces statements or explanations that sound correct but are not grounded in real data.
In financial crime compliance, this can lead to:
- Wild goose chases: Analysts waste valuable time chasing non-existent threats.
- Regulatory risk: Fabricated outputs increase the chance of audit failures or penalties.
- Customer harm: Legitimate clients may be incorrectly flagged, damaging trust and relationships.
Generative AI systems are especially vulnerable. Designed to create coherent responses, they can unintentionally invent entire scenarios. In finance, where every “fact” matters to reputations, livelihoods, and regulatory standing, there is no room for guesswork.

Why Do AI Hallucinations Happen?
The drivers are well understood:
- Gaps or bias in training data: Incomplete or outdated records force models to “fill in the blanks” with speculation.
- Overly creative design: Generative models excel at narrative-building but can fabricate plausible-sounding explanations without constraints.
- Ambiguous prompts or unchecked logic: Vague inputs encourage speculation, diverting the model from factual data.
Real-World Misfire: A Costly False Alarm
At a large bank, an AI-powered monitoring tool flagged accounts for “suspicious round-dollar transactions,” producing a detailed narrative about potential laundering.
The problem? Those transactions never occurred.
The AI had hallucinated the explanation, stitching together fragments of unrelated historical data. The result: a week-long audit, wasted resources, and an urgent reminder of the need for stronger governance over AI outputs.
A Governance-First Playbook to Stop Hallucinations
Forward-looking compliance teams are embedding anti-hallucination measures into their AI governance frameworks. Key practices include:
1. Rigorous, Real-World Model Training
AI models must be trained on thousands of verified AML cases, including edge cases and emerging typologies. Exposure to operational complexity reduces speculative outputs.

At Tookitaki, scenario-driven drills such as deepfake scam simulations and laundering typologies continuously stress-test the system to identify risks before they reach investigators or regulators.
2. Evidence-Based Outputs, Not Vague Alerts
Traditional systems often produce alerts like: “Possible layering activity detected in account X.” Analysts are left to guess at the reasoning.

Governance-first systems enforce data-anchored outputs: “Layering risk detected: five transactions on 20/06/25 match FATF typology #3. See attached evidence.”
This creates traceable, auditable insights, building efficiency and trust.
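The data-anchored principle above can be enforced mechanically. The sketch below is a minimal illustration (the class and function names are assumptions for demonstration, not Tookitaki APIs): an alert cannot be emitted at all unless it cites specific supporting evidence.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    transaction_id: str   # reference to a real, logged transaction
    date: str
    typology_ref: str     # e.g. an internal or FATF typology identifier

@dataclass
class Alert:
    account: str
    risk: str
    evidence: list = field(default_factory=list)

def emit_alert(alert: Alert) -> str:
    # Refuse to emit any narrative that is not anchored to cited evidence.
    if not alert.evidence:
        raise ValueError("alert rejected: no supporting evidence cited")
    cites = "; ".join(
        f"txn {e.transaction_id} on {e.date} (typology {e.typology_ref})"
        for e in alert.evidence
    )
    return f"{alert.risk} detected in account {alert.account}: {cites}"
```

The design choice is simple but auditable: every narrative that reaches an analyst carries its own citations, so a reviewer can trace each claim back to logged data.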
3. Human-in-the-Loop (HITL) Validation
Even advanced models require human oversight. High-stakes outputs, such as risk narratives or new typology detections, must pass through expert validation.

At Tookitaki, HITL ensures:
- Analytical transparency
- Reduced false positives
- No unexplained “black box” reasoning
4. Prompt Engineering and Retrieval-Augmented Generation (RAG)
Ambiguity invites hallucinations. Precision prompts, combined with RAG techniques, ensure outputs are tied to verified databases and transaction logs, making fabrication nearly impossible.
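The retrieval-grounding idea can be sketched in a few lines. This toy example (the log and function names are illustrative assumptions, not a real RAG pipeline) shows the core constraint: the system may only describe records it actually retrieved from a verified store, and says so explicitly when nothing is found.

```python
# A verified transaction store stands in for the retrieval index.
VERIFIED_LOG = [
    {"id": "T100", "account": "X", "amount": 9900, "date": "2025-06-20"},
    {"id": "T101", "account": "X", "amount": 9900, "date": "2025-06-21"},
    {"id": "T200", "account": "Y", "amount": 120, "date": "2025-06-22"},
]

def retrieve(account: str) -> list:
    """Fetch only records that exist in the verified store."""
    return [t for t in VERIFIED_LOG if t["account"] == account]

def ground_answer(account: str) -> str:
    records = retrieve(account)
    if not records:
        # No grounding data: admit it rather than invent a narrative.
        return f"No verified transactions found for account {account}."
    cited = ", ".join(
        f"{t['id']} ({t['amount']} on {t['date']})" for t in records
    )
    return f"Account {account}: {len(records)} verified transaction(s): {cited}"
```

A full RAG system adds embeddings, ranking, and a generative model on top, but the governance property is the same: no retrieval, no claim.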
Spotlight: Tookitaki’s Precision-First AI Philosophy
Tookitaki’s compliance platform is built on a governance-first architecture that treats hallucination prevention as a measurable objective.
- Scenario-Driven Simulations: Rare typologies and evolving crime patterns are continuously tested to surface potential weaknesses before deployment.
- Community-Powered Validation: Detection logic is refined in real time through feedback from a global network of financial crime experts.
- Mandatory Fact Citations: Every AI-generated narrative is backed by case data and audit references, accelerating compliance reviews and strengthening regulatory confidence.
At Tookitaki, we recognise that no AI system can be infallible. As leading research highlights, some real-world questions are inherently unanswerable. That is why our goal is not absolute perfection, but precision-driven AI that makes hallucinations statistically negligible and fully traceable — delivering factual integrity at scale.

Conclusion: Factual Integrity Is the Foundation of Trust
Eliminating hallucinations is not just a technical safeguard. It is a governance imperative. Compliance teams that embed evidence-based outputs, rigorous training, human-in-the-loop validation, and retrieval-anchored design will not only reduce wasted effort but also strengthen regulatory confidence and market reputation.
Key Takeaways from Part 3:
- AI hallucinations erode trust, waste resources, and expose firms to regulatory risk.
- Governance-first frameworks prevent hallucinations by enforcing evidence-backed, auditable outputs.
- Zero-hallucination AI is not optional. It is the foundation of responsible financial crime detection.
Are you asking your AI to show its data?
If not, you may be chasing ghosts.
In the next blog, we will explore how building an integrated, agentic AI strategy, linking model creation to real-time risk detection, can shift compliance from reactive to resilient.

When MAS Calls and It’s Not MAS: Inside Singapore’s Latest Impersonation Scam
A phone rings in Singapore.
The caller ID flashes the name of a trusted brand, M1 Limited.
A stern voice claims to be from the Monetary Authority of Singapore (MAS).
“There’s been suspicious activity linked to your identity. To protect your money, we’ll need you to transfer your funds to a safe account immediately.”
For at least 13 Singaporeans since September 2025, this chilling scenario wasn’t fiction. It was the start of an impersonation scam that cost victims more than S$360,000 in a matter of weeks.
Fraudsters had merged two of Singapore’s most trusted institutions, M1 and MAS, into one seamless illusion. And it worked.
The episode underscores a deeper truth: as digital trust grows, it also becomes a weapon. Scammers no longer just mimic banks or brands. They now borrow institutional credibility itself.

The Anatomy of the Scam
According to police advisories, this new impersonation fraud unfolds in a deceptively simple series of steps:
- The Setup – A Trusted Name on Caller ID
Victims receive calls from numbers spoofed to appear as M1’s customer service line. The scammers claim that the victim’s account or personal data has been compromised and is being used for illegal activity.
- The Transfer – The MAS Connection
Mid-call, the victim is redirected to another “officer” who introduces themselves as an investigator from the Monetary Authority of Singapore. The tone shifts to urgency and authority.
- The Hook – The ‘Safe Account’ Illusion
The supposed MAS officer instructs the victim to move money into a “temporary safety account” for protection while an “investigation” is ongoing. Every interaction sounds professional, from background call-centre noise to scripted verification questions.
- The Extraction – Clean Sweep
Once the transfer is made, communication stops. Victims soon realise that their funds, sometimes their life savings, have been drained into mule accounts and dispersed across digital payment channels.
The brilliance of this scam lies in its institutional layering. By impersonating both a telecom company and the national regulator, the fraudsters created a perfect loop of credibility. Each brand reinforced the other, leaving victims little reason to doubt.
Why Victims Fell for It: The Psychology of Authority
Fraudsters have long understood that fear and trust are two sides of the same coin. This scam exploited both with precision.
1. Authority Bias
When a call appears to come from MAS, Singapore’s financial regulator, victims instinctively comply. MAS is synonymous with legitimacy. Questioning its authority feels almost unthinkable.
2. Urgency and Fear
The narrative of “criminal misuse of your identity” triggers panic. Victims are told their accounts are under investigation, pushing them to act immediately before they “lose everything.”
3. Technical Authenticity
Spoofed numbers, legitimate-sounding scripts, and even hold music similar to M1’s call centre lend realism. The environment feels procedural, not predatory.
4. Empathy and Rapport
Scammers often sound calm and helpful. They “guide” victims through the process, framing transfers as protective, not suspicious.
These psychological levers bypass logic. Even well-educated professionals have fallen victim, proving that awareness alone is not enough when deception feels official.
The Laundering Playbook Behind the Scam
Once the funds leave the victim’s account, they enter a machinery that’s disturbingly efficient: the mule network.
1. Placement
Funds first land in personal accounts controlled by local money mules, individuals who allow access to their bank accounts in exchange for commissions. Many are recruited via Telegram or social media ads promising “easy income.”
2. Layering
Within hours, funds are split and moved:
- To multiple domestic mule accounts under different names.
- Through remittance platforms and e-wallets to obscure trails.
- Occasionally into crypto exchanges for rapid conversion and cross-border transfer.
3. Integration
Once the money has been sufficiently layered, it’s reintroduced into the economy through:
- Purchases of high-value goods such as luxury items or watches.
- Peer-to-peer transfers masked as legitimate business payments.
- Real-estate or vehicle purchases under third-party names.
Each stage widens the distance between the victim’s account and the fraudster’s wallet, making recovery almost impossible.
What begins as a phone scam ends as money laundering in motion, linking consumer fraud directly to compliance risk.
A Surge in Sophisticated Scams
This impersonation scheme is part of a larger wave reshaping Singapore’s fraud landscape:
- Government Agency Impersonations:
Earlier in 2025, scammers posed as the Ministry of Health and SingPost, tricking victims into paying fake fees for “medical” or “parcel-related” issues.
- Deepfake CEO and Romance Scams:
In March 2025, a Singapore finance director nearly lost US$499,000 after a deepfake video impersonated her CEO during a virtual meeting.
- Job and Mule Recruitment Scams:
Thousands of locals have been drawn into acting as unwitting money mules through fake job ads offering “commission-based transfers.”
The lines between fraud, identity theft, and laundering are blurring, powered by social engineering and emerging AI tools.
Singapore’s Response: Technology Meets Policy
In an unprecedented move, Singapore’s banks are introducing a new anti-scam safeguard beginning 15 October 2025.
Accounts with balances above S$50,000 will face a 24-hour hold or review when withdrawals exceed 50% of their total funds in a single day.
The goal is to give banks and customers time to verify large or unusual transfers, especially those made under pressure.
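The safeguard described above is a concrete, checkable rule. The sketch below expresses it directly (the function name is an assumption for illustration; the S$50,000 balance threshold and 50% single-day withdrawal ratio come from the measure as described):

```python
HOLD_BALANCE_THRESHOLD = 50_000   # S$ balance above which the safeguard applies
HOLD_WITHDRAWAL_RATIO = 0.5      # more than 50% of funds withdrawn in one day

def triggers_hold(balance_start_of_day: float, withdrawn_today: float) -> bool:
    """Return True if the announced 24-hour anti-scam hold would apply."""
    return (
        balance_start_of_day > HOLD_BALANCE_THRESHOLD
        and withdrawn_today > HOLD_WITHDRAWAL_RATIO * balance_start_of_day
    )
```

For example, withdrawing S$50,000 in one day from an S$80,000 account would trigger the hold, while the same withdrawal spread over several days, or from a smaller account, would not.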
This measure complements other initiatives:
- Anti-Scam Command (ASC): A joint force between the Singapore Police Force, MAS, and IMDA that coordinates intelligence across sectors.
- Digital Platform Code of Practice: Requiring telcos and platforms to share threat information faster.
- Money Mule Crackdowns: Banks and police continue to identify and freeze mule accounts, often through real-time data exchange.
It’s an ecosystem-wide effort that recognises what scammers already exploit: financial crime doesn’t operate in silos.

Red Flags for Banks and Fintechs
To prevent similar losses, financial institutions must detect the digital fingerprints of impersonation scams long before victims report them.
1. Transaction-Level Indicators
- Sudden high-value transfers from retail accounts to new or unrelated beneficiaries.
- Full-balance withdrawals or transfers shortly after a suspicious inbound call pattern (if linked data exists).
- Transfers labelled “safe account,” “temporary holding,” or other unusual memo descriptors.
- Rapid pass-through transactions to accounts showing no consistent economic activity.
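Two of the indicators above lend themselves to simple rule sketches. The thresholds and function names below are assumptions for demonstration, not production detection logic:

```python
# Memo descriptors observed in "safe account" impersonation scams.
SUSPICIOUS_MEMOS = {"safe account", "temporary holding"}

def memo_red_flag(memo: str) -> bool:
    """Flag transfer memos containing descriptors seen in impersonation scams."""
    text = memo.lower()
    return any(term in text for term in SUSPICIOUS_MEMOS)

def pass_through_red_flag(inflow: float, outflow: float,
                          hours_between: float,
                          max_hours: float = 24.0,
                          ratio: float = 0.9) -> bool:
    """Flag accounts that forward most of an inbound amount within a short
    window, a pattern typical of mule accounts with no real economic activity."""
    return hours_between <= max_hours and outflow >= ratio * inflow
```

In practice such rules would be one layer among many, combined with KYC and behavioural signals rather than used in isolation.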
2. KYC/CDD Risk Indicators
- Accounts receiving multiple inbound transfers from unrelated individuals, indicating mule behaviour.
- Beneficiaries with no professional link to the victim or stated purpose.
- Customers with recently opened accounts showing immediate high-velocity fund movements.
- Repeated links to shared devices, IPs, or contact numbers across “unrelated” customers.
3. Behavioural Red Flags
- Elderly or mid-income customers attempting large same-day transfers after phone interactions.
- Requests from customers to “verify” MAS or bank staff, a potential sign of ongoing social engineering.
- Multiple failed transfer attempts followed by a successful large payment to a new payee.
For compliance and fraud teams, these clues form the basis of scenario-driven detection, revealing intent even before loss occurs.
Why Fragmented Defences Keep Failing
Even with advanced fraud controls, isolated detection still struggles against networked crime.
Each bank sees only what happens within its own perimeter.
Each fintech monitors its own platform.
But scammers move across them all, exploiting the blind spots in between.
That’s the paradox: stronger individual controls, yet weaker collaborative defence.
To close this gap, financial institutions need collaborative intelligence, a way to connect insights across banks, payment platforms, and regulators without breaching data privacy.
How Collaborative Intelligence Changes the Game
That’s precisely where Tookitaki’s AFC Ecosystem comes in.
1. Shared Scenarios, Shared Defence
The AFC Ecosystem brings together compliance experts from across ASEAN and ANZ to contribute and analyse real-world scenarios, including impersonation scams, mule networks, and AI-enabled frauds.
When one member flags a new scam pattern, others gain immediate visibility, turning isolated awareness into collaborative defence.
2. FinCense: Scenario-Driven Detection
Tookitaki’s FinCense platform converts these typologies into actionable detection models.
If a bank in Singapore identifies a “safe account” transfer typology, that logic can instantly be adapted to other institutions through federated learning, without sharing customer data.
It’s collaboration powered by AI, built for privacy.
3. AI Agents for Faster Investigations
FinMate, Tookitaki’s AI copilot, assists investigators by summarising cases, linking entities, and surfacing relationships between mule accounts.
Meanwhile, Smart Disposition automatically narrates alerts, helping analysts focus on risk rather than paperwork.
Together, they accelerate how financial institutions identify, understand, and stop impersonation scams before they scale.
Conclusion: Trust as the New Battleground
Singapore’s latest impersonation scam proves that fraud has evolved. It no longer just exploits systems but the very trust those systems represent.
When fraudsters can sound like regulators and mimic entire call-centre environments, detection must move beyond static rules. It must anticipate scenarios, adapt dynamically, and learn collaboratively.
For banks, fintechs, and regulators, the mission is not just to block transactions. It is to protect trust itself.
Because in the digital economy, trust is the currency everything else depends on.
With collaborative intelligence, real-time detection, and the right technology backbone, that trust can be defended, not just restored after losses but safeguarded before they occur.

How Collective Intelligence Can Transform AML Collaboration Across ASEAN
Financial crime in ASEAN doesn’t recognise borders — yet many of the region’s financial institutions still defend against it as if it does.
Across Southeast Asia, a wave of interconnected fraud, mule, and laundering operations is exploiting the cracks between countries, institutions, and regulatory systems. These crimes are increasingly digital, fast-moving, and transnational, moving illicit funds through a web of banks, payment apps, and remittance providers.
No single institution can see the full picture anymore. But what if they could — collectively?
That’s the promise of collective intelligence: a new model of anti-financial crime collaboration that helps banks and fintechs move from isolated detection to shared insight, from reactive controls to proactive defence.

The Fragmented Fight Against Financial Crime
For decades, financial institutions in ASEAN have built compliance systems in silos — each operating within its own data, its own alerts, and its own definitions of risk.
Yet today’s criminals don’t operate that way.
They leverage networks. They use the same mule accounts to move money across different platforms. They exploit delays in cross-border data visibility. And they design schemes that appear harmless when viewed within one institution’s walls — but reveal clear criminal intent when seen across the ecosystem.
The result is an uneven playing field:
- Fragmented visibility: Each bank sees only part of the customer journey.
- Duplicated effort: Hundreds of institutions investigate similar alerts separately.
- Delayed response: Without early warning signals from peers, detection lags behind crime.
Even with strong internal controls, compliance teams are chasing symptoms, not patterns. The fight is asymmetric — and criminals know it.
Scenario 1: The Cross-Border Money Mule Network
In 2024, regulators in Malaysia, Singapore, and the Philippines jointly uncovered a sophisticated mule network linked to online job scams.
Victims were recruited through social media posts promising part-time work, asked to “process transactions,” and unknowingly became money mules.
Funds were deposited into personal accounts in the Philippines, layered through remittance corridors into Malaysia, and cashed out via ATMs in Singapore — all within 48 hours.
Each financial institution saw only a fragment:
- A remittance provider noticed repeated small transfers.
- A bank saw ATM withdrawals.
- A payment platform flagged a sudden spike in deposits.
Individually, none of these signals triggered escalation.
But collectively, they painted a clear picture of laundering activity.
This is where collective intelligence could have made the difference — if these institutions shared typologies, device fingerprints, or transaction patterns, the scheme could have been detected far earlier.
Scenario 2: The Regional Scam Syndicate
In 2025, Thai authorities dismantled a syndicate that defrauded victims across ASEAN through fake investment platforms.
Funds collected in Thailand were sent to shell firms in Cambodia and the Philippines, then layered through e-wallets linked to unlicensed payment agents in Vietnam.
Despite multiple suspicious activity reports (SARs) being filed, no single institution could connect the dots fast enough.
Each SAR told a piece of the story, but without shared context — names, merchant IDs, or recurring payment routes — the underlying network remained invisible for months.
By the time the link was established, millions had already vanished.
This case reflects a growing truth: isolation is the weakest point in financial crime defence.
Why Traditional AML Systems Fall Short
Most AML and fraud systems across ASEAN were designed for a slower era — when payments were batch-processed, customer bases were domestic, and typologies evolved over years, not weeks.
Today, they struggle against the scale and speed of digital crime. The challenges are familiar across the industry:
- Siloed tools: Transaction monitoring, screening, and onboarding often run on separate platforms.
- Inconsistent entity view: Fraud and AML systems assess the same customer differently.
- Fragmented data: No single source of truth for risk or identity.
- Reactive detection: Alerts are investigated in isolation, without the benefit of peer insights.
The result? High false positives, slow investigations, and missed cross-institutional patterns.
Criminals exploit these blind spots — shifting tactics across borders and platforms faster than detection rules can adapt.

The Case for Collective Intelligence
Collective intelligence offers a new way forward.
It’s the idea that by pooling anonymised insights, institutions can collectively detect threats no single bank could uncover alone. Instead of sharing raw data, banks and fintechs share patterns, typologies, and red flags — learning from each other’s experiences without compromising confidentiality.
In practice, this looks like:
- A payment institution sharing a new mule typology with regional peers.
- A bank leveraging cross-institution risk indicators to validate an alert.
- Multiple FIs aligning detection logic against a shared set of fraud scenarios.
This model turns what used to be isolated vigilance into a networked defence mechanism.
Each participant adds intelligence that strengthens the whole ecosystem.
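One simple building block for sharing indicators without raw data is one-way tokenisation. The sketch below is illustrative only (the salt and function name are assumptions, not how the AFC Ecosystem works): two institutions that see the same device fingerprint can detect the overlap by comparing tokens, without either revealing the underlying value.

```python
import hashlib

def indicator_token(raw_value: str, shared_salt: str) -> str:
    """Derive a one-way matching token from a raw indicator such as a
    device fingerprint, so peers can match without exchanging raw data."""
    return hashlib.sha256((shared_salt + raw_value).encode("utf-8")).hexdigest()

# Both institutions derive the token locally with an agreed salt;
# only the tokens are compared, never the fingerprints themselves.
token_bank_a = indicator_token("device-abc-123", "consortium-salt")
token_bank_b = indicator_token("device-abc-123", "consortium-salt")
```

Real consortium schemes add key rotation and stronger privacy guarantees, but the principle is the same: shared patterns, not shared customer data.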
How ASEAN Regulators Are Encouraging Collaboration
Collaboration isn’t just an innovation — it’s becoming a regulatory expectation.
- Singapore: MAS has called for greater intelligence-sharing through public–private partnerships and cross-border AML/CFT collaboration.
- Philippines: BSP has partnered with industry associations like Fintech Alliance PH to develop joint typology repositories and scenario-based reporting frameworks.
- Malaysia: BNM’s National Risk Assessment and Financial Sector Blueprint both emphasise collective resilience and information exchange between institutions.
The direction is clear — regulators are recognising that fighting financial crime is a shared responsibility.
AFC Ecosystem: Turning Collaboration into Practice
The AFC Ecosystem brings this vision to life.
It is a community-driven platform where compliance professionals, regulators, and industry experts across ASEAN share real-world financial crime scenarios and red-flag indicators in a structured, secure way.

Eliminating AI Hallucinations in Financial Crime Detection: A Governance-First Approach
Introduction: When AI Makes It Up — The High Stakes of “Hallucinations” in AML
This is the third instalment in our series, Governance-First AI Strategy: The Future of Financial Crime Detection.
- In Part 1, we explored the governance crisis created by compliance-heavy frameworks.
- In Part 2, we highlighted how Singapore’s AI Verify program is pioneering independent validation as the new standard.
In this post, we turn to one of the most urgent challenges in AI-driven compliance: AI hallucinations.
Imagine an AML analyst starting their day, greeted by a queue of urgent alerts. One, flagged as “high risk,” is generated by the newest AI tool. But as the analyst investigates, it becomes clear that some transactions cited by the AI never actually happened. The explanation, while plausible, is fabricated: a textbook case of AI hallucination.
Time is wasted. Trust in the AI system is shaken. And worse, while chasing a phantom, a genuine criminal scheme may slip through.
As artificial intelligence becomes the core engine for financial crime detection, the problem of hallucinations (outputs not grounded in real data or facts) poses a serious threat to compliance, regulatory trust, and operational efficiency.
What Are AI Hallucinations and Why Are They So Risky in Finance?
AI hallucinations occur when a model produces statements or explanations that sound correct but are not grounded in real data.
In financial crime compliance, this can lead to:
- Wild goose chases: Analysts waste valuable time chasing non-existent threats.
- Regulatory risk: Fabricated outputs increase the chance of audit failures or penalties.
- Customer harm: Legitimate clients may be incorrectly flagged, damaging trust and relationships.
Generative AI systems are especially vulnerable. Designed to create coherent responses, they can unintentionally invent entire scenarios. In finance, where every “fact” matters to reputations, livelihoods, and regulatory standing, there is no room for guesswork.

Why Do AI Hallucinations Happen?
The drivers are well understood:
- Gaps or bias in training data: Incomplete or outdated records force models to “fill in the blanks” with speculation.
- Overly creative design: Generative models excel at narrative-building but can fabricate plausible-sounding explanations without constraints.
- Ambiguous prompts or unchecked logic: Vague inputs encourage speculation, diverting the model from factual data.
Real-World Misfire: A Costly False Alarm
At a large bank, an AI-powered monitoring tool flagged accounts for “suspicious round-dollar transactions,” producing a detailed narrative about potential laundering.
The problem? Those transactions never occurred.
The AI had hallucinated the explanation, stitching together fragments of unrelated historical data. The result: a week-long audit, wasted resources, and an urgent reminder of the need for stronger governance over AI outputs.
A Governance-First Playbook to Stop Hallucinations
Forward-looking compliance teams are embedding anti-hallucination measures into their AI governance frameworks. Key practices include:
1. Rigorous, Real-World Model Training
AI models must be trained on thousands of verified AML cases, including edge cases and emerging typologies. Exposure to operational complexity reduces speculative outputs.
At Tookitaki, scenario-driven drills such as deepfake scam simulations and laundering typologies continuously stress-test the system to identify risks before they reach investigators or regulators.
2. Evidence-Based Outputs, Not Vague Alerts
Traditional systems often produce alerts like: “Possible layering activity detected in account X.” Analysts are left to guess at the reasoning.
Governance-first systems enforce data-anchored outputs: “Layering risk detected: five transactions on 20/06/25 match FATF typology #3. See attached evidence.”
This creates traceable, auditable insights, building efficiency and trust.
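One way to enforce data-anchored outputs is to make evidence a structural requirement rather than a convention. The sketch below is illustrative, not Tookitaki's actual implementation: the class names (`EvidenceRef`, `AnchoredAlert`) and fields are assumptions, but the idea is that an alert object simply cannot be constructed without at least one reference to a verifiable record.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EvidenceRef:
    """A pointer to a verifiable record backing an alert narrative."""
    transaction_id: str
    date: str
    typology: str  # e.g. "FATF typology #3"

@dataclass(frozen=True)
class AnchoredAlert:
    """An alert that cannot exist without supporting evidence."""
    account_id: str
    risk: str
    evidence: tuple[EvidenceRef, ...] = field(default_factory=tuple)

    def __post_init__(self):
        # Reject vague, evidence-free alerts at construction time.
        if not self.evidence:
            raise ValueError("alert rejected: no evidence references attached")

# A valid, evidence-backed alert in the spirit of the example above:
alert = AnchoredAlert(
    account_id="X",
    risk="Layering risk detected",
    evidence=(EvidenceRef("TX-1001", "2025-06-20", "FATF typology #3"),),
)
```

Because the check runs at construction, every downstream consumer (case manager, audit log, regulator export) can assume the evidence trail exists.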
3. Human-in-the-Loop (HITL) Validation
Even advanced models require human oversight. High-stakes outputs, such as risk narratives or new typology detections, must pass through expert validation.
At Tookitaki, HITL ensures:
- Analytical transparency
- Reduced false positives
- No unexplained “black box” reasoning
4. Prompt Engineering and Retrieval-Augmented Generation (RAG)
Ambiguity invites hallucinations. Precision prompts, combined with RAG techniques, ensure outputs are tied to verified databases and transaction logs, making fabrication far harder.
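The retrieval-first principle can be shown in a few lines. This is a minimal sketch, not a real RAG pipeline: `transaction_log` stands in for the verified data store, and the function names are assumptions. The key behaviour is the refusal path, where the system generates nothing rather than speculating when retrieval comes back empty.

```python
def grounded_narrative(account_id, claim_tx_ids, transaction_log):
    """Only narrate facts retrievable from the verified log.

    transaction_log: dict mapping transaction id -> record (the trusted store).
    Returns a narrative built solely from retrieved records, or a refusal.
    """
    retrieved = [transaction_log[t] for t in claim_tx_ids if t in transaction_log]
    if not retrieved:
        # Refuse rather than invent: the anti-hallucination guarantee.
        return f"No verified records found for account {account_id}; no narrative generated."
    lines = [f"- {r['date']}: {r['amount']} to {r['beneficiary']}" for r in retrieved]
    return f"Verified activity for account {account_id}:\n" + "\n".join(lines)
```

In a production system, retrieval would hit a transaction database and the narrative would come from a constrained language model, but the contract is the same: no retrieved evidence, no output.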
Spotlight: Tookitaki’s Precision-First AI Philosophy
Tookitaki’s compliance platform is built on a governance-first architecture that treats hallucination prevention as a measurable objective.
- Scenario-Driven Simulations: Rare typologies and evolving crime patterns are continuously tested to surface potential weaknesses before deployment.
- Community-Powered Validation: Detection logic is refined in real time through feedback from a global network of financial crime experts.
- Mandatory Fact Citations: Every AI-generated narrative is backed by case data and audit references, accelerating compliance reviews and strengthening regulatory confidence.
At Tookitaki, we recognise that no AI system can be infallible. As leading research highlights, some real-world questions are inherently unanswerable. That is why our goal is not absolute perfection, but precision-driven AI that makes hallucinations statistically negligible and fully traceable — delivering factual integrity at scale.

Conclusion: Factual Integrity Is the Foundation of Trust
Eliminating hallucinations is not just a technical safeguard. It is a governance imperative. Compliance teams that embed evidence-based outputs, rigorous training, human-in-the-loop validation, and retrieval-anchored design will not only reduce wasted effort but also strengthen regulatory confidence and market reputation.
Key Takeaways from Part 3:
- AI hallucinations erode trust, waste resources, and expose firms to regulatory risk.
- Governance-first frameworks prevent hallucinations by enforcing evidence-backed, auditable outputs.
- Zero-hallucination AI is not optional. It is the foundation of responsible financial crime detection.
Are you asking your AI to show its data?
If not, you may be chasing ghosts.
In the next blog, we will explore how building an integrated, agentic AI strategy, linking model creation to real-time risk detection, can shift compliance from reactive to resilient.

When MAS Calls and It’s Not MAS: Inside Singapore’s Latest Impersonation Scam
A phone rings in Singapore.
The caller ID flashes the name of a trusted brand, M1 Limited.
A stern voice claims to be from the Monetary Authority of Singapore (MAS).
“There’s been suspicious activity linked to your identity. To protect your money, we’ll need you to transfer your funds to a safe account immediately.”
For at least 13 Singaporeans since September 2025, this chilling scenario wasn’t fiction. It was the start of an impersonation scam that cost victims more than S$360,000 in a matter of weeks.
Fraudsters had merged two of Singapore’s most trusted institutions, M1 and MAS, into one seamless illusion. And it worked.
The episode underscores a deeper truth: as digital trust grows, it also becomes a weapon. Scammers no longer just mimic banks or brands. They now borrow institutional credibility itself.

The Anatomy of the Scam
According to police advisories, this new impersonation fraud unfolds in a deceptively simple series of steps:
1. The Setup – A Trusted Name on Caller ID
Victims receive calls from numbers spoofed to appear as M1’s customer service line. The scammers claim that the victim’s account or personal data has been compromised and is being used for illegal activity.
2. The Transfer – The MAS Connection
Mid-call, the victim is redirected to another “officer” who introduces themselves as an investigator from the Monetary Authority of Singapore. The tone shifts to urgency and authority.
3. The Hook – The ‘Safe Account’ Illusion
The supposed MAS officer instructs the victim to move money into a “temporary safety account” for protection while an “investigation” is ongoing. Every interaction sounds professional, from background call-centre noise to scripted verification questions.
4. The Extraction – Clean Sweep
Once the transfer is made, communication stops. Victims soon realise that their funds, sometimes their life savings, have been drained into mule accounts and dispersed across digital payment channels.
The brilliance of this scam lies in its institutional layering. By impersonating both a telecom company and the national regulator, the fraudsters created a perfect loop of credibility. Each brand reinforced the other, leaving victims little reason to doubt.
Why Victims Fell for It: The Psychology of Authority
Fraudsters have long understood that fear and trust are two sides of the same coin. This scam exploited both with precision.
1. Authority Bias
When a call appears to come from MAS, Singapore’s financial regulator, victims instinctively comply. MAS is synonymous with legitimacy. Questioning its authority feels almost unthinkable.
2. Urgency and Fear
The narrative of “criminal misuse of your identity” triggers panic. Victims are told their accounts are under investigation, pushing them to act immediately before they “lose everything.”
3. Technical Authenticity
Spoofed numbers, legitimate-sounding scripts, and even hold music similar to M1’s call centre lend realism. The environment feels procedural, not predatory.
4. Empathy and Rapport
Scammers often sound calm and helpful. They “guide” victims through the process, framing transfers as protective, not suspicious.
These psychological levers bypass logic. Even well-educated professionals have fallen victim, proving that awareness alone is not enough when deception feels official.
The Laundering Playbook Behind the Scam
Once the funds leave the victim’s account, they enter a machinery that’s disturbingly efficient: the mule network.
1. Placement
Funds first land in personal accounts controlled by local money mules, individuals who allow access to their bank accounts in exchange for commissions. Many are recruited via Telegram or social media ads promising “easy income.”
2. Layering
Within hours, funds are split and moved:
- To multiple domestic mule accounts under different names.
- Through remittance platforms and e-wallets to obscure trails.
- Occasionally into crypto exchanges for rapid conversion and cross-border transfer.
3. Integration
Once the money has been sufficiently layered, it’s reintroduced into the economy through:
- Purchases of high-value goods such as luxury items or watches.
- Peer-to-peer transfers masked as legitimate business payments.
- Real-estate or vehicle purchases under third-party names.
Each stage widens the distance between the victim’s account and the fraudster’s wallet, making recovery almost impossible.
What begins as a phone scam ends as money laundering in motion, linking consumer fraud directly to compliance risk.
A Surge in Sophisticated Scams
This impersonation scheme is part of a larger wave reshaping Singapore’s fraud landscape:
- Government Agency Impersonations: Earlier in 2025, scammers posed as the Ministry of Health and SingPost, tricking victims into paying fake fees for “medical” or “parcel-related” issues.
- Deepfake CEO and Romance Scams: In March 2025, a Singapore finance director nearly lost US$499,000 after a deepfake video impersonated her CEO during a virtual meeting.
- Job and Mule Recruitment Scams: Thousands of locals have been drawn into acting as unwitting money mules through fake job ads offering “commission-based transfers.”
The lines between fraud, identity theft, and laundering are blurring, powered by social engineering and emerging AI tools.
Singapore’s Response: Technology Meets Policy
In an unprecedented move, Singapore’s banks are introducing a new anti-scam safeguard beginning 15 October 2025.
Accounts with balances above S$50,000 will face a 24-hour hold or review when withdrawals exceed 50% of their total funds in a single day.
The goal is to give banks and customers time to verify large or unusual transfers, especially those made under pressure.
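The safeguard described above reduces to a simple two-condition check. The sketch below assumes "total funds" means the account balance at the start of the day; the actual banks' implementations may define it differently.

```python
def requires_hold(balance_sgd: float, withdrawals_today_sgd: float) -> bool:
    """True when the account triggers the 24-hour hold or review:
    balance above S$50,000 AND same-day withdrawals exceeding
    50% of total funds (assumed here to be the start-of-day balance)."""
    return balance_sgd > 50_000 and withdrawals_today_sgd > 0.5 * balance_sgd
```

For example, a customer with S$100,000 who tries to move S$60,000 in one day would be held for review, while the same transfer from a S$40,000 account would not, since the balance threshold is not met.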
This measure complements other initiatives:
- Anti-Scam Command (ASC): A joint force between the Singapore Police Force, MAS, and IMDA that coordinates intelligence across sectors.
- Digital Platform Code of Practice: Requiring telcos and platforms to share threat information faster.
- Money Mule Crackdowns: Banks and police continue to identify and freeze mule accounts, often through real-time data exchange.
It’s an ecosystem-wide effort that recognises what scammers already exploit: financial crime doesn’t operate in silos.

Red Flags for Banks and Fintechs
To prevent similar losses, financial institutions must detect the digital fingerprints of impersonation scams long before victims report them.
1. Transaction-Level Indicators
- Sudden high-value transfers from retail accounts to new or unrelated beneficiaries.
- Full-balance withdrawals or transfers shortly after a suspicious inbound call pattern (if linked data exists).
- Transfers labelled “safe account,” “temporary holding,” or other unusual memo descriptors.
- Rapid pass-through transactions to accounts showing no consistent economic activity.
2. KYC/CDD Risk Indicators
- Accounts receiving multiple inbound transfers from unrelated individuals, indicating mule behaviour.
- Beneficiaries with no professional link to the victim or stated purpose.
- Customers with recently opened accounts showing immediate high-velocity fund movements.
- Repeated links to shared devices, IPs, or contact numbers across “unrelated” customers.
3. Behavioural Red Flags
- Elderly or mid-income customers attempting large same-day transfers after phone interactions.
- Requests from customers to “verify” MAS or bank staff, a potential sign of ongoing social engineering.
- Multiple failed transfer attempts followed by a successful large payment to a new payee.
For compliance and fraud teams, these clues form the basis of scenario-driven detection, revealing intent even before loss occurs.
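Several of the red flags above can be combined into a simple scenario score. This is a toy sketch: the field names (`memo`, `known_beneficiaries`, `age_days`) and the weights are illustrative assumptions, not a real bank schema or tuned model, but it shows how individually weak signals compound into an actionable risk score.

```python
SAFE_ACCOUNT_TERMS = {"safe account", "temporary holding", "protection account"}

def impersonation_scam_score(tx: dict, account: dict) -> int:
    """Score one outbound transfer against the red flags listed above.
    Higher scores mean more indicators fired; thresholds for escalation
    would be set by the institution."""
    score = 0
    memo = tx.get("memo", "").lower()
    if any(term in memo for term in SAFE_ACCOUNT_TERMS):
        score += 3  # scripted "safe account" language in the memo
    if tx["amount"] >= 0.9 * account["balance"]:
        score += 2  # near-full-balance transfer
    if tx["beneficiary_id"] not in account["known_beneficiaries"]:
        score += 1  # new or unrelated payee
    if account["age_days"] < 30 and account["daily_tx_count"] > 10:
        score += 2  # young account with high-velocity movement (mule pattern)
    return score
```

A transfer that empties the account to a new payee with "safe account" in the memo would score 6 here, well above what any single indicator produces alone, which is the point of scenario-driven detection.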
Why Fragmented Defences Keep Failing
Even with advanced fraud controls, isolated detection still struggles against networked crime.
Each bank sees only what happens within its own perimeter.
Each fintech monitors its own platform.
But scammers move across them all, exploiting the blind spots in between.
That’s the paradox: stronger individual controls, yet weaker collaborative defence.
To close this gap, financial institutions need collaborative intelligence, a way to connect insights across banks, payment platforms, and regulators without breaching data privacy.
How Collaborative Intelligence Changes the Game
That’s precisely where Tookitaki’s AFC Ecosystem comes in.
1. Shared Scenarios, Shared Defence
The AFC Ecosystem brings together compliance experts from across ASEAN and ANZ to contribute and analyse real-world scenarios, including impersonation scams, mule networks, and AI-enabled frauds.
When one member flags a new scam pattern, others gain immediate visibility, turning isolated awareness into collaborative defence.
2. FinCense: Scenario-Driven Detection
Tookitaki’s FinCense platform converts these typologies into actionable detection models.
If a bank in Singapore identifies a “safe account” transfer typology, that logic can instantly be adapted to other institutions through federated learning, without sharing customer data.
It’s collaboration powered by AI, built for privacy.
3. AI Agents for Faster Investigations
FinMate, Tookitaki’s AI copilot, assists investigators by summarising cases, linking entities, and surfacing relationships between mule accounts.
Meanwhile, Smart Disposition automatically narrates alerts, helping analysts focus on risk rather than paperwork.
Together, they accelerate how financial institutions identify, understand, and stop impersonation scams before they scale.
Conclusion: Trust as the New Battleground
Singapore’s latest impersonation scam proves that fraud has evolved. It no longer just exploits systems but the very trust those systems represent.
When fraudsters can sound like regulators and mimic entire call-centre environments, detection must move beyond static rules. It must anticipate scenarios, adapt dynamically, and learn collaboratively.
For banks, fintechs, and regulators, the mission is not just to block transactions. It is to protect trust itself.
Because in the digital economy, trust is the currency everything else depends on.
With collaborative intelligence, real-time detection, and the right technology backbone, that trust can be defended, not just restored after losses but safeguarded before they occur.

How Collective Intelligence Can Transform AML Collaboration Across ASEAN
Financial crime in ASEAN doesn’t recognise borders — yet many of the region’s financial institutions still defend against it as if it does.
Across Southeast Asia, a wave of interconnected fraud, mule, and laundering operations is exploiting the cracks between countries, institutions, and regulatory systems. These crimes are increasingly digital, fast-moving, and transnational, moving illicit funds through a web of banks, payment apps, and remittance providers.
No single institution can see the full picture anymore. But what if they could — collectively?
That’s the promise of collective intelligence: a new model of anti-financial crime collaboration that helps banks and fintechs move from isolated detection to shared insight, from reactive controls to proactive defence.

The Fragmented Fight Against Financial Crime
For decades, financial institutions in ASEAN have built compliance systems in silos — each operating within its own data, its own alerts, and its own definitions of risk.
Yet today’s criminals don’t operate that way.
They leverage networks. They use the same mule accounts to move money across different platforms. They exploit delays in cross-border data visibility. And they design schemes that appear harmless when viewed within one institution’s walls — but reveal clear criminal intent when seen across the ecosystem.
The result is an uneven playing field:
- Fragmented visibility: Each bank sees only part of the customer journey.
- Duplicated effort: Hundreds of institutions investigate similar alerts separately.
- Delayed response: Without early warning signals from peers, detection lags behind crime.
Even with strong internal controls, compliance teams are chasing symptoms, not patterns. The fight is asymmetric — and criminals know it.
Scenario 1: The Cross-Border Money Mule Network
In 2024, regulators in Malaysia, Singapore, and the Philippines jointly uncovered a sophisticated mule network linked to online job scams.
Victims were recruited through social media posts promising part-time work, asked to “process transactions,” and unknowingly became money mules.
Funds were deposited into personal accounts in the Philippines, layered through remittance corridors into Malaysia, and cashed out via ATMs in Singapore — all within 48 hours.
Each financial institution saw only a fragment:
- A remittance provider noticed repeated small transfers.
- A bank saw ATM withdrawals.
- A payment platform flagged a sudden spike in deposits.
Individually, none of these signals triggered escalation.
But collectively, they painted a clear picture of laundering activity.
This is where collective intelligence could have made the difference — if these institutions shared typologies, device fingerprints, or transaction patterns, the scheme could have been detected far earlier.
Scenario 2: The Regional Scam Syndicate
In 2025, Thai authorities dismantled a syndicate that defrauded victims across ASEAN through fake investment platforms.
Funds collected in Thailand were sent to shell firms in Cambodia and the Philippines, then layered through e-wallets linked to unlicensed payment agents in Vietnam.
Despite multiple suspicious activity reports (SARs) being filed, no single institution could connect the dots fast enough.
Each SAR told a piece of the story, but without shared context — names, merchant IDs, or recurring payment routes — the underlying network remained invisible for months.
By the time the link was established, millions had already vanished.
This case reflects a growing truth: isolation is the weakest point in financial crime defence.
Why Traditional AML Systems Fall Short
Most AML and fraud systems across ASEAN were designed for a slower era — when payments were batch-processed, customer bases were domestic, and typologies evolved over years, not weeks.
Today, they struggle against the scale and speed of digital crime. The challenges are familiar across institutions of every size:
- Siloed tools: Transaction monitoring, screening, and onboarding often run on separate platforms.
- Inconsistent entity view: Fraud and AML systems assess the same customer differently.
- Fragmented data: No single source of truth for risk or identity.
- Reactive detection: Alerts are investigated in isolation, without the benefit of peer insights.
The result? High false positives, slow investigations, and missed cross-institutional patterns.
Criminals exploit these blind spots — shifting tactics across borders and platforms faster than detection rules can adapt.

The Case for Collective Intelligence
Collective intelligence offers a new way forward.
It’s the idea that by pooling anonymised insights, institutions can collectively detect threats no single bank could uncover alone. Instead of sharing raw data, banks and fintechs share patterns, typologies, and red flags — learning from each other’s experiences without compromising confidentiality.
In practice, this looks like:
- A payment institution sharing a new mule typology with regional peers.
- A bank leveraging cross-institution risk indicators to validate an alert.
- Multiple FIs aligning detection logic against a shared set of fraud scenarios.
This model turns what used to be isolated vigilance into a networked defence mechanism.
Each participant adds intelligence that strengthens the whole ecosystem.
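Sharing patterns rather than raw data can be made concrete. The sketch below is a hypothetical illustration of what a shareable typology contribution might look like: only red-flag descriptions and irreversibly hashed indicators leave the institution, never names or account numbers. The function and field names are assumptions, not an actual AFC Ecosystem schema.

```python
import hashlib

def shareable_typology(name, red_flags, device_fingerprint=None, salt="community-salt"):
    """Build a contribution containing only patterns and hashed indicators.

    A salted SHA-256 digest lets members match a device or identifier seen
    elsewhere in the network without revealing the underlying value.
    """
    entry = {"typology": name, "red_flags": list(red_flags)}
    if device_fingerprint:
        digest = hashlib.sha256((salt + device_fingerprint).encode()).hexdigest()
        entry["device_hash"] = digest  # matchable across members, not reversible
    return entry
```

Two institutions that hash the same fingerprint with the shared salt get the same digest, so a mule device seen at one bank can be matched at another without either side disclosing customer data.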
How ASEAN Regulators Are Encouraging Collaboration
Collaboration isn’t just an innovation — it’s becoming a regulatory expectation.
- Singapore: MAS has called for greater intelligence-sharing through public–private partnerships and cross-border AML/CFT collaboration.
- Philippines: BSP has partnered with industry associations like Fintech Alliance PH to develop joint typology repositories and scenario-based reporting frameworks.
- Malaysia: BNM’s National Risk Assessment and Financial Sector Blueprint both emphasise collective resilience and information exchange between institutions.
The direction is clear — regulators are recognising that fighting financial crime is a shared responsibility.
AFC Ecosystem: Turning Collaboration into Practice
The AFC Ecosystem brings this vision to life.
It is a community-driven platform where compliance professionals, regulators, and industry experts across ASEAN share real-world financial crime scenarios and red-flag indicators in a structured, secure way.
Each month, members contribute and analyse typologies — from mule recruitment through social media to layering through trade and crypto channels — and receive actionable insights they can operationalise in their own systems.
The result is a collective intelligence engine that grows with every contribution.
When one institution detects a new laundering technique, others gain the early warning before it spreads.
This isn’t about sharing customer data — it’s about sharing knowledge.
FinCense: Turning Shared Intelligence into Detection
While the AFC Ecosystem enables the sharing of typologies and patterns, Tookitaki’s FinCense makes those insights operational.
Through its federated learning model, FinCense can ingest new typologies contributed by the community, simulate them in sandbox environments, and automatically tune thresholds and detection models.
This ensures that once a new scenario is identified within the community, every participating institution can strengthen its defences almost instantly — without sharing sensitive data or compromising privacy.
It’s a practical manifestation of collective defence, where each institution benefits from the learnings of all.
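The privacy-preserving part of this workflow can be sketched in miniature. The code below is a simplified illustration of the federated idea, not FinCense's actual algorithm: each institution tunes a detection threshold for a shared scenario against its own historical data in a local sandbox, and only the chosen threshold (a single number) would ever be reported back, never the data itself.

```python
def tune_threshold_locally(amounts, labels, candidate_thresholds, max_fp_rate=0.05):
    """Pick the lowest amount threshold whose false-positive rate on local
    historical data stays within budget.

    amounts: historical transaction amounts (local, never shared)
    labels:  True where the transaction was confirmed suspicious
    Returns the chosen threshold, or None if no candidate meets the budget.
    """
    best = None
    for t in sorted(candidate_thresholds):
        flags = [a >= t for a in amounts]
        false_positives = sum(1 for f, y in zip(flags, labels) if f and not y)
        negatives = sum(1 for y in labels if not y)
        fp_rate = false_positives / negatives if negatives else 0.0
        if fp_rate <= max_fp_rate:
            best = t  # lowest threshold that satisfies the budget
            break
    return best
```

Each member runs this against its own books, so a scenario contributed in one market can be calibrated everywhere without any customer records crossing institutional boundaries.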
Building the Trust Layer for ASEAN’s Financial System
Trust is the cornerstone of financial stability — and it’s under pressure.
Every scam, laundering scheme, or data breach erodes the confidence that customers, regulators, and institutions place in the system.
To rebuild and sustain that trust, ASEAN’s financial ecosystem needs a new foundation — a trust layer built on shared intelligence, advanced AI, and secure collaboration.
This is where Tookitaki’s approach stands out:
- FinCense delivers real-time, AI-powered detection across AML and fraud.
- The AFC Ecosystem unites institutions through shared typologies and collective learning.
- Together, they form a network of defence that grows stronger with each participant.
The vision isn’t just to comply — it’s to outsmart.
To move from isolated controls to connected intelligence.
To make financial crime not just detectable, but preventable.
Conclusion: The Future of AML in ASEAN is Collective
Financial crime has evolved into a networked enterprise — agile, cross-border, and increasingly digital. The only effective response is a networked defence, built on shared knowledge, collaborative detection, and collective intelligence.
By combining the collaborative power of the AFC Ecosystem with the analytical strength of FinCense, Tookitaki is helping financial institutions across ASEAN stay one step ahead of criminals.
When banks, fintechs, and regulators work together — not just to report but to learn collectively — financial crime loses its greatest advantage: fragmentation.


