Uncovering COVID Fund Laundering Schemes Through Investment Platforms
Criminals have been exploiting governments’ pandemic-related welfare programs ever since countries began supporting their citizens and businesses. If reports are correct, fraudsters have benefited immensely from the US government’s measures to aid businesses affected by the COVID-19 pandemic, and they are using popular online investment platforms as a convenient way to launder money. According to a CNBC report citing law enforcement officials, more than US$100 million in stolen COVID relief funds has flowed through four investment platforms – Robinhood, TD Ameritrade, E-Trade and Fidelity – since Congress passed the CARES Act in March 2020.
The US government’s rapid roll-out of the Paycheck Protection Program (PPP) and the Economic Injury Disaster Loan (EIDL) program has been criticised as the “financial crime bonanza act of 2021”, with both programs marred by problems. The PPP allows eligible small businesses and other organisations to receive loans with a maturity of two years and an interest rate of one per cent. The EIDL program provides economic relief to small businesses experiencing a temporary loss of revenue. Inadequate controls have been cited as enabling possible fraud totalling billions of dollars. The officials noted that new-age digital investment platforms are easy options “to dump the money into by setting up accounts with stolen identities”.
This article explores the schemes fraudsters use to siphon funds from government programs and launder them. We also look at the technology options these investment platforms could use to counter financial crime and ensure robust AML/CFT compliance.
Learn More: Latest AML Fine Figures
Typology involving online investment platforms
The fraud and money laundering scheme works as follows:
- Criminals steal a business owner’s identity and apply for EIDL.
- Once they get the funds, the criminals again use stolen identity information such as date of birth and social security number to open an investment account at an online investment platform.
- In some cases, criminals use synthetic identities – fictitious social security numbers tied to real people – or mules who are part of the scheme.
- Then, criminals would transfer EIDL funds from bank accounts to accounts opened with online investment platforms.
- A short time later, the funds are pulled back out of the online investment accounts using ACH reversals.
CNBC’s sources noted that criminals are taking advantage of the more straightforward sign-up process for online investment accounts, as well as their relative anonymity compared with regular bank accounts. One of the officials cited by CNBC said they are “investigating several cases where Robinhood had been used by criminals to launder PPP funds and EIDL funds”. In one case, a fraudster stole the identity of a local resident and received US$28,000 in EIDL funds, obtained using fraudulent information for a nonexistent business with 60 employees. The fraudster later opened an account with Robinhood, moved most of the money in from a bank account opened with the stolen identity, and then reversed the transfer via ACH three days after opening the account.
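This in-and-out pattern lends itself to a simple monitoring rule. Below is a minimal sketch, assuming simplified, chronologically ordered transaction records; the field names and thresholds are hypothetical, not any platform’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    account_id: str
    amount: float
    tx_type: str          # "ACH_IN" or "ACH_REVERSAL"
    days_since_open: int  # account age in days at transaction time

def flag_new_account_reversals(transfers, max_age_days=14, min_amount=10_000):
    """Flag accounts that receive a large inbound ACH soon after opening
    and later see that inflow reversed (thresholds are illustrative)."""
    large_inflows = set()
    flagged = set()
    for t in transfers:  # assumes chronological order
        if (t.tx_type == "ACH_IN" and t.amount >= min_amount
                and t.days_since_open <= max_age_days):
            large_inflows.add(t.account_id)
        elif t.tx_type == "ACH_REVERSAL" and t.account_id in large_inflows:
            flagged.add(t.account_id)
    return flagged

txns = [Transfer("A1", 28_000, "ACH_IN", 1),
        Transfer("A1", 28_000, "ACH_REVERSAL", 4)]
print(flag_new_account_reversals(txns))  # {'A1'}
```

In production, a rule like this would feed an alert queue alongside device, identity and velocity signals rather than act alone.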
Considerable Amounts Being Diverted to New Avenues
CNBC sources said criminals are using all of these platforms because of the sheer size of the stimulus package and the amount of money involved. Identified fraud across the PPP and EIDL programs amounts to US$84 billion, of which only US$626 million has been seized or forfeited by the Department of Justice, according to the US House Select Subcommittee on the Coronavirus Crisis. The Subcommittee also noted that financial institutions filed over 41,000 Suspicious Activity Reports related to potential PPP and EIDL fraud between May and October 2020 alone.
PPP & EIDL Fraud by Type (chart)
Source: US House Select Subcommittee on the Coronavirus Crisis
The PPP, established by the Coronavirus Aid, Relief, and Economic Security (CARES) Act, was a prime target for fraud due to its limited oversight and easy eligibility criteria. The program's original allocation of US$349 billion was depleted in just 13 days. Once the relief programs’ weak controls became evident, the US Department of the Treasury and the Department of Justice (DOJ) realised they would need to take an aggressive approach to prevent fraud, and they began auditing applications and prosecuting wrongdoers. Charges against those caught by law enforcement include bank fraud, mail fraud, wire fraud, money laundering, and making false statements to financial institutions. In 2020, the DOJ charged over 100 people with fraudulently seeking loans and other payments under the CARES Act.
Importance of Sustainable AML Compliance Programs within Online Investment Platforms
The online investment platforms named in the CNBC report claimed they are “laser-focused on preventing fraud” and, like other financial institutions, have a “range of safeguards and multiple layers of security in place for detecting fraudulent accounts and subsequent transactions”. However, the effectiveness of their AML/CFT measures is in question, especially in the pandemic’s new status quo. To remain trustworthy, these platforms need to mitigate money laundering risks through effective and sustainable compliance programs.
A proper AML Compliance Program enables a financial institution to identify and respond to terrorist financing and money laundering risks by introducing a risk-based approach in various key processes such as Know Your Customer (KYC), Customer Due Diligence (CDD), Screening and Transaction Monitoring.
Tookitaki’s end-to-end AI-powered AML operating system, the Anti-Money Laundering Suite (AMLS), backed by the AFC Ecosystem, is designed to identify hard-to-detect money laundering techniques. Available as a modular service across the three pillars of AML activity – Transaction Monitoring, AML Screening for names and transactions, and Customer Risk Scoring – the solution has the following features to aid in detecting money laundering:
- The world’s most extensive repository of AML typologies, providing real-world AML red flags that keep the underlying machine learning detection models updated with the latest money laundering techniques globally
- Advanced data analytics and dynamic segmentation to detect unusual patterns in transactions
- Risk scoring based on matching against watchlist databases and adverse media (a minimal screening sketch follows this list)
- Visibility on customer linkages and related scores to provide a 360-degree network overview
- Constantly updating risk scoring, which learns from incremental data changes
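As an illustration of the screening pillar, here is a minimal fuzzy name-matching sketch using Python’s standard-library difflib; the watchlist entries and the 0.85 threshold are hypothetical, and production screening engines add phonetic, alias and transliteration matching on top of this idea:

```python
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Shell Holdings Ltd"]  # hypothetical entries

def screen_name(customer_name: str, threshold: float = 0.85):
    """Return watchlist entries whose string similarity to the customer
    name meets the threshold (simple ratio; illustration only)."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, customer_name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

print(screen_name("Ivan Petroff"))  # fuzzy match to "Ivan Petrov"
```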
Our solution has been proven to be highly accurate in identifying high-risk customers and transactions. For more details on our AMLS solution and its ability to identify various money laundering techniques, don't hesitate to contact us.
Our Thought Leadership Guides
Eliminating AI Hallucinations in Financial Crime Detection: A Governance-First Approach
Introduction: When AI Makes It Up — The High Stakes of “Hallucinations” in AML
This is the third instalment in our series, Governance-First AI Strategy: The Future of Financial Crime Detection.
- In Part 1, we explored the governance crisis created by compliance-heavy frameworks.
- In Part 2, we highlighted how Singapore’s AI Verify program is pioneering independent validation as the new standard.
In this post, we turn to one of the most urgent challenges in AI-driven compliance: AI hallucinations.
Imagine an AML analyst starting their day, greeted by a queue of urgent alerts. One, flagged as “high risk,” is generated by the newest AI tool. But as the analyst investigates, it becomes clear that some transactions cited by the AI never actually happened. The explanation, while plausible, is fabricated: a textbook case of AI hallucination.
Time is wasted. Trust in the AI system is shaken. And worse, while chasing a phantom, a genuine criminal scheme may slip through.
As artificial intelligence becomes the core engine for financial crime detection, the problem of hallucinations, outputs not grounded in real data or facts, poses a serious threat to compliance, regulatory trust, and operational efficiency.
What Are AI Hallucinations and Why Are They So Risky in Finance?
AI hallucinations occur when a model produces statements or explanations that sound correct but are not grounded in real data.
In financial crime compliance, this can lead to:
- Wild goose chases: Analysts waste valuable time chasing non-existent threats.
- Regulatory risk: Fabricated outputs increase the chance of audit failures or penalties.
- Customer harm: Legitimate clients may be incorrectly flagged, damaging trust and relationships.
Generative AI systems are especially vulnerable. Designed to create coherent responses, they can unintentionally invent entire scenarios. In finance, where every “fact” matters to reputations, livelihoods, and regulatory standing, there is no room for guesswork.

Why Do AI Hallucinations Happen?
The drivers are well understood:
- Gaps or bias in training data: Incomplete or outdated records force models to “fill in the blanks” with speculation.
- Overly creative design: Generative models excel at narrative-building but can fabricate plausible-sounding explanations without constraints.
- Ambiguous prompts or unchecked logic: Vague inputs encourage speculation, diverting the model from factual data.
Real-World Misfire: A Costly False Alarm
At a large bank, an AI-powered monitoring tool flagged accounts for “suspicious round-dollar transactions,” producing a detailed narrative about potential laundering.
The problem? Those transactions never occurred.
The AI had hallucinated the explanation, stitching together fragments of unrelated historical data. The result: a week-long audit, wasted resources, and an urgent reminder of the need for stronger governance over AI outputs.
A Governance-First Playbook to Stop Hallucinations
Forward-looking compliance teams are embedding anti-hallucination measures into their AI governance frameworks. Key practices include:
1. Rigorous, Real-World Model Training
AI models must be trained on thousands of verified AML cases, including edge cases and emerging typologies. Exposure to operational complexity reduces speculative outputs. At Tookitaki, scenario-driven drills, such as deepfake scam simulations and laundering typologies, continuously stress-test the system to identify risks before they reach investigators or regulators.
2. Evidence-Based Outputs, Not Vague Alerts
Traditional systems often produce alerts like: “Possible layering activity detected in account X.” Analysts are left to guess at the reasoning. Governance-first systems enforce data-anchored outputs: “Layering risk detected: five transactions on 20/06/25 match FATF typology #3. See attached evidence.”
This creates traceable, auditable insights, building efficiency and trust.
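To make “data-anchored” concrete, here is a minimal sketch of an alert-narrative builder that can only cite transactions present in its evidence set; the structure and field names are hypothetical, not Tookitaki’s API:

```python
from dataclasses import dataclass

@dataclass
class Txn:
    txn_id: str
    date: str
    amount: float

def build_alert(account: str, matched: list[Txn], typology: str) -> str:
    """Compose an alert narrative strictly from verified transaction
    records, citing each one, so nothing in the text is fabricated."""
    if not matched:
        raise ValueError("No evidence: refuse to generate a narrative.")
    citations = ", ".join(f"{t.txn_id} ({t.date}, {t.amount:,.2f})" for t in matched)
    return (f"{typology} risk detected in account {account}: "
            f"{len(matched)} matching transactions. Evidence: {citations}.")

print(build_alert("X", [Txn("T-001", "2025-06-20", 9_900.0),
                        Txn("T-002", "2025-06-20", 9_850.0)], "Layering"))
```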
3. Human-in-the-Loop (HITL) Validation
Even advanced models require human oversight. High-stakes outputs, such as risk narratives or new typology detections, must pass through expert validation. At Tookitaki, HITL ensures:
- Analytical transparency
- Reduced false positives
- No unexplained “black box” reasoning
4. Prompt Engineering and Retrieval-Augmented Generation (RAG)
Ambiguity invites hallucinations. Precision prompts, combined with RAG techniques, ensure outputs are tied to verified databases and transaction logs, making fabrication nearly impossible.
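A minimal sketch of the retrieval-augmented pattern follows; the record store and prompt wording are hypothetical stand-ins, and a production system would pair a vector store with an actual LLM client rather than the stub shown:

```python
# Minimal RAG pattern: retrieve verified records first, then constrain
# the model to answer only from them (prompt construction shown; the
# LLM call itself is omitted for brevity).
VERIFIED_TXNS = {
    "acct-42": ["2025-06-20 wire 9,900 to ABC Ltd",
                "2025-06-20 wire 9,850 to ABC Ltd"],
}

def retrieve(account_id: str) -> list[str]:
    """Fetch only verified transaction records for the account."""
    return VERIFIED_TXNS.get(account_id, [])

def grounded_prompt(account_id: str, question: str) -> str:
    context = "\n".join(retrieve(account_id)) or "NO RECORDS FOUND"
    return (
        "Answer using ONLY the records below. If the records do not "
        "support an answer, say 'insufficient evidence'.\n"
        f"Records:\n{context}\n\nQuestion: {question}"
    )

print(grounded_prompt("acct-42", "Is there structuring activity?"))
```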
Spotlight: Tookitaki’s Precision-First AI Philosophy
Tookitaki’s compliance platform is built on a governance-first architecture that treats hallucination prevention as a measurable objective.
- Scenario-Driven Simulations: Rare typologies and evolving crime patterns are continuously tested to surface potential weaknesses before deployment.
- Community-Powered Validation: Detection logic is refined in real time through feedback from a global network of financial crime experts.
- Mandatory Fact Citations: Every AI-generated narrative is backed by case data and audit references, accelerating compliance reviews and strengthening regulatory confidence.
At Tookitaki, we recognise that no AI system can be infallible. As leading research highlights, some real-world questions are inherently unanswerable. That is why our goal is not absolute perfection, but precision-driven AI that makes hallucinations statistically negligible and fully traceable — delivering factual integrity at scale.

Conclusion: Factual Integrity Is the Foundation of Trust
Eliminating hallucinations is not just a technical safeguard. It is a governance imperative. Compliance teams that embed evidence-based outputs, rigorous training, human-in-the-loop validation, and retrieval-anchored design will not only reduce wasted effort but also strengthen regulatory confidence and market reputation.
Key Takeaways from Part 3:
- AI hallucinations erode trust, waste resources, and expose firms to regulatory risk.
- Governance-first frameworks prevent hallucinations by enforcing evidence-backed, auditable outputs.
- Zero-hallucination AI is not optional. It is the foundation of responsible financial crime detection.
Are you asking your AI to show its data?
If not, you may be chasing ghosts.
In the next blog, we will explore how building an integrated, agentic AI strategy, linking model creation to real-time risk detection, can shift compliance from reactive to resilient.

When MAS Calls and It’s Not MAS: Inside Singapore’s Latest Impersonation Scam
A phone rings in Singapore.
The caller ID flashes the name of a trusted brand, M1 Limited.
A stern voice claims to be from the Monetary Authority of Singapore (MAS).
“There’s been suspicious activity linked to your identity. To protect your money, we’ll need you to transfer your funds to a safe account immediately.”
For at least 13 Singaporeans since September 2025, this chilling scenario wasn’t fiction. It was the start of an impersonation scam that cost victims more than S$360,000 in a matter of weeks.
Fraudsters had merged two of Singapore’s most trusted institutions, M1 and MAS, into one seamless illusion. And it worked.
The episode underscores a deeper truth: as digital trust grows, it also becomes a weapon. Scammers no longer just mimic banks or brands. They now borrow institutional credibility itself.

The Anatomy of the Scam
According to police advisories, this new impersonation fraud unfolds in a deceptively simple series of steps:
- The Setup – A Trusted Name on Caller ID
Victims receive calls from numbers spoofed to appear as M1’s customer service line. The scammers claim that the victim’s account or personal data has been compromised and is being used for illegal activity.
- The Transfer – The MAS Connection
Mid-call, the victim is redirected to another “officer” who introduces themselves as an investigator from the Monetary Authority of Singapore. The tone shifts to urgency and authority.
- The Hook – The ‘Safe Account’ Illusion
The supposed MAS officer instructs the victim to move money into a “temporary safety account” for protection while an “investigation” is ongoing. Every interaction sounds professional, from background call-centre noise to scripted verification questions.
- The Extraction – Clean Sweep
Once the transfer is made, communication stops. Victims soon realise that their funds, sometimes their life savings, have been drained into mule accounts and dispersed across digital payment channels.
The brilliance of this scam lies in its institutional layering. By impersonating both a telecom company and the national regulator, the fraudsters created a perfect loop of credibility. Each brand reinforced the other, leaving victims little reason to doubt.
Why Victims Fell for It: The Psychology of Authority
Fraudsters have long understood that fear and trust are two sides of the same coin. This scam exploited both with precision.
1. Authority Bias
When a call appears to come from MAS, Singapore’s financial regulator, victims instinctively comply. MAS is synonymous with legitimacy. Questioning its authority feels almost unthinkable.
2. Urgency and Fear
The narrative of “criminal misuse of your identity” triggers panic. Victims are told their accounts are under investigation, pushing them to act immediately before they “lose everything.”
3. Technical Authenticity
Spoofed numbers, legitimate-sounding scripts, and even hold music similar to M1’s call centre lend realism. The environment feels procedural, not predatory.
4. Empathy and Rapport
Scammers often sound calm and helpful. They “guide” victims through the process, framing transfers as protective, not suspicious.
These psychological levers bypass logic. Even well-educated professionals have fallen victim, proving that awareness alone is not enough when deception feels official.
The Laundering Playbook Behind the Scam
Once the funds leave the victim’s account, they enter machinery that’s disturbingly efficient: the mule network.
1. Placement
Funds first land in personal accounts controlled by local money mules, individuals who allow access to their bank accounts in exchange for commissions. Many are recruited via Telegram or social media ads promising “easy income.”
2. Layering
Within hours, funds are split and moved:
- To multiple domestic mule accounts under different names.
- Through remittance platforms and e-wallets to obscure trails.
- Occasionally into crypto exchanges for rapid conversion and cross-border transfer.
3. Integration
Once the money has been sufficiently layered, it’s reintroduced into the economy through:
- Purchases of high-value goods such as luxury items or watches.
- Peer-to-peer transfers masked as legitimate business payments.
- Real-estate or vehicle purchases under third-party names.
Each stage widens the distance between the victim’s account and the fraudster’s wallet, making recovery almost impossible.
What begins as a phone scam ends as money laundering in motion, linking consumer fraud directly to compliance risk.
A Surge in Sophisticated Scams
This impersonation scheme is part of a larger wave reshaping Singapore’s fraud landscape:
- Government Agency Impersonations: Earlier in 2025, scammers posed as the Ministry of Health and SingPost, tricking victims into paying fake fees for “medical” or “parcel-related” issues.
- Deepfake CEO and Romance Scams: In March 2025, a Singapore finance director nearly lost US$499,000 after a deepfake video impersonated her CEO during a virtual meeting.
- Job and Mule Recruitment Scams: Thousands of locals have been drawn into acting as unwitting money mules through fake job ads offering “commission-based transfers.”
The lines between fraud, identity theft, and laundering are blurring, powered by social engineering and emerging AI tools.
Singapore’s Response: Technology Meets Policy
In an unprecedented move, Singapore’s banks are introducing a new anti-scam safeguard beginning 15 October 2025.
Accounts with balances above S$50,000 will face a 24-hour hold or review when withdrawals exceed 50% of their total funds in a single day.
The goal is to give banks and customers time to verify large or unusual transfers, especially those made under pressure.
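Expressed as code, the announced safeguard reduces to a simple rule. The sketch below illustrates the published parameters only; it is not any bank’s actual implementation, and the field names are hypothetical:

```python
from dataclasses import dataclass

HOLD_BALANCE_FLOOR = 50_000  # S$: rule applies above this balance
HOLD_OUTFLOW_RATIO = 0.50    # fraction of funds withdrawn in one day
HOLD_HOURS = 24

@dataclass
class Account:
    balance_start_of_day: float
    withdrawn_today: float

def requires_hold(acct: Account) -> bool:
    """Apply the announced anti-scam safeguard: accounts above S$50,000
    face a 24-hour hold/review when daily withdrawals exceed 50% of funds."""
    if acct.balance_start_of_day <= HOLD_BALANCE_FLOOR:
        return False
    return acct.withdrawn_today > HOLD_OUTFLOW_RATIO * acct.balance_start_of_day

print(requires_hold(Account(80_000, 45_000)))  # True: 56% withdrawn in one day
```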
This measure complements other initiatives:
- Anti-Scam Command (ASC): A joint force between the Singapore Police Force, MAS, and IMDA that coordinates intelligence across sectors.
- Digital Platform Code of Practice: Requiring telcos and platforms to share threat information faster.
- Money Mule Crackdowns: Banks and police continue to identify and freeze mule accounts, often through real-time data exchange.
It’s an ecosystem-wide effort that recognises what scammers already exploit: financial crime doesn’t operate in silos.

Red Flags for Banks and Fintechs
To prevent similar losses, financial institutions must detect the digital fingerprints of impersonation scams long before victims report them.
1. Transaction-Level Indicators
- Sudden high-value transfers from retail accounts to new or unrelated beneficiaries.
- Full-balance withdrawals or transfers shortly after a suspicious inbound call pattern (if linked data exists).
- Transfers labelled “safe account,” “temporary holding,” or other unusual memo descriptors.
- Rapid pass-through transactions to accounts showing no consistent economic activity.
2. KYC/CDD Risk Indicators
- Accounts receiving multiple inbound transfers from unrelated individuals, indicating mule behaviour.
- Beneficiaries with no professional link to the victim or stated purpose.
- Customers with recently opened accounts showing immediate high-velocity fund movements.
- Repeated links to shared devices, IPs, or contact numbers across “unrelated” customers.
3. Behavioural Red Flags
- Elderly or mid-income customers attempting large same-day transfers after phone interactions.
- Requests from customers to “verify” MAS or bank staff, a potential sign of ongoing social engineering.
- Multiple failed transfer attempts followed by a successful large payment to a new payee.
For compliance and fraud teams, these clues form the basis of scenario-driven detection, revealing intent even before loss occurs.
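Some of these indicators translate almost directly into monitoring logic. The sketch below is a minimal illustration, with hypothetical memo keywords and thresholds:

```python
from dataclasses import dataclass

SUSPECT_MEMOS = ("safe account", "temporary holding", "safety account")

@dataclass
class Payment:
    account_id: str
    amount: float
    memo: str
    inflow: bool                    # True = money in, False = money out
    hours_since_inflow: float = 0.0

def memo_red_flag(p: Payment) -> bool:
    """Flag transfers whose memo matches known scam phrasing."""
    return any(k in p.memo.lower() for k in SUSPECT_MEMOS)

def pass_through_red_flag(p: Payment, max_hours: float = 24.0,
                          min_amount: float = 20_000) -> bool:
    """Flag large outflows leaving within hours of arriving, the typical
    mule 'pass-through' pattern (thresholds are illustrative)."""
    return (not p.inflow and p.amount >= min_amount
            and p.hours_since_inflow <= max_hours)

p = Payment("B7", 45_000, "transfer to safe account", False, 3.0)
print(memo_red_flag(p), pass_through_red_flag(p))  # True True
```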
Why Fragmented Defences Keep Failing
Even with advanced fraud controls, isolated detection still struggles against networked crime.
Each bank sees only what happens within its own perimeter.
Each fintech monitors its own platform.
But scammers move across them all, exploiting the blind spots in between.
That’s the paradox: stronger individual controls, yet weaker collaborative defence.
To close this gap, financial institutions need collaborative intelligence, a way to connect insights across banks, payment platforms, and regulators without breaching data privacy.
How Collaborative Intelligence Changes the Game
That’s precisely where Tookitaki’s AFC Ecosystem comes in.
1. Shared Scenarios, Shared Defence
The AFC Ecosystem brings together compliance experts from across ASEAN and ANZ to contribute and analyse real-world scenarios, including impersonation scams, mule networks, and AI-enabled frauds.
When one member flags a new scam pattern, others gain immediate visibility, turning isolated awareness into collaborative defence.
2. FinCense: Scenario-Driven Detection
Tookitaki’s FinCense platform converts these typologies into actionable detection models.
If a bank in Singapore identifies a “safe account” transfer typology, that logic can instantly be adapted to other institutions through federated learning, without sharing customer data.
It’s collaboration powered by AI, built for privacy.
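In a federated setup, each institution trains on its own data and shares only model parameters. The toy sketch below shows plain federated averaging on a two-weight model; it illustrates the general technique, not Tookitaki’s specific algorithm:

```python
# Minimal federated averaging: each bank trains locally and shares
# only model weights; a coordinator averages them. Toy illustration.
from statistics import fmean

def local_update(weights, local_gradient, lr=0.1):
    """One local training step on a bank's private data (stubbed gradient)."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(all_weights):
    """Average parameter vectors from participating institutions."""
    return [fmean(ws) for ws in zip(*all_weights)]

global_w = [0.0, 0.0]
bank_grads = [[0.2, -0.1], [0.4, 0.0], [0.1, -0.3]]  # hypothetical
local_ws = [local_update(global_w, g) for g in bank_grads]
global_w = federated_average(local_ws)
print(global_w)  # updated shared model; no raw data left any bank
```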
3. AI Agents for Faster Investigations
FinMate, Tookitaki’s AI copilot, assists investigators by summarising cases, linking entities, and surfacing relationships between mule accounts.
Meanwhile, Smart Disposition automatically narrates alerts, helping analysts focus on risk rather than paperwork.
Together, they accelerate how financial institutions identify, understand, and stop impersonation scams before they scale.
Conclusion: Trust as the New Battleground
Singapore’s latest impersonation scam proves that fraud has evolved. It no longer just exploits systems but the very trust those systems represent.
When fraudsters can sound like regulators and mimic entire call-centre environments, detection must move beyond static rules. It must anticipate scenarios, adapt dynamically, and learn collaboratively.
For banks, fintechs, and regulators, the mission is not just to block transactions. It is to protect trust itself.
Because in the digital economy, trust is the currency everything else depends on.
With collaborative intelligence, real-time detection, and the right technology backbone, that trust can be defended, not just restored after losses but safeguarded before they occur.

How Collective Intelligence Can Transform AML Collaboration Across ASEAN
Financial crime in ASEAN doesn’t recognise borders — yet many of the region’s financial institutions still defend against it as if it does.
Across Southeast Asia, a wave of interconnected fraud, mule, and laundering operations is exploiting the cracks between countries, institutions, and regulatory systems. These crimes are increasingly digital, fast-moving, and transnational, moving illicit funds through a web of banks, payment apps, and remittance providers.
No single institution can see the full picture anymore. But what if they could — collectively?
That’s the promise of collective intelligence: a new model of anti-financial crime collaboration that helps banks and fintechs move from isolated detection to shared insight, from reactive controls to proactive defence.

The Fragmented Fight Against Financial Crime
For decades, financial institutions in ASEAN have built compliance systems in silos — each operating within its own data, its own alerts, and its own definitions of risk.
Yet today’s criminals don’t operate that way.
They leverage networks. They use the same mule accounts to move money across different platforms. They exploit delays in cross-border data visibility. And they design schemes that appear harmless when viewed within one institution’s walls — but reveal clear criminal intent when seen across the ecosystem.
The result is an uneven playing field:
- Fragmented visibility: Each bank sees only part of the customer journey.
- Duplicated effort: Hundreds of institutions investigate similar alerts separately.
- Delayed response: Without early warning signals from peers, detection lags behind crime.
Even with strong internal controls, compliance teams are chasing symptoms, not patterns. The fight is asymmetric — and criminals know it.
Scenario 1: The Cross-Border Money Mule Network
In 2024, regulators in Malaysia, Singapore, and the Philippines jointly uncovered a sophisticated mule network linked to online job scams.
Victims were recruited through social media posts promising part-time work, asked to “process transactions,” and unknowingly became money mules.
Funds were deposited into personal accounts in the Philippines, layered through remittance corridors into Malaysia, and cashed out via ATMs in Singapore — all within 48 hours.
Each financial institution saw only a fragment:
- A remittance provider noticed repeated small transfers.
- A bank saw ATM withdrawals.
- A payment platform flagged a sudden spike in deposits.
Individually, none of these signals triggered escalation.
But collectively, they painted a clear picture of laundering activity.
This is where collective intelligence could have made the difference — if these institutions shared typologies, device fingerprints, or transaction patterns, the scheme could have been detected far earlier.
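One privacy-preserving way to compare such indicators is to exchange salted hashes instead of raw values. A minimal sketch, with simplified salt handling (real deployments would rely on agreed key management or private set intersection):

```python
import hashlib

def fingerprint_token(raw_value: str, shared_salt: str) -> str:
    """Hash a device fingerprint or account identifier so institutions
    can compare tokens without revealing the underlying value."""
    return hashlib.sha256((shared_salt + raw_value).encode()).hexdigest()

# Each institution tokenises its own indicators with the agreed salt...
bank_a = {fingerprint_token(v, "2024-mule-probe") for v in ["dev-11", "dev-42"]}
bank_b = {fingerprint_token(v, "2024-mule-probe") for v in ["dev-42", "dev-77"]}

# ...and only the overlap of tokens is revealed, flagging shared devices.
print(len(bank_a & bank_b))  # 1 -> a device seen at both institutions
```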
Scenario 2: The Regional Scam Syndicate
In 2025, Thai authorities dismantled a syndicate that defrauded victims across ASEAN through fake investment platforms.
Funds collected in Thailand were sent to shell firms in Cambodia and the Philippines, then layered through e-wallets linked to unlicensed payment agents in Vietnam.
Despite multiple suspicious activity reports (SARs) being filed, no single institution could connect the dots fast enough.
Each SAR told a piece of the story, but without shared context — names, merchant IDs, or recurring payment routes — the underlying network remained invisible for months.
By the time the link was established, millions had already vanished.
This case reflects a growing truth: isolation is the weakest point in financial crime defence.
Why Traditional AML Systems Fall Short
Most AML and fraud systems across ASEAN were designed for a slower era — when payments were batch-processed, customer bases were domestic, and typologies evolved over years, not weeks.
Today, they struggle against the scale and speed of digital crime. The challenges echo what community banks face elsewhere:
- Siloed tools: Transaction monitoring, screening, and onboarding often run on separate platforms.
- Inconsistent entity view: Fraud and AML systems assess the same customer differently.
- Fragmented data: No single source of truth for risk or identity.
- Reactive detection: Alerts are investigated in isolation, without the benefit of peer insights.
The result? High false positives, slow investigations, and missed cross-institutional patterns.
Criminals exploit these blind spots — shifting tactics across borders and platforms faster than detection rules can adapt.

The Case for Collective Intelligence
Collective intelligence offers a new way forward.
It’s the idea that by pooling anonymised insights, institutions can collectively detect threats no single bank could uncover alone. Instead of sharing raw data, banks and fintechs share patterns, typologies, and red flags — learning from each other’s experiences without compromising confidentiality.
In practice, this looks like:
- A payment institution sharing a new mule typology with regional peers.
- A bank leveraging cross-institution risk indicators to validate an alert.
- Multiple FIs aligning detection logic against a shared set of fraud scenarios.
This model turns what used to be isolated vigilance into a networked defence mechanism.
Each participant adds intelligence that strengthens the whole ecosystem.
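In practice, a shared typology can be a structured record containing pattern logic and red flags, with no customer identifiers at all. A hypothetical, minimal schema sketch:

```python
from dataclasses import dataclass, field

@dataclass
class SharedTypology:
    """A typology record as it might be exchanged between institutions:
    pattern logic and red flags only, never customer data."""
    name: str
    region: str
    red_flags: list[str] = field(default_factory=list)
    detection_hint: str = ""

job_scam_mules = SharedTypology(
    name="Job-scam mule network",
    region="ASEAN",
    red_flags=[
        "Many small inbound remittances from unrelated senders",
        "Same-day ATM cash-out of pooled funds",
    ],
    detection_hint="inflow_count >= 10 AND cashout_within_hours <= 24",
)
```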
How ASEAN Regulators Are Encouraging Collaboration
Collaboration isn’t just an innovation — it’s becoming a regulatory expectation.
- Singapore: MAS has called for greater intelligence-sharing through public–private partnerships and cross-border AML/CFT collaboration.
- Philippines: BSP has partnered with industry associations like Fintech Alliance PH to develop joint typology repositories and scenario-based reporting frameworks.
- Malaysia: BNM’s National Risk Assessment and Financial Sector Blueprint both emphasise collective resilience and information exchange between institutions.
The direction is clear — regulators are recognising that fighting financial crime is a shared responsibility.
AFC Ecosystem: Turning Collaboration into Practice
The AFC Ecosystem brings this vision to life.
It is a community-driven platform where compliance professionals, regulators, and industry experts across ASEAN share real-world financial crime scenarios and red-flag indicators in a structured, secure way.
Each month, members contribute and analyse typologies — from mule recruitment through social media to layering through trade and crypto channels — and receive actionable insights they can operationalise in their own systems.
The result is a collective intelligence engine that grows with every contribution.
When one institution detects a new laundering technique, others gain the early warning before it spreads.
This isn’t about sharing customer data — it’s about sharing knowledge.
FinCense: Turning Shared Intelligence into Detection
While the AFC Ecosystem enables the sharing of typologies and patterns, Tookitaki’s FinCense makes those insights operational.
Through its federated learning model, FinCense can ingest new typologies contributed by the community, simulate them in sandbox environments, and automatically tune thresholds and detection models.
This ensures that once a new scenario is identified within the community, every participating institution can strengthen its defences almost instantly — without sharing sensitive data or compromising privacy.
It’s a practical manifestation of collective defence, where each institution benefits from the learnings of all.
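Sandbox threshold tuning can be as simple as replaying a scenario’s risk scores against labelled historical outcomes and picking the best-performing cutoff. A toy sketch with hypothetical data:

```python
def tune_threshold(scores, labels, candidates=(0.5, 0.6, 0.7, 0.8)):
    """Replay a scenario's risk scores against labelled outcomes and
    choose the threshold maximising F1 (toy tuning loop)."""
    def f1(th):
        tp = sum(s >= th and y for s, y in zip(scores, labels))
        fp = sum(s >= th and not y for s, y in zip(scores, labels))
        fn = sum(s < th and y for s, y in zip(scores, labels))
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return max(candidates, key=f1)

scores = [0.91, 0.55, 0.72, 0.40, 0.88]    # hypothetical scenario scores
labels = [True, False, True, False, True]  # confirmed outcomes
print(tune_threshold(scores, labels))      # 0.6 on this toy data
```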
Building the Trust Layer for ASEAN’s Financial System
Trust is the cornerstone of financial stability — and it’s under pressure.
Every scam, laundering scheme, or data breach erodes the confidence that customers, regulators, and institutions place in the system.
To rebuild and sustain that trust, ASEAN’s financial ecosystem needs a new foundation — a trust layer built on shared intelligence, advanced AI, and secure collaboration.
This is where Tookitaki’s approach stands out:
- FinCense delivers real-time, AI-powered detection across AML and fraud.
- The AFC Ecosystem unites institutions through shared typologies and collective learning.
- Together, they form a network of defence that grows stronger with each participant.
The vision isn’t just to comply — it’s to outsmart.
To move from isolated controls to connected intelligence.
To make financial crime not just detectable, but preventable.
Conclusion: The Future of AML in ASEAN is Collective
Financial crime has evolved into a networked enterprise — agile, cross-border, and increasingly digital. The only effective response is a networked defence, built on shared knowledge, collaborative detection, and collective intelligence.
By combining the collaborative power of the AFC Ecosystem with the analytical strength of FinCense, Tookitaki is helping financial institutions across ASEAN stay one step ahead of criminals.
When banks, fintechs, and regulators work together — not just to report but to learn collectively — financial crime loses its greatest advantage: fragmentation.
