Blog

The Transformative Role of Generative AI in Financial Crime Compliance

Anup Gunjan
26 September 2024
10 min read

When we look at the financial crime landscape today, it’s clear that we’re on the brink of a significant evolution. The traditional methods of combating money laundering and fraud, which have relied heavily on rule-based systems and static models, are rapidly being eclipsed by the transformative potential of artificial intelligence (AI) and machine learning (ML). Over the last two decades, these technologies have fundamentally changed how we identify and respond to illicit activities. But as we look into the next few years, a new tech transformation is set to reshape the field: generative AI.

This isn't just another technological upgrade—it’s a paradigm shift. Generative AI is poised to redefine the rules of the game, offering unprecedented capabilities that go beyond the detection and prevention tools we’ve relied on so far. While ML has already improved our ability to spot suspicious patterns, generative AI promises to tackle more sophisticated threats, adapt faster to evolving tactics, and bring a new level of intelligence to financial crime compliance.

But with this promise comes a critical question: How exactly will generative AI, and specifically Large Language Models (LLMs), transform financial crime compliance? The answer lies not just in its advanced capabilities but in its potential to fundamentally alter the way we approach detection and prevention. As we prepare for this next wave of innovation, it’s essential to understand the opportunities—and the challenges—that come with it.

Generative AI in Financial Crime Compliance

When it comes to leveraging LLMs in financial crime compliance, the possibilities are profound. Let’s break down some of the key areas where this technology can make a real impact:

  1. Data Generation and Augmentation: LLMs have the unique ability to create synthetic data that closely mirrors real-world financial transactions. This isn’t just about filling in gaps; it’s about creating a rich, diverse dataset that can be used to train machine learning models more effectively. This is particularly valuable for fintech startups that may not have extensive historical data to draw from. With generative AI, they can test and deploy robust financial crime solutions while preserving the privacy of sensitive information. It’s like having a virtual data lab that’s always ready for experimentation.
  2. Unsupervised Anomaly Detection: Traditional systems often struggle to catch the nuanced, sophisticated patterns of modern financial crime. Large language models, however, can learn the complex behaviours of legitimate transactions and use this understanding as a baseline. When a new transaction deviates from this learned norm, it raises a red flag. These models can detect subtle irregularities that traditional rule-based systems or simpler machine learning algorithms might overlook, providing a more refined, proactive defence against potential fraud or money laundering.
  3. Automating the Investigation Process: Compliance professionals know the grind of sifting through endless alerts and drafting investigation reports. Generative AI offers a smarter way forward. By automating the creation of summaries, reports, and investigation notes, it frees up valuable time for compliance teams to focus on what really matters: strategic decision-making and complex case analysis. This isn’t just about making things faster—it’s about enabling a deeper, more insightful investigative process.
  4. Scenario Simulation and Risk Assessment: Generative AI can simulate countless financial transaction scenarios, assessing their risk levels based on historical data and regulatory requirements. This capability allows financial institutions to anticipate and prepare for a wide range of potential threats. It’s not just about reacting to crime; it’s about being ready for what comes next, armed with the insights needed to stay one step ahead.
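As a minimal sketch of the synthetic-data idea, the sampling step can be illustrated with a toy Gaussian model standing in for a full generative model. The feature names and numbers below are invented for illustration; a production system would fit a VAE or GAN to real transaction data instead:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for real transaction features (amount, hour-of-day, merchant risk
# score); in practice these would come from historical data.
real = rng.normal(loc=[120.0, 14.0, 0.2], scale=[40.0, 4.0, 0.1], size=(500, 3))

# Fit a simple multivariate Gaussian to the real data...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and draw synthetic transactions from it. A generative model would capture
# far richer structure, but the train-on-synthetic workflow is the same.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print(synthetic.shape)  # (1000, 3)
```

The synthetic sample can then be used to train or stress-test downstream detection models without exposing any real customer records.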

To truly appreciate the transformative power of generative AI, we need to take a closer look at two critical areas: anomaly detection and explainability. These are the foundations upon which the future of financial crime compliance will be built.

Anomaly detection

One of the perennial challenges in fraud detection is the reliance on labelled data, where traditional machine learning models need clear examples of both legitimate and fraudulent transactions to learn from. This can be a significant bottleneck. After all, obtaining such labelled data—especially for emerging or sophisticated fraud schemes—is not only time-consuming but also often incomplete. This is where generative AI steps in, offering a fresh perspective with its capability for unsupervised anomaly detection, bypassing the need for labelled datasets.

To understand how this works, let’s break it down.

Traditional Unsupervised ML Approach

Typically, financial institutions using unsupervised machine learning might deploy clustering algorithms like k-means. Here’s how it works: transactions are grouped into clusters based on various features—transaction amount, time of day, location, and so on. Anomalies are then identified as transactions that don’t fit neatly into any of these clusters or exhibit characteristics that deviate significantly from the norm.
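A compact sketch of this approach, assuming invented features (amount, hour-of-day) and scikit-learn's k-means: fit clusters on historical transactions, then flag new transactions whose distance to the nearest centroid exceeds a percentile threshold learned from history. Features would normally be standardized first:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Historical transactions used to learn the clusters (amount, hour-of-day).
history = rng.normal(loc=[100.0, 12.0], scale=[20.0, 3.0], size=(500, 2))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(history)

# Threshold: 99th percentile of distance-to-nearest-centroid on history.
hist_dist = np.min(km.transform(history), axis=1)
threshold = np.quantile(hist_dist, 0.99)

# Score two new transactions: one routine, one extreme.
new = np.array([[105.0, 13.0], [950.0, 3.0]])
new_dist = np.min(km.transform(new), axis=1)
print(new_dist > threshold)  # the extreme transaction is flagged
```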

While this method has its merits, it can struggle to keep up with the complexity of modern fraud patterns. What happens when the anomalies are subtle or when legitimate variations are mistakenly flagged? The result is a system that can’t always distinguish between a genuine threat and a benign fluctuation.

Generative AI Approach

Generative AI offers a more nuanced solution. Consider the use of a variational autoencoder (VAE). Instead of relying on predefined labels, a VAE learns the underlying distribution of normal transactions by reconstructing them during training. Think of it as the model teaching itself what “normal” looks like. As it learns, the VAE can even generate synthetic transactions that closely resemble real ones, effectively creating a virtual landscape of typical behaviour.

Once trained, this model becomes a powerful tool for anomaly detection. Here’s how: every incoming transaction is reconstructed by the VAE and compared to its original version. Transactions that deviate significantly, exhibiting high reconstruction errors, are flagged as potential anomalies. It’s like having a highly sensitive radar that picks up on the slightest deviations from the expected course. Moreover, by generating synthetic transactions and comparing them to real ones, the model can spot discrepancies that might otherwise go unnoticed.
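The reconstruction-error scoring loop can be illustrated with a much simpler stand-in for the VAE: a linear autoencoder built from PCA. The logic is the same (reconstruct, measure error, threshold), and the correlated features below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# "Normal" transactions with a learned relationship: fee ~ 1% of amount.
amount = rng.normal(200.0, 50.0, size=1000)
normal = np.column_stack([
    amount,
    amount * 0.01 + rng.normal(0, 0.2, 1000),  # fee
    rng.normal(12.0, 3.0, 1000),               # hour-of-day
])

# A PCA "autoencoder": project to 2 components and reconstruct.
# (A VAE plays this role in the text; PCA is a linear stand-in.)
pca = PCA(n_components=2).fit(normal)

def recon_error(x):
    # High error means the transaction does not fit the learned structure.
    return np.linalg.norm(x - pca.inverse_transform(pca.transform(x)), axis=1)

threshold = np.quantile(recon_error(normal), 0.99)

# A transaction that breaks the learned amount-to-fee relationship.
suspicious = np.array([[200.0, 50.0, 12.0]])  # fee wildly off
print(recon_error(suspicious) > threshold)
```

A VAE replaces the linear projection with a learned nonlinear latent space, which is what lets it capture the subtler relationships described above.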

This isn’t just an incremental improvement—it’s a leap forward. Generative AI’s ability to capture the intricate relationships within transaction data means it can detect anomalies with greater accuracy, reducing false positives and enhancing the overall effectiveness of fraud detection.

Explainability and Automated STR Reporting in Local Languages

One of the most pressing issues in machine learning (ML)-based systems is their often opaque decision-making process. For compliance officers and regulators tasked with understanding why a certain transaction was flagged, this lack of transparency can be a significant hurdle. Enter explainability techniques like LIME and SHAP. These tools are designed to peel back the layers of complex generative AI models, offering insights into how and why specific decisions were made. It’s like shining a light into the black box, providing much-needed clarity in a landscape where every decision could have significant implications.
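The intuition behind these techniques can be sketched with a crude "mask one feature, watch the score move" attribution on a toy classifier. Real deployments would use the SHAP or LIME libraries; everything below (features, model, data) is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Toy training data: feature 0 (amount z-score) drives the "suspicious" label.
X = rng.normal(size=(400, 3))
y = (X[:, 0] > 1.0).astype(int)
clf = LogisticRegression().fit(X, y)

# Per-feature attribution for one flagged transaction: how much does the
# model's score drop when a feature is replaced by its background mean?
flagged = np.array([[2.5, 0.1, -0.3]])
base_score = clf.predict_proba(flagged)[0, 1]
background = X.mean(axis=0)

attributions = []
for j in range(X.shape[1]):
    masked = flagged.copy()
    masked[0, j] = background[j]
    attributions.append(base_score - clf.predict_proba(masked)[0, 1])

print(np.argmax(attributions))  # feature 0 should dominate the explanation
```

SHAP generalises this idea with game-theoretic weighting over all feature subsets, which is what makes its attributions consistent enough to show a regulator.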

But explainability is only one piece of the puzzle. Compliance is a global game, played on a field marked by varied and often stringent regulatory requirements. This is where generative AI’s natural language processing (NLP) capabilities come into play, revolutionizing how suspicious transaction reports (STRs) are generated and communicated. Imagine a system that can not only identify suspicious activities but also automatically draft detailed, accurate STRs in multiple languages, tailored to the specific regulatory nuances of each jurisdiction.

This is more than just a time-saver; it’s a transformative tool that ensures compliance officers can operate seamlessly across borders. By automating the generation of STRs in local languages, AI not only speeds up the process but also reduces the risk of miscommunication or regulatory missteps. It’s about making compliance more accessible and more effective, no matter where you are in the world.
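The data flow of such a reporting pipeline can be sketched with a plain string template standing in for the LLM; the alert fields and wording below are hypothetical, and in practice the narrative (and its translation into the local language) would be generated by the model:

```python
# Hypothetical sketch: drafting an STR narrative from structured alert data.
# An LLM would generate and localise the text; a template stands in here so
# the end-to-end data flow is visible.
STR_TEMPLATE = (
    "Suspicious Transaction Report (draft)\n"
    "Customer {customer_id} transferred {amount} {currency} to {beneficiary} "
    "on {date}. The transfer deviates from the customer's baseline: {reason}."
)

alert = {
    "customer_id": "C-1043",
    "amount": "50,000",
    "currency": "NZD",
    "beneficiary": "an unlicensed investment firm",
    "date": "2025-06-12",
    "reason": "amount roughly 40x the 90-day average",
}

draft = STR_TEMPLATE.format(**alert)
print(draft)
```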


Upcoming Challenges

While the potential of generative AI is undeniably transformative, it’s not without its hurdles. From technical intricacies to regulatory constraints, there are several challenges that must be navigated to fully harness this technology in the fight against financial crime.

LLMs and Long Text Processing

One of the key challenges is ensuring that large language models (LLMs) go beyond simple tasks like summarization to demonstrate true analytical intelligence. The introduction of long-context models such as Gemini 1.5 is a step forward, bringing enhanced capabilities for processing long texts. Yet the question remains: can these models truly grasp the complexities of financial transactions and provide actionable insights? It’s not just about understanding more data; it’s about understanding it better.

Implementation Hurdles

    1. Data Quality and Preprocessing: Generative AI models are only as good as the data they’re trained on. Inconsistent or low-quality data can skew results, leading to false positives or overlooked threats. For financial institutions, ensuring clean, standardized, and comprehensive datasets is not just important—it’s imperative. This involves meticulous data preprocessing, including feature engineering, normalization, and handling missing values. Each step is crucial to preparing the data for training, ensuring that the models can perform at their best.
    2. Model Training and Scalability: Training large-scale models like LLMs and GANs is no small feat. The process is computationally intensive, requiring vast resources and advanced infrastructure. Scalability becomes a critical issue here. Strategies like distributed training and model parallelization, along with efficient hardware utilization, are needed to make these models not just a technological possibility but a practical tool for real-world AML/CFT systems.
    3. Evaluation Metrics and Interpretability: How do we measure success in generative AI for financial crime compliance? Traditional metrics like reconstruction error or sample quality don’t always capture the whole picture. In this context, evaluation criteria need to be more nuanced, combining these general metrics with domain-specific ones that reflect the unique demands of AML/CFT. But it’s not just about performance. The interpretability of these models is equally vital. Without clear, understandable outputs, building trust with regulators and compliance officers remains a significant challenge.
    4. Potential Limitations and Pitfalls: As powerful as generative AI can be, it’s not infallible. These models can inherit biases and inconsistencies from their training data, leading to unreliable or even harmful outputs. It’s a risk that cannot be ignored. Implementing robust techniques for bias detection and mitigation, alongside rigorous risk assessment and continuous monitoring, is essential to ensure that generative AI is used safely and responsibly in financial crime compliance.
Navigating these challenges is no small task, but it’s a necessary journey. To truly unlock the potential of generative AI in combating financial crime, we must address these obstacles head-on, with a clear strategy and a commitment to innovation.
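The preprocessing steps from point 1 can be sketched in a few lines; the toy transaction table below is invented, and a real pipeline would add feature engineering and categorical encoding on top:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Toy raw transactions with the usual problems: missing values, mixed scales.
raw = pd.DataFrame({
    "amount": [120.0, np.nan, 90000.0, 45.0],
    "hour": [14, 2, np.nan, 9],
})

# Handle missing values with median imputation, then standardize so no
# single feature dominates model training.
imputed = SimpleImputer(strategy="median").fit_transform(raw)
scaled = StandardScaler().fit_transform(imputed)

print(np.isnan(scaled).any())  # False: ready for model training
```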

Regulatory and Ethical Considerations

As we venture into the integration of generative AI in anti-money laundering (AML) and counter-financing of terrorism (CFT) systems, it’s not just the technological challenges that we need to be mindful of. The regulatory and ethical landscape presents its own set of complexities, demanding careful navigation and proactive engagement with stakeholders.

Regulatory Compliance

The deployment of generative AI in AML/CFT isn’t simply about adopting new technology—it’s about doing so within a framework that respects the rule of law. This means a close, ongoing dialogue with regulatory bodies to ensure that these advanced systems align with existing laws, guidelines, and best practices. Establishing clear standards for the development, validation, and governance of AI models is not just advisable; it’s essential. Without a robust regulatory framework, even the most sophisticated AI models could become liabilities rather than assets.

Ethical AI and Fairness

In the realm of financial crime compliance, the stakes are high. Decisions influenced by AI models can have significant impacts on individuals and businesses, which makes fairness and non-discrimination more than just ethical considerations—they are imperatives. Generative AI systems must be rigorously tested for biases and unintended consequences. This means implementing rigorous validation processes to ensure that these models uphold the principles of ethical AI and fairness, especially in high-stakes scenarios. We’re not just building technology; we’re building trust.

Privacy and Data Protection

With generative AI comes the promise of advanced capabilities like synthetic data generation and privacy-preserving analytics. But these innovations must be handled with care. Compliance with data protection regulations and the safeguarding of customer privacy rights should be at the forefront of any implementation strategy. Clear policies and robust safeguards are crucial to protect sensitive financial information, ensuring that the deployment of these models doesn’t inadvertently compromise the very data they are designed to protect.

Model Security and Robustness

Generative AI models, such as LLMs and GANs, bring immense power but also vulnerabilities. The risk of adversarial attacks or model extraction cannot be overlooked. To safeguard the integrity and confidentiality of these models, robust security measures need to be put in place. Techniques like differential privacy, watermarking, and the use of secure enclaves should be explored and implemented to protect these systems from malicious exploitation. It’s about creating not just intelligent models, but resilient ones.


Gen AI in Tookitaki FinCense

Tookitaki’s FinCense platform is pioneering the use of Generative AI to redefine financial crime compliance. We are actively collaborating with our clients through lighthouse projects to put the advanced Gen AI capabilities of FinCense to the test. Powered by a local LLM engine built on Llama models, FinCense introduces a suite of features designed to transform the compliance landscape.

One standout feature is the Smart Disposition Engine, which automates the handling of alerts with remarkable efficiency. By incorporating rules, policy checklists, and reporting in local languages, this engine streamlines the entire alert management process, cutting manual investigation time by an impressive 50-60%. It’s a game-changer for compliance teams, enabling them to focus on complex cases rather than getting bogged down in routine tasks.

Then there’s FinMate, an AI investigation copilot tailored to the unique needs of AML compliance professionals. Based on a local LLM model, FinMate serves as an intelligent assistant, offering real-time support during investigations. It doesn’t just provide information; it delivers actionable insights and suggestions that help compliance teams navigate through cases more swiftly and effectively.

Moreover, the platform’s Local Language Reporting feature enhances its usability across diverse regions. By supporting multiple local languages, FinCense ensures that compliance teams can manage alerts and generate reports seamlessly, regardless of their location. This localization capability is more than just a convenience—it’s a critical tool that enables teams to work more effectively within their regulatory environments.

With these cutting-edge features, Tookitaki’s FinCense platform is not just keeping up with the evolution of financial crime compliance—it’s leading the way, setting new standards for what’s possible with Generative AI in this critical field.

Final Thoughts

The future of financial crime compliance is set to be revolutionized by the advancements in AI and ML. Over the next few years, generative AI will likely become an integral part of our arsenal, pushing the boundaries of what’s possible in detecting and preventing illicit activities. Large Language Models (LLMs) like GPT-3 and its successors are not just promising—they are poised to transform the landscape. From automating the generation of Suspicious Activity Reports (SARs) to conducting in-depth risk assessments and offering real-time decision support to compliance analysts, these models are redefining what’s possible in the AML/CFT domain.

But LLMs are only part of the equation. Generative Adversarial Networks (GANs) are also emerging as a game-changer. Their ability to create synthetic, privacy-preserving datasets is a breakthrough for financial institutions struggling with limited access to real-world data. These synthetic datasets can be used to train and test machine learning models, making it easier to simulate and study complex financial crime scenarios without compromising sensitive information.

The real magic, however, lies in the convergence of LLMs and GANs. Imagine a system that can not only detect anomalies but also generate synthetic transaction narratives or provide explanations for suspicious activities. This combination could significantly enhance the interpretability and transparency of AML/CFT systems, making it easier for compliance teams to understand and act on the insights provided by these advanced models.

Embracing these technological advancements isn’t just an option—it’s a necessity. The challenge will be in implementing them responsibly, ensuring they are used to build a more secure and transparent financial ecosystem. This will require a collaborative effort between researchers, financial institutions, and regulatory bodies. Only by working together can we address the technical and ethical challenges that come with deploying generative AI, ensuring that these powerful tools are used to their full potential—responsibly and effectively.

The road ahead is filled with promise, but it’s also lined with challenges. By navigating this path with care and foresight, we can leverage generative AI to not only stay ahead of financial criminals but to create a future where the financial system is safer and more resilient than ever before.



Our Thought Leadership Guides

Blogs
15 Sep 2025
6 min read

Fake Bonds, Real Losses: Unpacking the ANZ Premier Wealth Investment Scam

Introduction: A Promise Too Good to Be True

An email lands in an inbox. The sender looks familiar, the branding is flawless, and the offer seems almost irresistible: exclusive Kiwi bonds through ANZ Premier Wealth, safe and guaranteed at market-beating returns.

For many Australians and New Zealanders in June 2025, this was no hypothetical. The emails were real, the branding was convincing, and the investment opportunity appeared to come from one of the region’s most trusted banks.

But it was all a scam.

ANZ was forced to issue a public warning after fraudsters impersonated its Premier Wealth division, sending out fake offers for bond investments. Customers who wired money were not buying bonds — they were handing their savings directly to criminals.

This case is more than a cautionary tale. It represents a growing wave of investment scams across ASEAN and ANZ, where fraudsters weaponise trust, impersonate brands, and launder stolen funds with alarming speed.


The Anatomy of the Scam

According to ANZ’s official notice, fraudsters:

  • Impersonated ANZ Premier Wealth staff. Scam emails carried forged ANZ branding, professional signatures, and contact details that closely mirrored legitimate channels.
  • Promoted fake bonds. Victims were promised access to Kiwi and corporate bonds, products usually seen as safe, government-linked investments.
  • Offered exclusivity. Positioning the deal as a Premier Wealth opportunity added credibility, making the offer seem both exclusive and limited.
  • Spoofed domains. Emails originated from look-alike addresses, making it difficult for the average customer to distinguish real from fake.

The scam’s elegance lay in its simplicity. There was no need for fake apps, complex phishing kits, or deepfakes. Just a trusted brand, professional language, and the lure of safety with superior returns.

Why Victims Fell for It: The Psychology at Play

Fraudsters know that logic bends under the weight of trust and urgency. This scam exploited four psychological levers:

  1. Brand Authority. ANZ is a household name. If “ANZ” says a bond is safe, who questions it?
  2. Exclusivity. By labelling it a Premier Wealth offer, the scam hinted at privileged access — only for the chosen few.
  3. Fear of Missing Out. “Limited time only” messaging pressured quick action. The less time victims had to think, the less likely they were to spot inconsistencies.
  4. Professional Presentation. Logos, formatting, even fake signatures gave the appearance of authenticity, reducing natural scepticism.

The result: even financially literate individuals were vulnerable.


The Laundering Playbook Behind the Scam

Once funds left victims’ accounts, the fraud didn’t end — it evolved into laundering. While details of this specific case remain under investigation, patterns from similar scams offer a likely playbook:

  1. Placement. Victims wired money into accounts controlled by money mules, often locals recruited under false pretences.
  2. Layering. Funds were split and moved quickly:
    • From mule accounts into shell companies posing as “investment firms.”
    • Through remittance channels across ASEAN.
    • Into cryptocurrency exchanges to break traceability.
  3. Integration. Once disguised, the money resurfaced as seemingly legitimate — in real estate, vehicles, or layered back into financial markets.

This lifecycle illustrates why investment scams are not just consumer fraud. They are also money laundering pipelines that demand the attention of compliance teams and regulators.

A Regional Epidemic

The ANZ Premier Wealth scam is part of a broader pattern sweeping ASEAN and ANZ:

  • New Zealand: The Financial Markets Authority recently warned of deepfake investment schemes featuring fake political endorsements. Victims were shown fabricated “news” videos before being directed to fraudulent platforms.
  • Australia: In Western Australia alone, more than A$10 million was lost in 2025 to celebrity-endorsement scams, many using doctored images and fabricated interviews.
  • Philippines and Cambodia: Scam centres linked to investment fraud continue to proliferate, with US sanctions targeting companies enabling their operations.

These cases underscore a single truth: investment scams are industrialising. They no longer rely on lone actors but on networks, infrastructure, and sophisticated social engineering.

Red Flags for Banks and E-Money Issuers

Financial institutions sit at the intersection of prevention. To stay ahead, they must look for red flags across transactions, customer behaviour, and KYC/CDD profiles.

1. Transaction-Level Indicators

  • Transfers to new beneficiaries described as “bond” or “investment” payments.
  • Repeated mid-value international transfers inconsistent with customer history.
  • Rapid pass-through of funds through personal or SME accounts.
  • Small initial transfers followed by large lump sums after “trust” is established.

2. KYC/CDD Risk Indicators

  • Beneficiary companies lacking investment licenses or regulator registrations.
  • Accounts controlled by individuals with no financial background receiving large investment-related flows.
  • Overlapping ownership across multiple “investment firms” with similar addresses or directors.

3. Customer Behaviour Red Flags

  • Elderly or affluent customers suddenly wiring large sums under urgency.
  • Customers unable to clearly explain the investment’s mechanics.
  • Reports of unsolicited investment opportunities delivered via email or social media.

Together, these signals create the scenarios compliance teams must be trained to detect.
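A couple of these red flags can be expressed directly as monitoring rules. The sketch below is illustrative only: the thresholds, field names, and `Txn` record are invented, and real systems combine many more signals with behavioural baselines and scenario libraries:

```python
from dataclasses import dataclass

@dataclass
class Txn:
    beneficiary_is_new: bool
    memo: str
    amount: float
    customer_avg_amount: float

def red_flags(t: Txn) -> list[str]:
    """Return the illustrative red flags triggered by a transaction."""
    flags = []
    # New beneficiary described as a "bond" or "investment" payment.
    if t.beneficiary_is_new and any(
        k in t.memo.lower() for k in ("bond", "investment")
    ):
        flags.append("new beneficiary with investment-style memo")
    # Lump sum far above the customer's historical baseline.
    if t.amount > 10 * t.customer_avg_amount:
        flags.append("amount far above customer baseline")
    return flags

print(red_flags(Txn(True, "Kiwi bond purchase", 50_000.0, 1_200.0)))
```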

Regulatory and Industry Response

ANZ’s quick warning reflects growing industry awareness, but the response must be collective.

  • ASIC and FMA: Both regulators maintain registers of licensed investments and regularly issue alerts. They stress that legitimate offers will always appear on official websites.
  • Global Coordination: Investment scams often cross borders. Victims in Australia and New Zealand may be wiring money to accounts in Southeast Asia. This makes regulatory cooperation across ASEAN and ANZ critical.
  • Consumer Education: Banks and regulators are doubling down on campaigns warning customers that if an investment looks too good to be true, it usually is.

Still, fraudsters adapt faster than awareness campaigns. Which is why technology-driven detection is essential.

How Tookitaki Strengthens Defences

Tookitaki’s solutions are designed for exactly these challenges — scams that evolve, spread, and cross borders.

1. AFC Ecosystem: Shared Intelligence

The AFC Ecosystem aggregates scenarios from global compliance experts, including typologies for investment scams, impersonation fraud, and mule networks. By sharing knowledge, institutions in Australia and New Zealand can learn from cases in the Philippines, Singapore, or beyond.

2. FinCense: Scenario-Driven Monitoring

FinCense transforms these scenarios into live detection. It can flag:

  • Victim-to-mule account flows tied to investment scams.
  • Patterns of layering through multiple personal accounts.
  • Transactions inconsistent with KYC profiles, such as pensioners wiring large “bond” payments.

3. AI Agents: Faster Investigations

Smart Disposition reduces noise by auto-summarising alerts, while FinMate acts as an AI copilot to link entities and uncover hidden relationships. Together, they help compliance teams act before scam proceeds vanish offshore.

4. The Trust Layer

Ultimately, Tookitaki provides the trust layer between institutions, customers, and regulators. By embedding collective intelligence into detection, banks and EMIs not only comply with AML rules but actively safeguard their reputations and customer trust.

Conclusion: Protecting Trust in the Age of Impersonation

The ANZ Premier Wealth impersonation scam shows that in today’s landscape, trust itself is under attack. Fraudsters no longer just exploit technical loopholes; they weaponise the credibility of established institutions to lure victims.

For banks and fintechs, this means vigilance cannot stop at transaction monitoring. It must extend to understanding scenarios, recognising behavioural red flags, and preparing for scams that look indistinguishable from legitimate offers.

For regulators, the challenge is to build stronger cross-border cooperation and accelerate detection frameworks that can keep pace with the industrialisation of fraud.

And for technology providers like Tookitaki, the mission is clear: to stay ahead of deception with intelligence that learns, adapts, and scales.

Because fake bonds may look convincing, but with the right defences, the real losses they cause can be prevented.

Blogs
12 Sep 2025
6 min read

Flooded with Fraud: Unmasking the Money Trails in Philippine Infrastructure Projects

The Philippines has always lived with the threat of floods. Each typhoon season brings destruction, and the government has poured billions into flood control projects meant to shield vulnerable communities. But while citizens braced for rising waters, another kind of flood was quietly at work: a flood of fraud.

Investigations now reveal that massive chunks of the flood control budget never translated into levees, drainage systems, or protection for communities. Instead, they flowed into the hands of a handful of contractors, politicians, and middlemen.

Since 2012, just 15 contractors cornered nearly ₱100 billion in projects, roughly 20 percent of the total budget. Many projects were “ghosts,” existing only on paper. Meanwhile, luxury cars filled garages, mansions rose in gated villages, and political war chests swelled ahead of elections.

This is not simply corruption. It is a textbook case of money laundering, with ghost projects and inflated contracts acting as conduits for illicit enrichment. For banks, fintechs, and regulators, it is a flashing red signal that the financial system remains a key artery for laundering public funds.

The Anatomy of the Scandal

The Department of Public Works and Highways (DPWH) is tasked with executing infrastructure that keeps cities safe from rising waters. Yet over the past decade, its flood control program has morphed into a honey pot for collusion and fraud.

  • Ghost projects: Entire budgets released for dams, dikes, and drainage systems that were never completed or never built at all.
  • Overpriced contracts: Inflated project costs created buffers for skimming and fund diversion.
  • Kickbacks for campaigns: Portions of project budgets allegedly redirected to finance electoral campaigns, locking in loyalty between politicians and contractors.
  • Cartel behaviour: Fifteen contractors cornering nearly a fifth of the flood control budget, year after year, with suspiciously repeat awards.
  • Lavish lifestyles: Contractors flaunting their wealth through luxury cars, sprawling mansions, and overseas spending.

The human cost is chilling. While typhoon-prone communities remain flooded each year, taxpayer money meant for their protection bankrolls supercars instead of sandbags.


The Laundering Playbook Behind Ghost Projects

This scandal mirrors the familiar placement-layering-integration framework of money laundering, but applied to public funds.

  1. Placement: Ghost Projects as Entry Points
    Funds are injected into the system under the guise of legitimate project disbursements. With government contracts as a cover, illicit enrichment begins with official-looking payments.
  2. Layering: Overpricing, Subcontracting, and Round-Tripping
    Excess funds are disguised through inflated invoices, subcontractor arrangements, and consultancy contracts. Round-tripping, where money cycles through multiple accounts before returning to the same network, further conceals the origin.
  3. Integration: From Sandbags to Supercars
    Once disguised, the funds re-emerge in legitimate markets such as luxury cars, prime real estate, overseas tuition, or campaign expenses. At this stage, dirty money is fully cleaned and woven into political and economic life.

Globally, procurement-related laundering has been flagged repeatedly by the Financial Action Task Force (FATF). In fact, FATF’s 2023 mutual evaluation warned that the Philippines faces serious challenges in addressing public sector corruption risks. The flood control scandal is not just a local embarrassment; it risks pulling the country deeper into scrutiny by international watchdogs.

What Banks Must Watch

Banks sit at the centre of these laundering flows. Every contractor, subcontractor, or political beneficiary needs accounts to receive, move, and disguise illicit funds. This makes banks the first line of defence, and often the last checkpoint before illicit proceeds are fully integrated.

Transaction-Level Red Flags

  • Large and repeated deposits from government agencies into the same small group of contractors.
  • Transfers to shell subcontractors or consultancy firms with little to no delivery capacity.
  • Sudden spikes in cash withdrawals after receiving government disbursements.
  • Circular transactions between contractors and related parties, indicating round-tripping.
  • Luxury purchases such as cars, property, and overseas spending directly following government project inflows.
  • Campaign-linked transfers, with bursts of outgoing payments to political accounts during election seasons.
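The first flag on that list, disbursements concentrated in a small contractor group, lends itself to a simple screening rule. The sketch below is a hypothetical illustration; the 15% share threshold and field names are assumptions, not regulatory values:

```python
# A hypothetical concentration check: flag contractors receiving an
# outsized share of total government disbursements. The 15% threshold
# and contractor names are illustrative assumptions.
from collections import Counter

def concentration_alerts(disbursements, share_threshold=0.15):
    """Return contractors whose share of total disbursements crosses the threshold."""
    totals = Counter()
    for d in disbursements:
        totals[d["contractor"]] += d["amount"]
    grand_total = sum(totals.values())
    return {
        contractor: amount / grand_total
        for contractor, amount in totals.items()
        if amount / grand_total >= share_threshold
    }

disbursements = [
    {"contractor": "alpha_builders", "amount": 90_000_000},
    {"contractor": "alpha_builders", "amount": 85_000_000},
    {"contractor": "beta_construction", "amount": 20_000_000},
    {"contractor": "gamma_infra", "amount": 5_000_000},
]
print(concentration_alerts(disbursements))
# → {'alpha_builders': 0.875}
```

A rule like this is deliberately coarse; its job is to prioritise accounts for human review, not to prove collusion on its own.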

KYC/CDD Red Flags

  • Contractors with weak financial standing but billion-peso contracts.
  • Hidden ownership ties to politically exposed persons (PEPs).
  • Corporate overlap among multiple contractors, suggesting collusion.
  • Lack of verifiable track records in infrastructure delivery, yet repeated contract awards.

Cross-Border Concerns

Funds may also be siphoned abroad. Banks must scrutinise:

  • Remittances to offshore accounts labelled as “consultancy” or “procurement.”
  • Purchases of high-value overseas assets.
  • Trade-based laundering through manipulated import or export invoices for construction materials.

Banks must not only flag individual transactions but also connect the narrative across accounts, owners, and transaction patterns.

What BSP-Licensed E-Money Issuers Must Watch

The scandal also casts a spotlight on fintech players. BSP-licensed e-money issuers (EMIs) are increasingly part of laundering networks, especially when illicit funds need to be fragmented, hidden, or redirected.

Key risks include:

  • Wallet misuse for political finance, with illicit funds loaded into multiple wallets to bankroll campaigns.
  • Structuring, where large government disbursements are broken into smaller transfers to dodge reporting thresholds.
  • Proxy accounts, with employees or relatives of contractors opening multiple wallets to spread funds.
  • Layering via wallets, with e-money balances converted into bank transfers, prepaid cards, or even crypto exchanges.
  • Unusual bursts of wallet activity around elections or after government fund releases.

For EMIs, the challenge is to monitor not just high-value transactions but also suspicious transaction clusters, where multiple accounts show parallel spikes or transfers that defy normal spending behaviour.
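The structuring risk described above can be sketched as a windowed sum: flag senders whose sub-threshold transfers add up to a reportable amount within a short period. The PHP 500,000 threshold, one-day window, and wallet names below are illustrative assumptions, not BSP rules:

```python
# A minimal structuring-detection sketch: flag senders who split what
# would be a reportable amount into several sub-threshold transfers
# within a short window. Threshold, window, and wallet names are
# illustrative assumptions, not BSP rules.
from collections import defaultdict
from datetime import datetime, timedelta

REPORT_THRESHOLD = 500_000
WINDOW = timedelta(days=1)

def structuring_alerts(transfers):
    """transfers: (sender, timestamp, amount) tuples, possibly unsorted."""
    by_sender = defaultdict(list)
    for sender, ts, amount in transfers:
        if amount < REPORT_THRESHOLD:  # only sub-threshold transfers matter
            by_sender[sender].append((ts, amount))
    alerts = set()
    for sender, events in by_sender.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            # Sum every transfer inside the window opening at this event.
            window_sum = sum(a for t, a in events[i:] if t - start <= WINDOW)
            if window_sum >= REPORT_THRESHOLD:
                alerts.add(sender)
                break
    return alerts

ts = datetime(2025, 9, 1, 9, 0)
transfers = [
    ("wallet_x", ts, 200_000),
    ("wallet_x", ts + timedelta(hours=2), 180_000),
    ("wallet_x", ts + timedelta(hours=5), 150_000),  # 530k in five hours
    ("wallet_y", ts, 100_000),
]
print(structuring_alerts(transfers))
# → {'wallet_x'}
```

Extending this from single senders to clusters of related wallets is exactly the harder problem the paragraph above describes.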

How Tookitaki Strengthens Defences

Schemes like ghost projects thrive because they exploit systemic blind spots. Static rules cannot keep pace with evolving laundering tactics. This is where Tookitaki brings a sharper edge.

AFC Ecosystem: Collective Intelligence

With over 1,500 expert-contributed typologies, the AFC Ecosystem already covers procurement fraud, campaign finance laundering, and luxury asset misuse. These scenarios can be directly applied by Philippine institutions to detect anomalies tied to public fund diversion.

FinCense: Adaptive Detection

FinCense translates these scenarios into live detection rules. It can flag government-to-contractor payments followed by unusual subcontractor layering or sudden spikes in high-value asset spending. Its federated learning model ensures that detection improves continuously across the network.

AI Agents: Cutting Investigation Time

Smart Disposition reduces false positives with automated, contextual alert summaries, while FinMate acts as an AI copilot for investigators. Together, they help compliance teams trace suspicious flows faster, from government disbursements to the eventual luxury car purchase.

The Trust Layer for BSP Institutions

By embedding collective intelligence into everyday monitoring, Tookitaki becomes the trust layer between financial institutions and regulators. This helps BSP and the Anti-Money Laundering Council (AMLC) strengthen national defences against procurement-linked laundering.


Conclusion: Beyond the Scandal

The flood control scandal is more than an exposé of wasted budgets. It is a stark reminder that public money, once stolen, does not vanish into thin air. It flows through the financial system, often right under the noses of compliance teams.

The typologies on display—ghost projects, contractor cartels, political kickbacks, and luxury laundering—are not unique to the Philippines. They are part of a global playbook of corruption-driven laundering. But in a country already under FATF scrutiny, the stakes are even higher.

For banks and EMIs, the call to action is urgent: strengthen detection, move beyond static rules, and collaborate across institutions. For regulators, it means demanding transparency, closing loopholes, and leveraging technology that learns and adapts in real time.

At Tookitaki, our role is to ensure institutions are not just reacting after scandals break but detecting patterns before they escalate. By unmasking money trails, enabling collaborative intelligence, and embedding AI-driven defences, we can prevent the next flood of fraud from drowning public trust.

Floods may be natural, but fraud floods are man-made. And unlike typhoons, this one is preventable.

Flooded with Fraud: Unmasking the Money Trails in Philippine Infrastructure Projects
Blogs
03 Sep 2025
7 min read

How Initiatives Like AI Verify Make AI-Governance & Validation Protocols Integral to AI Deployment Strategy

Introduction: Why Governance-First AI is Rewriting the Financial Crime Playbook

This article is the second instalment in our series, Governance-First AI Strategy: The Future of Financial Crime Detection. The series examines how financial institutions can move beyond box-ticking compliance and embrace AI systems that are transparent, trustworthy, and genuinely effective against crime.

If you missed Part 1 — The AI Governance Crisis: How Compliance-First Thinking Undermines Both Innovation and Compliance — we recommend it as a pre-read. There, we explored how today’s compliance-heavy frameworks have created a paradox: soaring costs, mounting false positives, and declining effectiveness in tackling sophisticated financial crime.

In this second part, we shift from diagnosing the crisis to highlighting solutions. We look at how governance-first AI is being operationalised through initiatives like Singapore’s AI Verify program, which is setting global benchmarks for validation, accountability, and continuous trust in financial crime detection.

The Governance Gap: Moving Beyond Checkbox Compliance

Traditionally, many financial institutions have seen governance as a final-layer exercise: a set of boxes to tick just before launching a new AML system or onboarding a new AI solution. But today’s complex, AI-driven systems have outpaced this outdated approach. Here’s why this gap is so dangerous:

The Risks of Outdated Governance

  • Operational Failure: Financial institutions are reporting false positive alert rates reaching 90% or higher. Analysts spend valuable time on non-issues, while genuine risks can slip through unseen, creating an operational black hole.
  • Regulatory Exposure: Regulators are increasingly sceptical of black-box AI systems that cannot be explained or audited. This raises the risk of costly penalties, strict remediation orders, and reputational damage.
  • Stalled Innovation: The fear of non-compliance can make organisations hesitant to adopt even the most promising AI innovations, worried they will face issues during audits.

Towards Living Governance

True governance means embedding transparency, validation, and accountability across the entire AI lifecycle. This is not a static report, but a dynamic, ongoing protocol that evolves as threats and opportunities do.


AI Verify: Singapore’s Blueprint for Independent AI Validation

Enter AI Verify: Singapore’s response to the governance challenge, and a model now being emulated worldwide. Developed by the IMDA and AI Verify Foundation, this pioneering program aims to transform governance and validation from afterthoughts into core design principles for any AI system, especially those managing financial crime risk.

Key Features of AI Verify

  • Rigorous, Scenario-Based Testing: Every AI model is evaluated against 400+ real-world financial crime detection scenarios, ensuring that outputs perform accurately across the range of complexities institutions actually face.
  • Multi-language and Cross-Border Application: With testing in both English and Mandarin, AI Verify anticipates the needs of global financial institutions with diverse customer bases and regulatory environments.
  • Zero Tolerance for Hallucinations: The program enforces strict protocols to ensure every AI-generated output is grounded in verifiable, auditable facts. This sharply reduces the risk of hallucinations, a key regulatory concern.
  • Continuous Compliance Assurance: Validation is not a single event. Ongoing monitoring, reporting, and built-in alerts ensure the AI adapts to new criminal typologies and evolving regulatory expectations.
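The first of those features, scenario-based testing, can be illustrated with a toy validation harness: replay a labelled scenario suite through a detection model and fail the run if accuracy falls below an agreed floor. The scenarios, the stand-in model, and the 95% floor are illustrative assumptions; AI Verify's actual test protocol is far more extensive:

```python
# A hedged sketch of scenario-based validation: replay labelled scenarios
# through a detection model and fail the run below an accuracy floor.
# The toy model, scenarios, and 95% floor are illustrative assumptions.

def validate(model, scenarios, floor=0.95):
    """scenarios: list of (features, expected_label). Returns (passed, accuracy)."""
    correct = sum(1 for features, expected in scenarios if model(features) == expected)
    accuracy = correct / len(scenarios)
    return accuracy >= floor, accuracy

# Toy stand-in model: flag any transaction over 1,000,000.
toy_model = lambda tx: tx["amount"] > 1_000_000
scenarios = [
    ({"amount": 2_000_000}, True),
    ({"amount": 50_000}, False),
    ({"amount": 5_000_000}, True),
    ({"amount": 10_000}, False),
]
passed, accuracy = validate(toy_model, scenarios)
print(passed, accuracy)
# → True 1.0
```

The point of the sketch is the discipline, not the arithmetic: validation becomes a repeatable, auditable gate rather than a one-off sign-off.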

Validation in Action: The Tookitaki Case Study

Tookitaki became the first RegTech company to achieve independent validation under Singapore’s AI Verify program, setting a new industry benchmark for governance-first AI solutions.

  • Accuracy Across Complexity: Our AI systems were validated against an extensive suite of real-world AML scenarios, consistently delivering precise, actionable outcomes in both English and Mandarin.
  • No Hallucinations: With guardrails in place, every AI-generated narrative was rigorously checked for factual soundness and traceability. Investigators and regulators were able to audit the reasoning behind each alert, turning AI from a “black box” into a transparent partner.
  • Compliance, Built-In: Stringent regulatory, privacy, and security requirements were checked throughout the process, ensuring our systems could not only pass today’s audits but also stay ahead of tomorrow’s standards.
  • Strategic Trust: As recognised by media coverage in The Straits Times, Tookitaki’s independent validation became a source of trust for clients, regulators, and business partners, transforming governance into a strategic advantage.

Continuous Validation: Governance as Daily Operational Advantage

What sets AI Verify, and governance-first models more broadly, apart is the principle of continuous validation:

  • Pre-deployment: Before launch, every model is stress-tested for robustness, fairness, and regulatory fit in a controlled, simulated real-world setting.
  • Post-deployment: Continuous monitoring ensures that as new fraud threats and compliance rules arise, the AI adapts immediately, preventing operational surprises and keeping regulator confidence high.

This approach lets financial institutions move from a reactive, firefighting mentality to a proactive, resilient operating style.
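Post-deployment monitoring of the kind described above often reduces to drift checks: compare a live metric against its validated baseline and alert on significant deviation. The sketch below uses false-positive rate; the 20% tolerance is an illustrative assumption, not an AI Verify requirement:

```python
# A minimal drift-monitoring sketch: compare the live false-positive rate
# of a detection model against its validated baseline and alert on
# significant deviation. The 20% tolerance is an illustrative assumption.

def drift_alert(baseline_fp_rate, live_fp_rate, tolerance=0.20):
    """Return True when the live rate deviates from baseline by more than tolerance."""
    if baseline_fp_rate == 0:
        return live_fp_rate > 0
    return abs(live_fp_rate - baseline_fp_rate) / baseline_fp_rate > tolerance

print(drift_alert(0.10, 0.11))  # within tolerance
print(drift_alert(0.10, 0.18))  # 80% relative increase, raise an alert
```

In practice the same pattern applies to alert volumes, model scores, and typology coverage, each with its own baseline and tolerance.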

The Strategic Payoff: Governance as a Differentiator

What is the true value of independent, embedded validation?

  • Faster, Safer Innovation: Launches of new AI models become quicker and less risky, since validation is built in, not tacked on at the end.
  • Operational Efficiency: With fewer false positives and more explainable decisions, investigative teams can focus energy where it matters most: rooting out real financial crime.
  • Market Leadership: Governance-first adopters signal to clients, partners, and regulators that they take trust, transparency, and responsibility seriously, building long-term advantages in reputation and readiness.

Conclusion: Tomorrow’s AI, Built on Governance

As we highlighted in Part 1, compliance-first frameworks have proven costly and ineffective, leaving financial institutions trapped in a cycle of escalating spend and diminishing returns. AI Verify demonstrates what a governance-first approach looks like in practice: validation, accountability, and transparency built directly into the design of AI systems.

For Tookitaki, achieving independent validation under AI Verify was not simply a compliance milestone. It was evidence that governance-first AI can deliver measurable trust, precision, and operational advantage. By embedding continuous validation, institutions can move from reactive firefighting to proactive resilience, strengthening both regulatory confidence and market reputation.

Key Takeaways from Part 2:

  1. Governance-first AI shifts the conversation from “being compliant” to “being trustworthy by design.”
  2. Continuous validation ensures models evolve with emerging financial crime typologies and regulatory expectations.
  3. Independent validation transforms governance from a cost centre into a strategic differentiator.

What’s Next in the Series

In Part 3 of our series, Governance-First AI Strategy: The Future of Financial Crime Detection, we will explore one of the most pressing risks in deploying AI for compliance: AI hallucinations. When models generate misleading or fabricated outputs, trust breaks down, both with regulators and within institutions.

We will examine why hallucinations are such a critical challenge in financial crime detection and how governance-first safeguards, including Tookitaki’s own controls, are designed to eliminate these risks and make every AI-driven decision auditable, transparent, and actionable.

Stay tuned.
