Blog

The Transformative Role of Generative AI in Financial Crime Compliance

Anup Gunjan
26 September 2024
10 min read

When we look at the financial crime landscape today, it’s clear that we’re on the brink of a significant evolution. The traditional methods of combating money laundering and fraud, which have relied heavily on rule-based systems and static models, are rapidly being eclipsed by the transformative potential of artificial intelligence (AI) and machine learning (ML). Over the last two decades, these technologies have fundamentally changed how we identify and respond to illicit activities. But as we look into the next few years, a new tech transformation is set to reshape the field: generative AI.

This isn't just another technological upgrade—it’s a paradigm shift. Generative AI is poised to redefine the rules of the game, offering unprecedented capabilities that go beyond the detection and prevention tools we’ve relied on so far. While ML has already improved our ability to spot suspicious patterns, generative AI promises to tackle more sophisticated threats, adapt faster to evolving tactics, and bring a new level of intelligence to financial crime compliance.

But with this promise comes a critical question: how exactly will generative AI, and large language models (LLMs) in particular, transform financial crime compliance? The answer lies not just in their advanced capabilities but in their potential to fundamentally alter the way we approach detection and prevention. As we prepare for this next wave of innovation, it’s essential to understand both the opportunities and the challenges that come with it.

Generative AI in Financial Crime Compliance

When it comes to leveraging LLMs in financial crime compliance, the possibilities are profound. Let’s break down some of the key areas where this technology can make a real impact:

  1. Data Generation and Augmentation: LLMs and other generative models can create synthetic data that closely mirrors real-world financial transactions. This isn’t just about filling in gaps; it’s about creating a rich, diverse dataset that can be used to train machine learning models more effectively. This is particularly valuable for fintech startups that may not have extensive historical data to draw from. With generative AI, they can test and deploy robust financial crime solutions while preserving the privacy of sensitive information. It’s like having a virtual data lab that’s always ready for experimentation (a simple illustration follows this list).
  2. Unsupervised Anomaly Detection: Traditional systems often struggle to catch the nuanced, sophisticated patterns of modern financial crime. Large language models, however, can learn the complex behaviours of legitimate transactions and use this understanding as a baseline. When a new transaction deviates from this learned norm, it raises a red flag. These models can detect subtle irregularities that traditional rule-based systems or simpler machine learning algorithms might overlook, providing a more refined, proactive defence against potential fraud or money laundering.
  3. Automating the Investigation Process: Compliance professionals know the grind of sifting through endless alerts and drafting investigation reports. Generative AI offers a smarter way forward. By automating the creation of summaries, reports, and investigation notes, it frees up valuable time for compliance teams to focus on what really matters: strategic decision-making and complex case analysis. This isn’t just about making things faster—it’s about enabling a deeper, more insightful investigative process.
  4. Scenario Simulation and Risk Assessment: Generative AI can simulate countless financial transaction scenarios, assessing their risk levels based on historical data and regulatory requirements. This capability allows financial institutions to anticipate and prepare for a wide range of potential threats. It’s not just about reacting to crime; it’s about being ready for what comes next, armed with the insights needed to stay one step ahead.
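To make the first of these areas concrete, here is a minimal sketch of synthetic transaction generation. It is illustrative only: it assumes a small table of engineered numeric features and uses a Gaussian mixture model from scikit-learn as a lightweight stand-in for the far richer generative models discussed in this post, and the feature choices are hypothetical.

```python
# Illustrative only: sampling synthetic transactions from a simple generative model.
# A production system would use a purpose-built generator (e.g. a VAE or GAN);
# the GaussianMixture here is a lightweight stand-in and the features are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Toy "real" transactions: amount, hour of day, days since the last transaction.
real = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=2000),   # amount
    rng.integers(0, 24, size=2000),                  # hour of day
    rng.exponential(scale=3.0, size=2000),           # days since last transaction
])

# Fit a generative model to the real data and sample synthetic records from it.
generator = GaussianMixture(n_components=5, random_state=0).fit(real)
synthetic, _ = generator.sample(5000)

print(synthetic[:3])  # synthetic rows that mimic the statistics of the real data
```

The synthetic rows can then be pooled with, or substituted for, scarce real data when training and testing downstream detection models.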

To truly appreciate the transformative power of generative AI, we need to take a closer look at two critical areas: anomaly detection and explainability. These are the foundations upon which the future of financial crime compliance will be built.

Anomaly Detection

One of the perennial challenges in fraud detection is the reliance on labelled data, where traditional machine learning models need clear examples of both legitimate and fraudulent transactions to learn from. This can be a significant bottleneck. After all, obtaining such labelled data—especially for emerging or sophisticated fraud schemes—is not only time-consuming but also often incomplete. This is where generative AI steps in, offering a fresh perspective with its capability for unsupervised anomaly detection, bypassing the need for labelled datasets.

To understand how this works, let’s break it down.

Traditional Unsupervised ML Approach

Typically, financial institutions using unsupervised machine learning might deploy clustering algorithms like k-means. Here’s how it works: transactions are grouped into clusters based on various features—transaction amount, time of day, location, and so on. Anomalies are then identified as transactions that don’t fit neatly into any of these clusters or exhibit characteristics that deviate significantly from the norm.
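As a rough illustration of this traditional approach, the sketch below clusters transactions with k-means and flags the points farthest from their nearest centroid. The features are random stand-ins and the 1% threshold is an arbitrary choice for the example, not a recommended setting.

```python
# Illustrative k-means anomaly flagging: transactions far from every cluster
# centre are treated as potential anomalies. Features and threshold are arbitrary.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))                # stand-in engineered features

X_scaled = StandardScaler().fit_transform(X)  # k-means is distance-based, so scale first
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_scaled)

# Distance from each transaction to its assigned (nearest) centroid.
distances = np.linalg.norm(
    X_scaled - kmeans.cluster_centers_[kmeans.labels_], axis=1
)

threshold = np.percentile(distances, 99)      # flag the most distant 1%
flagged = np.where(distances > threshold)[0]
print(f"{len(flagged)} transactions flagged for review")
```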

While this method has its merits, it can struggle to keep up with the complexity of modern fraud patterns. What happens when the anomalies are subtle or when legitimate variations are mistakenly flagged? The result is a system that can’t always distinguish between a genuine threat and a benign fluctuation.

Generative AI Approach

Generative AI offers a more nuanced solution. Consider the use of a variational autoencoder (VAE). Instead of relying on predefined labels, a VAE learns the underlying distribution of normal transactions by reconstructing them during training. Think of it as the model teaching itself what “normal” looks like. As it learns, the VAE can even generate synthetic transactions that closely resemble real ones, effectively creating a virtual landscape of typical behaviour.

Once trained, this model becomes a powerful tool for anomaly detection. Here’s how: every incoming transaction is reconstructed by the VAE and compared to its original version. Transactions that deviate significantly, exhibiting high reconstruction errors, are flagged as potential anomalies. It’s like having a highly sensitive radar that picks up on the slightest deviations from the expected course. Moreover, by generating synthetic transactions and comparing them to real ones, the model can spot discrepancies that might otherwise go unnoticed.
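For readers who want to see the shape of such a model, here is a compact VAE-style anomaly scorer sketched in PyTorch. It is deliberately simplified: the layer sizes, latent dimension, and single optimisation step are illustrative, and a real deployment would need proper feature engineering, full training loops, threshold calibration, and validation.

```python
# Simplified VAE sketch for transaction anomaly scoring (PyTorch).
# Layer sizes and the single training step below are illustrative only.
import torch
import torch.nn as nn

class TransactionVAE(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.fc_mu = nn.Linear(64, latent_dim)
        self.fc_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_features)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        return self.decoder(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction error plus KL divergence toward a standard normal prior.
    recon = nn.functional.mse_loss(x_hat, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

def anomaly_scores(model, x):
    # Trained on normal transactions only, a high reconstruction error suggests
    # a transaction deviates from the learned "normal" behaviour.
    model.eval()
    with torch.no_grad():
        x_hat, _, _ = model(x)
        return ((x - x_hat) ** 2).mean(dim=1)

# Minimal usage: one optimisation step on a random stand-in batch.
model = TransactionVAE(n_features=10)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 10)
x_hat, mu, logvar = model(x)
loss = vae_loss(x_hat, x, mu, logvar)
loss.backward()
optimiser.step()
print(anomaly_scores(model, x)[:5])  # higher scores indicate larger deviation
```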

This isn’t just an incremental improvement—it’s a leap forward. Generative AI’s ability to capture the intricate relationships within transaction data means it can detect anomalies with greater accuracy, reducing false positives and enhancing the overall effectiveness of fraud detection.

Explainability and Automated STR Reporting in Local Languages

One of the most pressing issues in ML-based systems is their often opaque decision-making process. For compliance officers and regulators tasked with understanding why a certain transaction was flagged, this lack of transparency can be a significant hurdle. Enter explainability techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). These tools are designed to peel back the layers of complex AI models, offering insights into how and why specific decisions were made. It’s like shining a light into the black box, providing much-needed clarity in a landscape where every decision could have significant implications.
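To give a flavour of what this looks like in practice, the snippet below uses the open-source SHAP library to attribute a tree-based model’s score for individual transactions back to its input features. The classifier, features, and labels are toy stand-ins created for the example rather than anything resembling a production detection model.

```python
# Illustrative only: explaining flagged transactions with SHAP.
# The classifier and the three "engineered features" are toy stand-ins.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # e.g. amount, hour, country risk
y = (X[:, 0] + 0.5 * X[:, 2] > 1.5).astype(int)    # synthetic "suspicious" label

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])         # explain the first five alerts

# Each row attributes the model's score to individual features, giving
# investigators a documented rationale for why a transaction was flagged.
print(shap_values)
```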

But explainability is only one piece of the puzzle. Compliance is a global game, played on a field marked by varied and often stringent regulatory requirements. This is where generative AI’s natural language processing (NLP) capabilities come into play, revolutionizing how suspicious transaction reports (STRs) are generated and communicated. Imagine a system that can not only identify suspicious activities but also automatically draft detailed, accurate STRs in multiple languages, tailored to the specific regulatory nuances of each jurisdiction.

This is more than just a time-saver; it’s a transformative tool that ensures compliance officers can operate seamlessly across borders. By automating the generation of STRs in local languages, AI not only speeds up the process but also reduces the risk of miscommunication or regulatory missteps. It’s about making compliance more accessible and more effective, no matter where you are in the world.
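One way this can look in practice is a prompt template that packages case facts and jurisdiction-specific instructions for an LLM to draft from. The sketch below only builds the prompt: `call_llm` is a hypothetical placeholder for whatever locally hosted model or vendor API an institution actually uses, and every field name and value is an invented example.

```python
# Illustrative STR-drafting prompt builder. call_llm is a hypothetical placeholder
# for an institution's own LLM endpoint; all field names and values are examples.
from textwrap import dedent

def build_str_prompt(case: dict, language: str, jurisdiction: str) -> str:
    return dedent(f"""
        You are drafting a suspicious transaction report for {jurisdiction}.
        Write the narrative in {language}, following local regulatory conventions.

        Case facts:
        - Customer: {case['customer_name']} ({case['customer_type']})
        - Flagged activity: {case['activity_summary']}
        - Amount and period: {case['amount']} over {case['period']}
        - Reason flagged: {case['detection_reason']}

        Produce a concise, factual narrative. Do not speculate beyond the facts.
    """).strip()

case = {
    "customer_name": "ACME Trading Ltd",
    "customer_type": "corporate",
    "activity_summary": "rapid pass-through transfers to newly opened accounts",
    "amount": "USD 480,000",
    "period": "11 days",
    "detection_reason": "high reconstruction error and peer-group deviation",
}

prompt = build_str_prompt(case, language="Bahasa Indonesia", jurisdiction="Indonesia")
# draft = call_llm(prompt)  # hypothetical: send to a locally hosted LLM for drafting
print(prompt)
```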


Upcoming Challenges

While the potential of generative AI is undeniably transformative, it’s not without its hurdles. From technical intricacies to regulatory constraints, there are several challenges that must be navigated to fully harness this technology in the fight against financial crime.

LLMs and Long Text Processing

One of the key challenges is ensuring that large language models (LLMs) go beyond simple tasks like summarization to demonstrate true analytical intelligence. The introduction of Gemini 1.5 is a step forward, bringing enhanced capabilities for processing long texts. Yet the question remains: can these models truly grasp the complexities of financial transactions and provide actionable insights? It’s not just about understanding more data; it’s about understanding it better.

Implementation Hurdles

    1. Data Quality and Preprocessing: Generative AI models are only as good as the data they’re trained on. Inconsistent or low-quality data can skew results, leading to false positives or overlooked threats. For financial institutions, ensuring clean, standardized, and comprehensive datasets is not just important—it’s imperative. This involves meticulous data preprocessing, including feature engineering, normalization, and handling missing values. Each step is crucial to preparing the data for training, ensuring that the models can perform at their best (a minimal preprocessing sketch follows this section).
    2. Model Training and Scalability: Training large-scale models like LLMs and generative adversarial networks (GANs) is no small feat. The process is computationally intensive, requiring vast resources and advanced infrastructure. Scalability becomes a critical issue here. Strategies like distributed training and model parallelization, along with efficient hardware utilization, are needed to make these models not just a technological possibility but a practical tool for real-world AML/CFT systems.
    3. Evaluation Metrics and Interpretability: How do we measure success in generative AI for financial crime compliance? Traditional metrics like reconstruction error or sample quality don’t always capture the whole picture. In this context, evaluation criteria need to be more nuanced, combining these general metrics with domain-specific ones that reflect the unique demands of AML/CFT. But it’s not just about performance. The interpretability of these models is equally vital. Without clear, understandable outputs, building trust with regulators and compliance officers remains a significant challenge.
    4. Potential Limitations and Pitfalls: As powerful as generative AI can be, it’s not infallible. These models can inherit biases and inconsistencies from their training data, leading to unreliable or even harmful outputs. It’s a risk that cannot be ignored. Implementing robust techniques for bias detection and mitigation, alongside rigorous risk assessment and continuous monitoring, is essential to ensure that generative AI is used safely and responsibly in financial crime compliance.
Navigating these challenges is no small task, but it’s a necessary journey. To truly unlock the potential of generative AI in combating financial crime, we must address these obstacles head-on, with a clear strategy and a commitment to innovation.
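To ground the first of these hurdles, here is a minimal preprocessing sketch using scikit-learn: imputing missing values, scaling numeric features, and one-hot encoding categoricals before any model training. The column names and the tiny inline dataset are purely illustrative.

```python
# Illustrative preprocessing pipeline: impute missing values, scale numeric
# features, and one-hot encode categoricals. Columns and data are examples only.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric = ["amount", "hour_of_day", "days_since_last_txn"]
categorical = ["channel", "merchant_category", "country"]

preprocess = ColumnTransformer([
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), numeric),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), categorical),
])

# Tiny stand-in dataset with deliberate gaps to show the imputation at work.
df = pd.DataFrame({
    "amount": [120.0, np.nan, 9800.5],
    "hour_of_day": [14, 2, np.nan],
    "days_since_last_txn": [1.0, 30.0, 0.2],
    "channel": ["mobile", "branch", np.nan],
    "merchant_category": ["grocery", "jewellery", "grocery"],
    "country": ["SG", "SG", "PH"],
})

X = preprocess.fit_transform(df)
print(X.shape)  # cleaned, scaled, and encoded feature matrix ready for training
```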

Regulatory and Ethical Considerations

As we venture into the integration of generative AI in anti-money laundering (AML) and counter-financing of terrorism (CFT) systems, it’s not just the technological challenges that we need to be mindful of. The regulatory and ethical landscape presents its own set of complexities, demanding careful navigation and proactive engagement with stakeholders.

Regulatory Compliance

The deployment of generative AI in AML/CFT isn’t simply about adopting new technology—it’s about doing so within a framework that respects the rule of law. This means a close, ongoing dialogue with regulatory bodies to ensure that these advanced systems align with existing laws, guidelines, and best practices. Establishing clear standards for the development, validation, and governance of AI models is not just advisable; it’s essential. Without a robust regulatory framework, even the most sophisticated AI models could become liabilities rather than assets.

Ethical AI and Fairness

In the realm of financial crime compliance, the stakes are high. Decisions influenced by AI models can have significant impacts on individuals and businesses, which makes fairness and non-discrimination more than just ethical considerations—they are imperatives. Generative AI systems must be rigorously tested for biases and unintended consequences. This means implementing rigorous validation processes to ensure that these models uphold the principles of ethical AI and fairness, especially in high-stakes scenarios. We’re not just building technology; we’re building trust.

Privacy and Data Protection

With generative AI comes the promise of advanced capabilities like synthetic data generation and privacy-preserving analytics. But these innovations must be handled with care. Compliance with data protection regulations and the safeguarding of customer privacy rights should be at the forefront of any implementation strategy. Clear policies and robust safeguards are crucial to protect sensitive financial information, ensuring that the deployment of these models doesn’t inadvertently compromise the very data they are designed to protect.

Model Security and Robustness

Generative AI models, such as LLMs and GANs, bring immense power but also vulnerabilities. The risk of adversarial attacks or model extraction cannot be overlooked. To safeguard the integrity and confidentiality of these models, robust security measures need to be put in place. Techniques like differential privacy, watermarking, and the use of secure enclaves should be explored and implemented to protect these systems from malicious exploitation. It’s about creating not just intelligent models, but resilient ones.


Gen AI in Tookitaki FinCense

Tookitaki’s FinCense platform is pioneering the use of Generative AI to redefine financial crime compliance. We are actively collaborating with our clients through lighthouse projects to put the advanced Gen AI capabilities of FinCense to the test. Powered by a local LLM engine built on Llama models, FinCense introduces a suite of features designed to transform the compliance landscape.

One standout feature is the Smart Disposition Engine, which automates the handling of alerts with remarkable efficiency. By incorporating rules, policy checklists, and reporting in local languages, this engine streamlines the entire alert management process, cutting manual investigation time by an impressive 50-60%. It’s a game-changer for compliance teams, enabling them to focus on complex cases rather than getting bogged down in routine tasks.

Then there’s FinMate, an AI investigation copilot tailored to the unique needs of AML compliance professionals. Built on a local LLM, FinMate serves as an intelligent assistant, offering real-time support during investigations. It doesn’t just provide information; it delivers actionable insights and suggestions that help compliance teams navigate through cases more swiftly and effectively.

Moreover, the platform’s Local Language Reporting feature enhances its usability across diverse regions. By supporting multiple local languages, FinCense ensures that compliance teams can manage alerts and generate reports seamlessly, regardless of their location. This localization capability is more than just a convenience—it’s a critical tool that enables teams to work more effectively within their regulatory environments.

With these cutting-edge features, Tookitaki’s FinCense platform is not just keeping up with the evolution of financial crime compliance—it’s leading the way, setting new standards for what’s possible with Generative AI in this critical field.

Final Thoughts

The future of financial crime compliance is set to be revolutionized by the advancements in AI and ML. Over the next few years, generative AI will likely become an integral part of our arsenal, pushing the boundaries of what’s possible in detecting and preventing illicit activities. Large Language Models (LLMs) like GPT-3 and its successors are not just promising—they are poised to transform the landscape. From automating the generation of Suspicious Activity Reports (SARs) to conducting in-depth risk assessments and offering real-time decision support to compliance analysts, these models are redefining what’s possible in the AML/CFT domain.

But LLMs are only part of the equation. Generative Adversarial Networks (GANs) are also emerging as a game-changer. Their ability to create synthetic, privacy-preserving datasets is a breakthrough for financial institutions struggling with limited access to real-world data. These synthetic datasets can be used to train and test machine learning models, making it easier to simulate and study complex financial crime scenarios without compromising sensitive information.

The real magic, however, lies in the convergence of LLMs and GANs. Imagine a system that can not only detect anomalies but also generate synthetic transaction narratives or provide explanations for suspicious activities. This combination could significantly enhance the interpretability and transparency of AML/CFT systems, making it easier for compliance teams to understand and act on the insights provided by these advanced models.

Embracing these technological advancements isn’t just an option—it’s a necessity. The challenge will be in implementing them responsibly, ensuring they are used to build a more secure and transparent financial ecosystem. This will require a collaborative effort between researchers, financial institutions, and regulatory bodies. Only by working together can we address the technical and ethical challenges that come with deploying generative AI, ensuring that these powerful tools are used to their full potential—responsibly and effectively.

The road ahead is filled with promise, but it’s also lined with challenges. By navigating this path with care and foresight, we can leverage generative AI to not only stay ahead of financial criminals but to create a future where the financial system is safer and more resilient than ever before.


Our Thought Leadership Guides

Blogs
18 Dec 2025
6 min read

Beyond the Ratings: What FATF’s December 2025 Review Means for Malaysia’s AML Playbook

When the Financial Action Task Force publishes a Mutual Evaluation Report, it is not simply assessing the existence of laws and controls. It is examining whether those measures are producing real, demonstrable outcomes across the financial system.

The FATF Mutual Evaluation Report on Malaysia, published in December 2025, sends a clear signal in this regard. Beyond the headline ratings, the evaluation focuses on how effectively money laundering and terrorist financing risks are understood, prioritised, and mitigated in practice.

For banks, fintechs, and compliance teams operating in Malaysia, the real value of the report lies in these signals. They indicate where supervisory scrutiny is likely to intensify and where institutions are expected to demonstrate stronger alignment between risk understanding and operational controls.


What a FATF Mutual Evaluation Is Really Testing

A FATF Mutual Evaluation assesses two interconnected dimensions.

The first is technical compliance, which looks at whether the legal and institutional framework aligns with FATF Recommendations.

The second, and increasingly decisive, dimension is effectiveness. This examines whether authorities and reporting entities are achieving intended outcomes, including timely detection, meaningful disruption of illicit financial activity, and effective use of financial intelligence.

In recent evaluation cycles, FATF has made it clear that strong frameworks alone are insufficient. Supervisors are looking for evidence that risks are properly understood and that controls are proportionate, targeted, and working as intended. Malaysia’s December 2025 evaluation reflects this emphasis throughout.

Why Malaysia’s Evaluation Carries Regional Significance

Malaysia plays a central role in Southeast Asia’s financial system. It supports significant volumes of cross-border trade, remittance flows, and correspondent banking activity, alongside a rapidly growing digital payments and fintech ecosystem.

This positioning increases exposure to complex and evolving money laundering risks. FATF’s evaluation recognises Malaysia’s progress in strengthening its framework, while also highlighting the need for continued focus on risk-based implementation as financial crime becomes more cross-border, more technology-driven, and more fragmented.

For financial institutions, this reinforces the expectation that controls must evolve alongside the risk landscape, not lag behind it.

Key Signals Emerging from the December 2025 Evaluation

Effectiveness Takes Precedence Over Formal Compliance

One of the strongest signals from the evaluation is the emphasis on demonstrable effectiveness.

Institutions are expected to show that:

  • Higher-risk activities are identified and prioritised
  • Detection mechanisms are capable of identifying complex and layered activity
  • Alerts, investigations, and reporting are aligned with real risk exposure
  • Financial intelligence leads to meaningful outcomes

Controls that exist but do not clearly contribute to these outcomes are unlikely to meet supervisory expectations.

Risk Understanding Must Drive Control Design

The evaluation reinforces that a risk-based approach must extend beyond documentation and enterprise risk assessments.

Financial institutions are expected to:

  • Clearly articulate their understanding of inherent and residual risks
  • Translate that understanding into targeted monitoring scenarios
  • Adjust controls as new products, delivery channels, and typologies emerge

Generic or static monitoring frameworks risk being viewed as insufficiently aligned with actual exposure.

Ongoing Focus on Cross-Border and Predicate Offence Risks

Consistent with Malaysia’s role as a regional financial hub, the evaluation places continued emphasis on cross-border risks.

These include exposure to:

  • Trade-based money laundering
  • Proceeds linked to organised crime and corruption
  • Cross-border remittances and correspondent banking relationships

FATF’s focus here signals that institutions must demonstrate not just transaction monitoring coverage, but the ability to interpret cross-border activity in context and identify suspicious patterns that span multiple channels.

Expanding Attention on Non-Bank and Digital Channels

While banks remain central to Malaysia’s AML framework, the evaluation highlights increasing supervisory attention on:

  • Payment institutions
  • Digital platforms
  • Designated non-financial businesses and professions

As risks shift across the financial ecosystem, regulators expect banks and fintechs to understand how their exposures interact with activity outside traditional banking channels.

Practical Implications for Malaysian Financial Institutions

For compliance teams, the December 2025 evaluation translates into several operational realities.

Supervisory Engagement Will Be More Outcome-Focused

Regulators are likely to probe:

  • Whether monitoring scenarios reflect current risk assessments
  • How detection logic has evolved over time
  • What evidence demonstrates that controls are effective

Institutions that cannot clearly explain how their controls address specific risks may face increased scrutiny.

Alert Volumes Will Be Scrutinised for Quality

High alert volumes are no longer viewed as evidence of strong controls.

Supervisors are increasingly focused on:

  • The relevance of alerts generated
  • The quality of investigations
  • The timeliness and usefulness of suspicious transaction reporting

This places pressure on institutions to improve signal quality while managing operational efficiency.

Static Monitoring Frameworks Will Be Challenged

The pace at which money laundering typologies evolve continues to accelerate.

Institutions that rely on:

  • Infrequent scenario reviews
  • Manual rule tuning
  • Disconnected monitoring systems

may struggle to demonstrate timely adaptation to emerging risks highlighted through national risk assessments or supervisory feedback.


Common Execution Gaps Highlighted Through FATF Evaluations

Across jurisdictions, FATF evaluations frequently expose similar challenges.

Fragmented Monitoring Approaches

Siloed AML and fraud systems limit the ability to see end-to-end money flows and behavioural patterns.

Slow Adaptation to Emerging Typologies

Scenario libraries can lag behind real-world risk evolution, particularly without access to shared intelligence.

Operational Strain from False Positives

Excessive alert volumes reduce investigator effectiveness and dilute regulatory reporting quality.

Explainability and Governance Limitations

Institutions must be able to explain why controls behave as they do. Opaque or poorly governed models raise supervisory concerns.

What FATF Is Signalling About the Next Phase

While not always stated explicitly, the evaluation reflects expectations that institutions will continue to mature their AML capabilities.

Supervisors are looking for evidence of:

  • Continuous improvement
  • Learning over time
  • Strong governance over model changes
  • Clear auditability and explainability

This represents a shift from compliance as a static obligation to compliance as an evolving capability.

Translating Supervisory Expectations into Practice

To meet these expectations, many institutions are adopting modern AML approaches built around scenario-led detection, continuous refinement, and strong governance.

Such approaches enable compliance teams to:

  • Respond more quickly to emerging risks
  • Improve detection quality while managing noise
  • Maintain transparency and regulatory confidence

Platforms that combine shared intelligence, explainable analytics, and unified monitoring across AML and fraud domains align closely with the direction signalled by recent FATF evaluations. Solutions such as Tookitaki’s FinCense illustrate how technology can support these outcomes while maintaining auditability and supervisory trust.

From Compliance to Confidence

The FATF Mutual Evaluation of Malaysia should be viewed as more than a formal assessment. It is a forward-looking signal.

Institutions that treat it purely as a compliance exercise may meet minimum standards. Those that use it as a reference point for strengthening risk understanding and control effectiveness are better positioned for sustained supervisory confidence.

Final Reflection

FATF evaluations increasingly focus on whether systems work in practice, not just whether they exist.

For Malaysian banks and fintechs, the December 2025 review reinforces a clear message. The institutions best prepared for the next supervisory cycle will be those that can demonstrate strong risk understanding, effective controls, and the ability to adapt as threats evolve.

Blogs
16 Dec 2025
6 min read

RBNZ vs ASB: Why New Zealand’s AML Expectations Just Changed

In December 2025, the Reserve Bank of New Zealand sent one of its clearest signals yet to the financial sector. By filing civil proceedings against ASB Bank for breaches of the AML/CFT Act, the regulator made it clear that compliance in name alone is no longer sufficient. What matters now is whether anti-money laundering controls actually work in practice.

This was not a case about proven money laundering or terrorism financing. It was about operational effectiveness, timeliness, and accountability. For banks and financial institutions across New Zealand, that distinction is significant.

The action marks a turning point in how AML compliance will be assessed going forward. It reflects a shift from reviewing policies and frameworks to testing whether institutions can demonstrate real-world outcomes under scrutiny.


What Happened and Why It Matters

The Reserve Bank’s filing outlines multiple failures by ASB to meet core obligations under the AML/CFT Act. These included shortcomings in maintaining an effective AML programme, carrying out ongoing customer due diligence, applying enhanced due diligence when required, and reporting suspicious activity within mandated timeframes.

ASB admitted liability across all causes of action and cooperated with the regulator. The Reserve Bank also clarified that it was not alleging ASB knowingly facilitated money laundering or terrorism financing.

This clarification is important. The case is not about intent or criminal involvement. It is about whether an institution’s AML framework operated effectively and consistently over time.

For the wider market, this is a regulatory signal rather than an isolated enforcement action.

What the Reserve Bank Is Really Signalling

Read carefully, the Reserve Bank’s message goes beyond one bank. It reflects a broader recalibration of supervisory expectations.

First, AML effectiveness is now central. Regulators are no longer satisfied with documented programmes alone. Institutions must show that controls detect risk, escalate appropriately, and lead to timely action.

Second, speed matters. Delays in suspicious transaction reporting, extended remediation timelines, and slow responses to emerging risks are viewed as material failures, not operational inconveniences.

Third, governance and accountability are under the spotlight. AML effectiveness is not just a technology issue. It reflects resourcing decisions, prioritisation, escalation pathways, and senior oversight.

This mirrors developments in other comparable jurisdictions, including Australia, Singapore, and the United Kingdom, where regulators are increasingly outcome-focused.

Why This Is a Critical Moment for New Zealand’s Financial System

New Zealand’s AML regime has matured significantly over the past decade. Financial institutions have invested heavily in frameworks, teams, and tools. Yet the RBNZ action highlights a persistent gap between programme design and day-to-day execution.

This matters for several reasons.

Public confidence in the financial system depends not only on preventing crime, but on the belief that institutions can detect and respond to risk quickly and effectively.

From an international perspective, New Zealand’s reputation as a well-regulated financial centre supports correspondent banking relationships and cross-border trust. Supervisory actions like this are closely observed beyond domestic borders.

For compliance teams, the message is clear. Supervisory reviews will increasingly test how AML frameworks perform under real-world conditions, not how well they are documented.

Common AML Gaps Brought to Light

While the specifics of each institution differ, the issues raised by the Reserve Bank are widely recognised across the industry.

One common challenge is fragmented visibility. Customer risk data, transaction monitoring outputs, and historical alerts often sit in separate systems. This makes it difficult to build a unified view of risk or spot patterns over time.

Another challenge is static monitoring logic. Rule-based thresholds that are rarely reviewed struggle to keep pace with evolving typologies, particularly in an environment shaped by real-time payments and digital channels.

Ongoing customer due diligence also remains difficult to operationalise at scale. While onboarding checks are often robust, keeping customer risk profiles current requires continuous recalibration based on behaviour, exposure, and external intelligence.

Finally, reporting delays are frequently driven by workflow inefficiencies. Manual reviews, alert backlogs, and inconsistent escalation criteria can all slow the path from detection to reporting.

Individually, these issues may appear manageable. Together, they undermine AML effectiveness.

Why Traditional AML Models Are Under Strain

Many of these gaps stem from legacy AML operating models.

Traditional architectures rely heavily on static rules, manual investigations, and institution-specific intelligence. This approach struggles in an environment where financial crime is increasingly fast-moving, cross-border, and digitally enabled.

Compliance teams face persistent pressure. Alert volumes remain high, false positives consume investigator capacity, and regulatory expectations continue to rise. When resources are stretched, timeliness becomes harder to maintain.

Explainability is another challenge. Regulators expect institutions to articulate why decisions were made, not just that actions occurred. Systems that operate as black boxes make this difficult.

The result is a growing disconnect between regulatory expectations and operational reality.

The Shift Toward Effectiveness-Led AML

The RBNZ action reflects a broader move toward effectiveness-led AML supervision.

Under this approach, success is measured by outcomes rather than intent. Regulators are asking:

  • Are risks identified early or only after escalation?
  • Are enhanced due diligence triggers applied consistently?
  • Are suspicious activities reported promptly and with sufficient context?
  • Can institutions clearly explain and evidence their decisions?

Answering these questions requires more than incremental improvements. It requires a rethinking of how AML intelligence is sourced, applied, and validated.


Rethinking AML for the New Zealand Context

Modernising AML does not mean abandoning regulatory principles. It means strengthening how those principles are executed.

One important shift is toward scenario-driven detection. Instead of relying solely on generic thresholds, institutions increasingly use typologies grounded in real-world crime patterns. This aligns monitoring logic more closely with how financial crime actually occurs.

Another shift is toward continuous risk recalibration. Customer risk is not static. Systems that update risk profiles dynamically support more effective ongoing due diligence and reduce downstream escalation issues.

Collaboration also plays a growing role. Financial crime does not respect institutional boundaries. Access to shared intelligence helps institutions stay ahead of emerging threats rather than reacting in isolation.

Finally, transparency matters. Regulators expect clear, auditable logic that explains how risks are assessed and decisions are made.

Where Technology Can Support Better Outcomes

Technology alone does not solve AML challenges, but the right architecture can materially improve effectiveness.

Modern AML platforms increasingly support end-to-end workflows, covering onboarding, screening, transaction monitoring, risk scoring, investigation, and reporting within a connected environment.

Advanced analytics and machine learning can help reduce false positives while improving detection quality, when applied carefully and transparently.

Equally important is the ability to incorporate new intelligence quickly. Systems that can ingest updated typologies without lengthy redevelopment cycles are better suited to evolving risk landscapes.

How Tookitaki Supports This Evolution

Within this shifting environment, Tookitaki supports institutions as they move toward more effective AML outcomes.

FinCense, Tookitaki’s end-to-end compliance platform, is designed to support the full AML lifecycle, from real-time onboarding and screening to transaction monitoring, dynamic risk scoring, investigation, and reporting.

A distinguishing element is its connection to the AFC Ecosystem. This is a collaborative intelligence network where compliance professionals contribute, validate, and refine real-world scenarios based on emerging risks. These scenarios are continuously updated, allowing institutions to benefit from collective insights rather than relying solely on internal discovery.

For New Zealand institutions, this approach supports regulatory priorities around effectiveness, timeliness, and explainability. It strengthens detection quality while maintaining transparency and governance.

Importantly, technology is positioned as an enabler of better outcomes, not a substitute for oversight or accountability.

What Compliance Leaders in New Zealand Should Be Asking Now

In light of the RBNZ action, there are several questions worth asking internally.

  • Can we evidence the effectiveness of our AML controls, not just their existence?
  • How quickly do alerts move from detection to suspicious transaction reporting?
  • Are enhanced due diligence triggers dynamic or static?
  • Do we regularly test monitoring logic against emerging typologies?
  • Could we confidently explain our AML decisions to the regulator tomorrow?

These questions are not about fault-finding. They are about readiness.

Looking Ahead

The Reserve Bank’s action against ASB marks a clear shift in New Zealand’s AML supervisory landscape. Effectiveness, timeliness, and accountability are now firmly in focus.

For financial institutions, this is both a challenge and an opportunity. Those that proactively strengthen their AML operating models will be better positioned to meet regulatory expectations and build long-term trust.

Ultimately, the lesson extends beyond one case. AML compliance in New Zealand is entering a new phase, one where outcomes matter as much as intent. Institutions that adapt early will define the next standard for financial crime prevention in the market.

Blogs
12 Dec 2025
7 min read

AFASA Explained: What the Philippines’ New Anti-Scam Law Really Means for Banks, Fintechs, and Consumers

If there is one thing everyone in the financial industry felt in the last few years, it was the speed at which scams evolved. Fraudsters became smarter, attacks became faster, and stolen funds moved through dozens of accounts in seconds. Consumers were losing life savings. Banks and fintechs were overwhelmed. And regulators had to act.

This is the backdrop behind the Anti-Financial Account Scamming Act (AFASA), Republic Act No. 12010 — the Philippines’ most robust anti-scam law to date. AFASA reshapes how financial institutions detect fraud, protect accounts, coordinate with one another, and respond to disputes.

But while many have written about the law, most explanations feel overly legalistic or too high-level. What institutions really need is a practical, human-friendly breakdown of what AFASA truly means in day-to-day operations.

This blog does exactly that.


What Is AFASA? A Simple Explanation

AFASA exists for a clear purpose: to protect consumers from rapidly evolving digital fraud. The law recognises that as more Filipinos use e-wallets, online banking, and instant payments, scammers have gained more opportunities to exploit vulnerabilities.

Under AFASA, the term financial account is broad. It includes:

  • Bank deposit accounts
  • Credit card and investment accounts
  • E-wallets
  • Any account used to access financial products and services

The law focuses on three main categories of offences:

1. Money Muling

This covers the buying, selling, renting, lending, recruiting, or using of financial accounts to receive or move illicit funds. Many young people and jobseekers were unknowingly lured into mule networks — something AFASA squarely targets.

2. Social Engineering Schemes

From phishing to impersonation, scammers have mastered psychological manipulation. AFASA penalises the use of deception to obtain sensitive information or access accounts.

3. Digital Fraud and Account Tampering

This includes unauthorised transfers, synthetic identities, hacking incidents, and scams executed through electronic communication channels.

In short: AFASA criminalises both the scammer and the infrastructure used for the scam — the accounts, the networks, and the people recruited into them.

Why AFASA Became Necessary

Scams in the Philippines reached a point where traditional fraud rules, old operational processes, and siloed detection systems were not enough.

Scam Trend 1: Social engineering became hyper-personal

Fraudsters learned to sound like bank agents, government officers, delivery riders, HR recruiters — even loved ones. OTP harvesting and remote access scams became common.

Scam Trend 2: Real-time payments made fraud instant

InstaPay and other instant channels made moving money convenient — but also made stolen funds disappear before anyone could react.

Scam Trend 3: Mule networks became organised

Criminal groups built structured pipelines of mule accounts, often recruiting vulnerable populations such as students, OFWs, and low-income households.

Scam Trend 4: E-wallet adoption outpaced awareness

A fast-growing digital economy meant millions of first-time digital users were exposed to sophisticated scams they were not prepared for.

AFASA was designed to break this cycle and create a safer digital financial environment.

New Responsibilities for Banks and Fintechs Under AFASA

AFASA introduces significant changes to how institutions must protect accounts. It is not just a compliance exercise — it demands real operational transformation.

These responsibilities are further detailed in new BSP circulars that accompany the law.

1. Stronger IT Risk Controls

Financial institutions must now implement advanced fraud and cybersecurity controls such as:

  • Device fingerprinting
  • Geolocation monitoring
  • Bot detection
  • Blacklist screening for devices, merchants, and IPs

These measures allow institutions to understand who is accessing accounts, how, and from where — giving them the tools to detect anomalies before fraud occurs.

2. Mandatory Fraud Management Systems (FMS)

Both financial institutions and clearing switch operators (including InstaPay and PESONet) must operate real-time systems that:

  • Flag suspicious activity
  • Block disputed or high-risk transactions
  • Detect behavioural anomalies

This ensures that fraud monitoring is consistent across the payment ecosystem — not just within individual institutions.

3. Prohibition on unsolicited clickable links

Institutions can no longer send customers clickable links or QR codes unless the customer has explicitly initiated the interaction. This directly tackles phishing attacks that rely on spoofed messages.

4. Continuous customer awareness

Banks and fintechs must actively educate customers about:

  • Cyber hygiene
  • Secure account practices
  • Fraud patterns and red flags
  • How to report incidents quickly

Customer education is no longer optional — it is a formally recognised part of fraud prevention.

5. Shared accountability framework

AFASA moves away from the old “blame the victim” mentality. Fraud prevention is now a shared responsibility across:

  • Financial institutions
  • Account owners
  • Third-party service providers

This model recognises that no single party can combat fraud alone.

The Heart of AFASA: Temporary Holding of Funds & Coordinated Verification

Among all the changes introduced by AFASA, this is the one that represents a true paradigm shift.

Previously, once stolen funds were transferred out, recovery was almost impossible. Banks had little authority to stop or hold the movement of funds.

AFASA changes that.

Temporary Holding of Funds

Financial institutions now have both the authority and the obligation to temporarily hold disputed funds for up to 30 days, covering the initial hold and any permitted extension. The purpose is simple: freeze the money before it disappears.

Triggers for Temporary Holding

A hold can be initiated through:

  • A victim’s complaint
  • A suspicious transaction flagged by the institution’s FMS
  • A request from another financial institution

This ensures that action can be taken proactively or reactively depending on the scenario.

Coordinated Verification Process

Once funds are held, institutions must immediately begin a coordinated process that involves:

  • The originating institution
  • Receiving institutions
  • Clearing entities
  • The account owners involved

This process validates whether the transaction was legitimate or fraudulent. It creates a formal, structured, and time-bound mechanism for investigation.

Detailed Transaction Logs Are Now Mandatory

Institutions must maintain comprehensive transaction logs — including device information, authentication events, IP addresses, timestamps, password changes, and more. Logs must be retained for at least five years.

This gives investigators the ability to reconstruct transactions and understand the full context of a disputed transfer.

An Industry-Wide Protocol Must Be Built

AFASA requires the entire industry to co-develop a unified protocol for handling disputed funds and verification. This ensures consistency, promotes collaboration, and reduces delays during investigations.

This is one of the most forward-thinking aspects of the law — and one that will significantly raise the standard of scam response in the country.

BSP’s Expanded Powers Through CAPO

AFASA also strengthens regulatory oversight.

BSP’s Consumer Account Protection Office (CAPO) now has the authority to:

  • Conduct inquiries into financial accounts suspected of involvement in fraud
  • Access financial account information required to investigate prohibited acts
  • Coordinate with law enforcement agencies

Crucially, during these inquiries, bank secrecy laws and the Data Privacy Act do not apply.

This is a major shift that reflects the urgency of combating digital fraud.


Penalties Under AFASA

AFASA imposes serious penalties to deter both scammers and enablers:

1. Criminal penalties for money muling

Anyone who knowingly participates in using, recruiting, or providing accounts for illicit transfers is liable to face imprisonment and fines.

2. Liability for failing to protect funds

Institutions may be held accountable if they fail to properly execute a temporary hold when a dispute is raised.

3. Penalties for improper holding

Institutions that hold funds without valid reason may also face sanctions.

4. Penalties for malicious reporting

Consumers or individuals who intentionally file false reports may also be punished.

5. Administrative sanctions

Financial institutions that fail to comply with AFASA requirements may be penalised by BSP.

The penalties underscore the seriousness with which the government views scam prevention.

What AFASA Means for Banks and Fintechs: The Practical Reality

Here’s what changes on the ground:

1. Fraud detection becomes real-time — not after-the-fact

Institutions need modern systems that can flag abnormal behaviour within seconds.

2. Dispute response becomes faster

Timeframes are tight, and institutions need streamlined internal workflows.

3. Collaboration is no longer optional

Banks, e-wallets, payment operators, and regulators must work as one system.

4. Operational pressure increases

Fraud teams must handle verification, logging, documentation, and communication under strict timelines.

5. Liability is higher

Institutions may be held responsible for lapses in protection, detection, or response.

6. Technology uplift becomes non-negotiable

Legacy systems will struggle to meet AFASA’s requirements — particularly around logging, behavioural analytics, and real-time detection.

How Tookitaki Helps Institutions Align With AFASA

AFASA sets a higher bar for fraud prevention. Tookitaki’s role as the Trust Layer to Fight Financial Crime helps institutions strengthen their AFASA readiness with intelligent, real-time, and collaborative capabilities.

1. Early detection of money mule networks

Through the AFC Ecosystem’s collective intelligence, institutions can detect mule-like patterns sooner and prevent illicit transactions before they spread across the system.

2. Real-time monitoring aligned with AFASA needs

FinCense’s advanced transaction monitoring engine flags suspicious activity instantly — helping institutions support temporary holding procedures and respond within required timelines.

3. Deep behavioural intelligence and comprehensive logs

Tookitaki provides the contextual understanding needed to trace disputed transfers, reconstruct transaction paths, and support investigative workflows.

4. Agentic AI to accelerate investigations

FinMate, the AI investigation copilot, streamlines case analysis, surfaces insights quickly, and reduces investigation workload — especially crucial when time-sensitive AFASA processes are triggered.

5. Federated learning for privacy-preserving model improvement

Institutions can enhance detection models without sharing raw data, aligning with AFASA’s broader emphasis on secure and responsible handling of financial information.

Together, these capabilities enable banks and fintechs to strengthen fraud defences, modernise their operations, and protect financial accounts with confidence.

Looking Ahead: AFASA’s Long-Term Impact

AFASA is not a one-time regulatory update — it is a structural shift in how the Philippine financial ecosystem handles scams.

Expect to see:

  • More real-time fraud rules and guidance
  • Industry-wide technical standards for dispute management
  • Higher expectations for digital onboarding and authentication
  • Increased coordination between banks, fintechs, and regulators
  • Greater focus on intelligence-sharing and network-level detection

Most importantly, AFASA lays the foundation for a safer, more trusted digital economy — one where consumers have confidence that institutions and regulators can protect them from fast-evolving threats.

Conclusion

AFASA represents a turning point in the Philippines’ fight against financial scams. It transforms how institutions detect fraud, protect accounts, collaborate with others, and support customers. For banks and fintechs, the message is clear: the era of passive fraud response is over.

The institutions that will thrive under AFASA are those that embrace real-time intelligence, strengthen operational resilience, and adopt technology that enables them to stay ahead of criminal innovation.

The Philippines has taken a bold step toward a safer financial system — and now, it’s time for the industry to match that ambition.
