In the digital age, fraud investigations have evolved, moving beyond traditional methods to embrace technology's potential.
Data analysis, artificial intelligence, and digital forensics are now integral parts of the process. These tools help detect patterns, anomalies, and fraudulent activities that might otherwise go unnoticed.
But it's not just about detection. Technology also plays a crucial role in fraud prevention. From predictive analytics to blockchain, innovative solutions are being used to forecast and prevent potential fraud risks.
This article explores the role of technology in fraud investigations. It delves into how it's transforming the landscape, making fraud examination more efficient and effective.
Whether you're a business professional, a legal expert, or simply interested in the intersection of technology and fraud investigations, this piece will provide valuable insights. Let's step into the world of tech-driven fraud investigations.
{{cta-first}}
Understanding Fraud Investigations
Fraud investigations involve the use of various techniques to detect and prevent fraudulent activities. These activities can range from financial fraud, such as embezzlement and money laundering, to identity theft and cybercrime.
With the rise of digital transactions and online activities, the scope of fraud has expanded. This has made fraud investigations more complex, necessitating the use of advanced technology. From data analysis to digital forensics, technology is now a key player in the fight against fraud.
The Evolution of Fraud Examination with Technology
The traditional methods of fraud examination involved manual processes and paper trails. Investigators would sift through stacks of documents, looking for discrepancies and signs of fraudulent activity. This was a time-consuming and labor-intensive process, with a high risk of human error.
With the advent of technology, these methods have been transformed. Today, fraud examination involves the use of sophisticated software and algorithms to analyze large volumes of data. This not only increases the efficiency and accuracy of fraud detection but also allows for the identification of complex fraud schemes that would be difficult to detect manually.
Key Technological Tools in Fraud Investigations
In the realm of fraud investigations, several technological tools have emerged as game-changers. These tools not only streamline the investigation process but also enhance the accuracy and effectiveness of fraud detection.
Data analysis tools, artificial intelligence, digital forensics, and blockchain technology are among the key technological tools used in fraud investigations. Each of these tools plays a unique role in detecting, preventing, and investigating fraudulent activities.
- Data analysis tools help in identifying patterns and anomalies indicative of fraud.
- Artificial intelligence and machine learning algorithms can detect complex fraud schemes.
- Digital forensics is crucial in gathering and preserving electronic evidence.
- Blockchain technology aids in preventing and tracing fraudulent transactions.
Data Analysis and Anomaly Detection
Data analysis is a powerful tool in fraud investigations. It involves the use of software to analyze large volumes of data, looking for patterns and anomalies that could indicate fraudulent activity. This process is much faster and more accurate than manual analysis, allowing investigators to detect fraud more efficiently.
Anomaly detection systems, a subset of data analysis, are particularly useful in fraud investigations. These systems flag unusual activities or transactions that deviate from the norm, alerting investigators to potential fraud.
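As a simple illustration of the idea, the sketch below flags transactions that deviate sharply from an account's historical norm using a z-score. The amounts and threshold are hypothetical, and production systems use far richer features than a single statistic — this is only a minimal sketch of the principle.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, z_threshold=3.0):
    """Flag transaction amounts that deviate sharply from the account's norm."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation in history, nothing to compare against
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

# Hypothetical account history: routine payments plus one outlier
history = [120, 95, 130, 110, 105, 98, 125, 9_500]
print(flag_anomalies(history, z_threshold=2.0))  # [9500]
```

In practice the baseline would be built per customer segment, and the flagged items would feed an investigation queue rather than an automatic decision.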
Artificial Intelligence and Machine Learning
Artificial intelligence (AI) and machine learning (ML) have revolutionized fraud investigations. These technologies can analyze vast amounts of data, learning from patterns and making predictions about future behavior. This makes them incredibly effective at detecting complex fraud schemes that would be difficult to identify manually.
Moreover, AI and ML algorithms can adapt and evolve over time. This means they can keep up with changing fraud patterns, making them a powerful tool in the fight against fraud.
Digital Forensics and E-Discovery
Digital forensics plays a crucial role in fraud investigations. This involves the collection and analysis of electronic data, which can provide valuable evidence in a fraud investigation. Digital forensics tools can recover deleted or hidden data, track digital footprints, and preserve electronic evidence for legal proceedings.
E-discovery, or electronic discovery, is a related field that involves the identification, collection, and production of electronic evidence. This is particularly important in legal proceedings related to fraud, where electronic evidence can be crucial.
Blockchain and Cryptocurrency Tracing
Blockchain technology has a unique role in fraud investigations, particularly in cases involving cryptocurrency. Blockchain provides a transparent and immutable record of transactions, making it an effective tool for tracing fraudulent transactions.
Moreover, blockchain can help prevent fraud by providing a secure and tamper-proof platform for transactions. This makes it increasingly popular in sectors such as finance and supply chain, where fraud prevention is a top priority.
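Because a public ledger records every transfer immutably, tracing can be modelled as a walk over a transaction graph. The sketch below uses a hypothetical in-memory ledger rather than a real chain API; the addresses are invented for illustration.

```python
from collections import deque

# Hypothetical ledger: address -> list of addresses it sent funds to
ledger = {
    "victim_wallet": ["mixer_1"],
    "mixer_1": ["mule_a", "mule_b"],
    "mule_a": ["exchange_x"],
    "mule_b": ["exchange_x"],
}

def trace_funds(ledger, start):
    """Breadth-first walk over outgoing transfers from `start`."""
    seen, queue = {start}, deque([start])
    reached = []
    while queue:
        addr = queue.popleft()
        for nxt in ledger.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                reached.append(nxt)
                queue.append(nxt)
    return reached

print(trace_funds(ledger, "victim_wallet"))
# ['mixer_1', 'mule_a', 'mule_b', 'exchange_x']
```

Real tracing tools add amounts, timestamps, and clustering heuristics, but the core operation — following funds hop by hop to an off-ramp such as an exchange — is exactly this graph walk.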
FMLA Fraud Investigations and Technology's Role
Fraud investigations under the Family and Medical Leave Act (FMLA) are a unique subset of fraud examination. They involve detecting fraudulent claims for leave under the FMLA, which can be a complex and time-consuming process. However, technology has proven to be a valuable ally in these investigations.
Data analysis tools, for instance, can help identify patterns and anomalies in leave requests, flagging potential fraud. Digital forensics can uncover electronic evidence of fraud, such as falsified documents or emails. Thus, technology not only enhances the efficiency of FMLA fraud investigations but also their effectiveness in detecting and preventing fraud.
Challenges and Ethical Considerations in Tech-Driven Investigations
While technology has revolutionized fraud investigations, it also presents new challenges. For instance, the vast amount of data that needs to be analyzed can be overwhelming. Additionally, the rapid pace of technological advancements means that investigators must continually update their skills and tools to stay effective.
Moreover, the use of technology in fraud investigations raises ethical considerations. Investigators must balance the need for thorough investigations with respect for privacy rights. They must also ensure that the methods used to gather and analyze data are legal and ethical, to maintain the integrity of the investigation process.
Case Studies: Success Stories of Technology in Fraud Investigations
There are numerous instances where technology has played a pivotal role in fraud investigations. For example, in a recent case, a large corporation was able to detect an internal fraud scheme through the use of data analysis tools. The software flagged unusual patterns in financial transactions, leading to a thorough investigation and the eventual apprehension of the culprits.
In another case, a government agency used artificial intelligence to detect fraudulent claims in a public benefits program. The AI system was able to identify anomalies in the application data, leading to the discovery of a large-scale fraud operation.
Preparing for the Future: Trends and Predictions in Fraud Investigations
As we look to the future, the role of technology in fraud investigations is expected to grow even more significant. Predictive analytics, machine learning, and artificial intelligence will likely become standard tools in the arsenal of fraud examiners.
Moreover, as fraud schemes become more sophisticated, the need for advanced technological solutions will only increase. This makes staying updated with the latest developments in technology crucial for successful fraud investigations.
{{cta-ebook}}
Conclusion: Embracing Technology for Robust Fraud Prevention
In conclusion, technology plays a pivotal role in modern fraud investigations. It not only enhances the efficiency and effectiveness of these investigations but also helps in proactive fraud detection and prevention.
Embracing technology is no longer optional for organizations. It is a necessity for maintaining integrity, ensuring compliance, and safeguarding against financial and reputational damage. As technology continues to evolve, so too will its applications in fraud investigations, promising a future of more robust and resilient fraud prevention strategies.
To further enhance your fraud prevention efforts and learn more about Tookitaki's FRAML solution for real-time fraud prevention, we encourage you to book a slot with our experts. Together, we can strengthen your fraud detection strategies and safeguard your organization from potential risks.
Our Thought Leadership Guides
The “King” Who Promised Wealth: Inside the Philippines Investment Scam That Fooled Many
When authority is fabricated and trust is engineered, even the most implausible promises can start to feel real.
The Scam That Made Headlines
In a recent crackdown, the Philippine National Police arrested 15 individuals linked to an alleged investment scam that had been quietly unfolding across parts of the country.
At the centre of it all was a man posing as a “King” — a self-styled figure of authority who convinced victims that he had access to exclusive investment opportunities capable of delivering extraordinary returns.
Victims were drawn in through a mix of persuasion, perceived legitimacy, and carefully orchestrated narratives. Money was collected, trust was exploited, and by the time doubts surfaced, the damage had already been done.
While the arrests mark a significant step forward, the mechanics behind this scam reveal something far more concerning: a pattern that financial institutions are increasingly struggling to detect in real time.

Inside the Illusion: How the “King” Investment Scam Worked
At first glance, the premise sounds almost unbelievable. But scams like these rarely rely on logic; they rely on psychology.
The operation appears to have followed a familiar but evolving playbook:
1. Authority Creation
The central figure positioned himself as a “King” — not in a literal sense, but as someone with influence, access, and insider privilege. This created an immediate power dynamic. People tend to trust authority, especially when it is presented confidently and consistently.
2. Exclusive Opportunity Framing
Victims were offered access to “limited” investment opportunities. The framing was deliberate — not everyone could participate. This sense of exclusivity reduced skepticism and increased urgency.
3. Social Proof and Reinforcement
Scams of this nature often rely on group dynamics. Early participants, whether real or planted, reinforce credibility. Testimonials, referrals, and word-of-mouth create a false sense of validation.
4. Controlled Payment Channels
Funds were collected through a combination of cash handling and potentially structured transfers. This reduces traceability and delays detection.
5. Delayed Realisation
By the time inconsistencies surfaced, victims had already committed funds. The illusion held just long enough for the operators to extract value and move on.
This wasn’t just deception. It was structured manipulation, designed to bypass rational thinking and exploit human behaviour.
Why This Scam Is More Dangerous Than It Looks
It’s easy to dismiss this as an isolated case of fraud. But that would be a mistake.
What makes this incident particularly concerning is not the narrative — it’s the adaptability of the model.
Unlike traditional fraud schemes that rely heavily on digital infrastructure, this scam blended offline trust-building with flexible payment collection methods. That makes it significantly harder to detect using conventional monitoring systems.
More importantly, it highlights a shift: Fraud is no longer just about exploiting system vulnerabilities. It’s about exploiting human behaviour and using financial systems as the final execution layer.
For banks and fintechs, this creates a blind spot.
Following the Money: The Likely Financial Footprint
From a compliance and AML perspective, scams like this leave behind patterns — but rarely in a clean, linear form.
Based on the nature of the operation, the financial footprint may include:
- Multiple small-value deposits or transfers from different individuals, often appearing unrelated
- Use of intermediary accounts to collect and consolidate funds
- Rapid movement of funds across accounts to break transaction trails
- Cash-heavy collection points, reducing digital visibility
- Inconsistent transaction behaviour compared to customer profiles
Individually, these signals may not trigger alerts. But together, they form a pattern — one that requires contextual intelligence to detect.
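To illustrate the first two items in that list, here is a sketch that surfaces accounts collecting small-value transfers from many distinct senders. The account names, amounts, and thresholds are all hypothetical.

```python
from collections import defaultdict

def consolidation_candidates(transfers, min_senders=3, max_amount=1_000):
    """Accounts that collect small-value transfers from many distinct senders."""
    senders = defaultdict(set)
    for src, dst, amount in transfers:
        if amount <= max_amount:
            senders[dst].add(src)
    return [acct for acct, s in senders.items() if len(s) >= min_senders]

# Hypothetical transfers: (sender, receiver, amount)
transfers = [
    ("victim_1", "collector", 500),
    ("victim_2", "collector", 800),
    ("victim_3", "collector", 650),
    ("payroll", "employee", 4_000),
]
print(consolidation_candidates(transfers))  # ['collector']
```

A real implementation would add a time window and compare against the account's expected profile, but the aggregation logic is the same.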
Red Flags Financial Institutions Should Watch
For compliance teams, the challenge lies in identifying these patterns early — before the damage escalates.
Transaction-Level Indicators
- Sudden inflow of funds from multiple unrelated individuals into a single account
- Frequent small-value transfers followed by rapid aggregation
- Outbound transfers shortly after deposits, often to new or unverified beneficiaries
- Structuring behaviour that avoids typical threshold-based alerts
- Unusual spikes in account activity inconsistent with historical patterns
Behavioural Indicators
- Customers participating in transactions tied to “investment opportunities” without clear documentation
- Increased urgency in fund transfers, often under external pressure
- Reluctance or inability to explain transaction purpose clearly
- Repeated interactions with a specific set of counterparties
Channel & Activity Indicators
- Use of informal or non-digital communication channels to coordinate transactions
- Sudden activation of dormant accounts
- Multiple accounts linked indirectly through shared beneficiaries or devices
- Patterns suggesting third-party control or influence
These are not standalone signals. They need to be connected, contextualised, and interpreted in real time.
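One simple way to connect such signals is a weighted composite score. The indicator names and weights below are illustrative only, not a recommended calibration.

```python
def composite_risk(indicators, weights, threshold=0.6):
    """Combine individual red-flag indicators into one contextual score.

    `indicators` maps indicator name -> bool (fired or not);
    `weights` maps indicator name -> relative contribution.
    """
    score = sum(weights[name] for name, fired in indicators.items() if fired)
    normalised = score / sum(weights.values())
    return normalised, normalised >= threshold

# Hypothetical alert: three of four indicators fired
indicators = {
    "many_unrelated_inflows": True,
    "rapid_outbound": True,
    "dormant_reactivation": False,
    "shared_beneficiaries": True,
}
weights = {
    "many_unrelated_inflows": 0.4,
    "rapid_outbound": 0.3,
    "dormant_reactivation": 0.1,
    "shared_beneficiaries": 0.2,
}
score, escalate = composite_risk(indicators, weights)
```

No single indicator above crosses the threshold on its own; together they do — which is precisely the "connected and contextualised" point.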
The Real Challenge: Why These Scams Slip Through
This is where things get complicated.
Scams like the “King” investment scheme are difficult to detect because they often appear legitimate — at least on the surface.
- Transactions are customer-initiated, not system-triggered
- Payment amounts are often below risk thresholds
- There is no immediate fraud signal at the point of transaction
- The story behind the payment exists outside the financial system
Traditional rule-based systems struggle in such scenarios. They are designed to detect known patterns, not evolving behaviours.
And by the time a pattern becomes obvious, the funds have usually moved.

Where Technology Makes the Difference
Addressing these risks requires a shift in how financial institutions approach detection.
Instead of looking at transactions in isolation, institutions need to focus on behavioural patterns, contextual signals, and scenario-based intelligence.
This is where modern platforms like Tookitaki’s FinCense play a critical role.
By leveraging:
- Scenario-driven detection models informed by real-world cases
- Cross-entity behavioural analysis to identify hidden connections
- Real-time monitoring capabilities for faster intervention
- Collaborative intelligence from ecosystems like the AFC Ecosystem
…institutions can move from reactive detection to proactive prevention.
The goal is not just to catch fraud after it happens, but to interrupt it while it is still unfolding.
From Headlines to Prevention
The arrest of those involved in the “King” investment scam is a reminder that enforcement is catching up. But it also highlights a deeper truth: Scams are evolving faster than traditional detection systems.
What starts as an unbelievable story can quickly become a widespread financial risk — especially when trust is weaponised and financial systems are used as conduits.
For banks and fintechs, the takeaway is clear.
Prevention cannot rely on static rules or delayed signals. It requires continuous adaptation, shared intelligence, and a deeper understanding of how modern scams operate.
Because the next “King” may not call himself one.
But the playbook will look very familiar.

Transaction Monitoring in Singapore: MAS Requirements and Best Practices
In August 2023, the Singapore Police Force executed the largest anti-money laundering operation in the country's history. S$3 billion in assets were seized from ten foreign nationals who had moved funds through Singapore's financial system for years: through banks, through licensed payment institutions, through corporate accounts holding everything from luxury cars to commercial property.
For compliance teams at Singapore-licensed financial institutions, the question that followed was not abstract. It was: would our transaction monitoring have caught this?
MAS has been examining that question across the industry since, through an intensified supervisory programme that has put transaction monitoring under closer scrutiny than at any point in the past decade. This guide covers what Singapore law requires, what MAS examiners actually check, and what a genuinely effective transaction monitoring programme looks like in a Singapore context.

Singapore's Transaction Monitoring Regulatory Framework
Transaction monitoring obligations in Singapore flow from three regulatory instruments. Understanding the differences between them matters — particularly for payment service providers, whose obligations are sometimes confused with bank requirements.
MAS Notice 626 (Banks)
MAS Notice 626, issued under the Banking Act, is the primary AML/CFT requirement for Singapore-licensed banks. Paragraphs 19–27 set out monitoring requirements: banks must implement systems to detect unusual or suspicious transactions, investigate alerts within defined timeframes, and document monitoring outcomes in a form that MAS can review.
The full obligations under Notice 626 are covered in detail in our [MAS Notice 626 Transaction Monitoring Requirements guide](/compliance-hub/mas-notice-626-transaction-monitoring). What matters for this discussion is that Notice 626 sets a floor, not a ceiling. MAS expectations in examination have consistently run ahead of the minimum text.
MAS Notices PSN01 and PSN02 (Payment Service Providers)
Since the Payment Services Act (PSA) came into force in 2020, licensed payment institutions — standard payment institutions and major payment institutions — have had AML/CFT obligations that mirror the core requirements of Notice 626, adapted for the payment services context.
A cross-border remittance operator has the same obligation to monitor for unusual activity as a bank. The typologies look different — faster transaction cycling, higher cross-border transfer volumes, shorter customer history — but the regulatory requirement is equivalent.
This matters because some licensed payment institutions still treat their monitoring obligations as lighter than bank-grade. MAS examination findings published in the 2024 supervisory expectations document specifically noted that AML controls at payment institutions were "less mature" than at banks — which means this is now an examination priority.
MAS AML/CFT Supervisory Expectations (2024)
The 2024 MAS supervisory expectations document is the most direct signal of what MAS is looking for. It followed the 2023 enforcement action and a broader review of AML/CFT controls across supervised institutions.
Transaction monitoring appears in three of the five priority areas in that document:
- Alert logic that is not calibrated to the institution's specific risk profile
- Insufficient monitoring intensity for high-risk customers
- Weak documentation of alert investigation outcomes
None of these are technical failures. They are process and governance failures — which is what makes them significant. An institution can have sophisticated monitoring software and still fail on all three.
What MAS Examiners Actually Check
Notice 626 describes what is required. MAS examinations test whether requirements are met in practice. Based on examination findings and regulatory guidance, MAS reviewers focus on four areas in transaction monitoring assessments.
Alert calibration against actual risk
MAS does not expect every institution to use the same alert thresholds. It expects every institution to use thresholds that reflect its own customer risk profile.
An institution whose customers are predominantly high-net-worth individuals with complex cross-border financial structures should have monitoring rules calibrated for that population — not rules designed for retail banking that happen to flag some of the same transactions.
In practice, examiners ask: how were these thresholds set? When were they last reviewed? What changed in your customer book since the last calibration, and how did the monitoring reflect that? Institutions that cannot answer these questions specifically — with dates, documented rationale, and sign-off from a named senior officer — are likely to receive findings.
Alert investigation documentation
This is where most examination failures occur, and it is not because institutions failed to review alerts.
MAS expects a written record for each alert: what the analyst found, why the transaction was or was not considered suspicious, and what action was or was not taken. A disposition of "reviewed — no SAR required" without supporting rationale does not satisfy this requirement. The expectation is closer to: "reviewed the customer's transaction history, the stated purpose of the account, and the counterparty profile. The transaction pattern is consistent with the customer's documented business activities and does not meet the threshold for filing."
Institutions that have good detection logic but poor investigation documentation often present worse in examination than institutions with simpler detection that document everything carefully.
Coverage of high-risk customers
FATF Recommendation 10 and Notice 626 both require enhanced monitoring for high-risk customers. MAS examiners check whether the monitoring programme reflects this operationally — not just in policy.
A specific check: do high-risk customers generate more alerts per capita than standard-risk customers? If not, one of two things is happening: either the monitoring programme is not applying enhanced measures to high-risk accounts, or it is applying enhanced measures but they are not generating additional alerts — which means the enhanced measures are not actually detecting more.
Either way, the institution needs to be able to explain the distribution clearly.
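The per-capita check described above is straightforward to compute. The figures below are hypothetical; the point is that enhanced monitoring should show up as a visibly higher alert rate for the high-risk tier.

```python
def alerts_per_capita(alert_counts, customer_counts):
    """Alerts per customer by risk tier - the distribution examiners ask about."""
    return {tier: alert_counts.get(tier, 0) / customer_counts[tier]
            for tier in customer_counts}

rates = alerts_per_capita(
    alert_counts={"high": 120, "standard": 400},
    customer_counts={"high": 300, "standard": 20_000},
)
print(rates)  # {'high': 0.4, 'standard': 0.02}
```

If the two rates were similar, that would be the signal to investigate whether enhanced measures are actually applied — exactly the question MAS poses.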
The audit trail
When MAS examines a monitoring programme, examiners review a sample of alerts from the past 12 months. For each sampled alert, they should be able to see: which rule or model triggered it, when it was assigned for investigation, who reviewed it, what the disposition decision was, the written rationale, and whether an STR was filed.
If any of these elements cannot be produced — because the system does not log them, or because records were not retained — the examination finding is straightforward.
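A minimal sketch of a record that captures every element listed above; the field names are illustrative, not a MAS-mandated schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AlertRecord:
    """The fields an examiner expects to see for every sampled alert."""
    alert_id: str
    triggering_rule: str      # which rule or model fired
    assigned_at: datetime     # when it entered the investigation queue
    reviewed_by: str
    disposition: str          # e.g. "closed - not suspicious", "escalated"
    rationale: str            # written reasoning, not a checkbox
    str_filed: bool

# Hypothetical example of a completed disposition
record = AlertRecord(
    alert_id="ALT-2024-0193",
    triggering_rule="rapid-aggregation-v3",
    assigned_at=datetime(2024, 5, 2, 9, 15),
    reviewed_by="analyst_tan",
    disposition="closed - not suspicious",
    rationale="Pattern consistent with the customer's documented payroll cycle.",
    str_filed=False,
)
```

Whatever system holds these records, the operational requirement is that all seven fields can be produced for any alert sampled from the past 12 months.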
Post-2023: What Changed
The 2023 enforcement action changed the operational context for transaction monitoring in Singapore in three specific ways.
Typology libraries need to reflect the patterns that were missed. The S$3 billion case involved specific patterns: shell companies receiving large transfers followed by property purchases, multiple entities with overlapping beneficial ownership, cash-intensive businesses used to layer funds into the formal banking system. These are not novel typologies — FATF and MAS had documented them before 2023. The question is whether monitoring rules were actually in place to detect them.
MAS has increased examination intensity. Following the 2023 case, MAS publicly committed to strengthening AML/CFT supervision, including more frequent and more intrusive examinations of systemically important institutions. Compliance teams that previously experienced relatively light-touch monitoring reviews should expect more detailed examination engagement going forward.
The reputational context for non-compliance has shifted. Before 2023, AML failures in Singapore were largely a technical compliance matter. After an enforcement action that received global coverage and led to diplomatic implications, the reputational consequences of a significant AML failure for a Singapore-licensed institution are much more visible.
Transaction Monitoring for PSA-Licensed Payment Institutions
For firms licensed under the PSA, there are specific practical considerations that bank-focused guidance does not address.
Shorter customer history. Payment service firms typically have shorter customer relationships than banks — sometimes months rather than years. ML-based anomaly detection models need historical data to establish baseline behaviour. When that history is limited, rules-based detection of known typologies needs to carry more weight in the alert logic.
Cross-border transaction volumes. PSA licensees handling international remittances have inherently higher cross-border exposure. Monitoring typologies must specifically address: structuring across multiple corridors, unusual shifts in destination country distribution, and dormant accounts that suddenly receive high-volume cross-border inflows.
Account lifecycle monitoring. New accounts that begin transacting immediately at high volume, or accounts that show no activity for an extended period before suddenly becoming active, are specific patterns that PSA-specific monitoring rules should address.
MAS has stated directly that it expects payment institutions to "uplift" their AML/CFT controls to a level closer to bank-grade. For transaction monitoring specifically, that means investment in calibration, documentation, and governance — not simply deploying a vendor system and assuming requirements are met.

What Effective Transaction Monitoring Looks Like in Singapore
Across MAS guidance, examination findings, and the post-2023 supervisory environment, an effective Singapore TM programme has six characteristics:
1. Documented calibration rationale. Alert thresholds are set with reference to the institution's customer risk assessment and reviewed when the customer book changes. Every threshold has a documented basis.
2. Coverage of Singapore-specific typologies. Beyond generic AML typologies, the monitoring library includes patterns documented in Singapore enforcement actions: shell company structuring, property-linked layering, cross-border transfer cycling across high-risk jurisdictions.
3. Alert investigation documentation that can survive examination. Every alert has a written disposition, not a checkbox. High-risk customer alerts have enhanced documentation. STR filings link back to specific alerts.
4. Defined escalation process. When an analyst is uncertain, there is a clear path to the Money Laundering Reporting Officer. Escalation decisions are recorded.
5. Regular calibration review. The monitoring programme is tested — whether through independent review, internal audit, or structured self-assessment — at least annually. Results and follow-up actions are documented.
6. Model governance for ML components. Where ML-based detection is used, model performance is tracked, validation is documented, and retraining triggers are defined. The validation record sits with the institution.
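Characteristics 1 and 5 can be made concrete with a small sketch: each threshold carries its own documented basis, and stale entries are flagged for the annual review cycle. All names and values here are hypothetical.

```python
from datetime import date

# Every threshold carries a documented basis (characteristic 1)
calibration = {
    "rule": "large-cash-deposit",
    "threshold_sgd": 20_000,
    "rationale": "2x the 99th percentile of retail cash deposits "
                 "observed in the 2024 customer risk assessment",
    "last_reviewed": date(2025, 1, 15),
    "approved_by": "Head of Compliance",
}

def overdue_for_review(entry, today, max_age_days=365):
    """Flag calibration entries past the annual review cycle (characteristic 5)."""
    return (today - entry["last_reviewed"]).days > max_age_days

print(overdue_for_review(calibration, date(2026, 3, 1)))  # True
```

The substance is the metadata, not the code: a threshold without a recorded rationale, review date, and approver is what generates examination findings.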
Taking the Next Step
If your institution is preparing for a MAS examination, reviewing its monitoring programme post-2023, or evaluating new transaction monitoring software, the starting point is a clear-eyed assessment of where your current programme sits against MAS expectations.
Tookitaki's FinCense platform is used by financial institutions across Singapore, Malaysia, Australia, and the Philippines. It is pre-configured with APAC-specific typologies, including patterns documented in Singapore enforcement actions, and produces alert documentation in the format MAS examiners review.
Book a discussion with Tookitaki's team to see FinCense in a live environment calibrated for your institution type and region.
For a broader introduction to transaction monitoring requirements across all five APAC markets — Singapore, Australia, Malaysia, Philippines, and New Zealand — see our [complete transaction monitoring guide].

Transaction Monitoring Software: A Buyer's Guide for Banks and Fintechs
The compliance officer who bought their current transaction monitoring system probably saw a very good demo. Alert accuracy was 90% in the sandbox. Implementation was "6–8 weeks." The vendor had a case study from a Tier-1 bank.
Eighteen months later, the team processes 600 alerts per day, 530 of which are false positives. Two analysts have left. The backlog is three weeks long. An AUSTRAC examination is booked for Q4.
What happened between the demo and now is usually the same story: the sandbox didn't reflect production data, the rules weren't tuned for the actual customer base, and the implementation timeline quietly became six months.
This guide is not a vendor comparison. It is a diagnostic framework for telling effective transaction monitoring software from systems that look good until they're live.

Why Most TM Software Evaluations Go Wrong
Most procurement processes ask vendors to list their features. That is the wrong test.
Features are table stakes. What matters is performance in your specific environment — your customer mix, your transaction volumes, your risk profile. And vendor demonstrations are optimised to impress, not to replicate reality.
Three problems appear repeatedly in post-implementation reviews:
Alert accuracy drops between demo and production. Sandbox environments use curated, clean datasets. Production data is messier: duplicate records, legacy fields, missing counterparty data. Alert models calibrated on clean data degrade when they hit the real thing.
Rule libraries built for someone else. A retail bank in Sydney and a cross-border remittance operator in Singapore do not share transaction patterns. A rule library tuned for one will generate noise for the other. Most vendors deploy the same library for both and call it "risk-based."
"Transparent" models that cannot be tuned. Vendors frequently describe their ML systems as transparent and auditable. The test is whether your team can actually adjust the models when performance drifts, or whether every change requires a vendor engagement.
What "Effective" Means to Regulators
Before comparing systems, it is worth knowing what your regulator will assess. In APAC, the standard is consistent: regulators do not want to see a system that exists. They want evidence it works.
AUSTRAC (Australia): AML/CTF Rule 16 requires monitoring to be risk-based — thresholds must reflect your specific customer risk assessment, not generic defaults. AUSTRAC's enforcement record is specific on this point: both the Commonwealth Bank's AUD 700 million settlement in 2018 and Westpac's AUD 1.3 billion settlement in 2020 cited inadequate transaction monitoring as a direct failure — not the absence of a system, but the failure of one already in place.
MAS (Singapore): Notice 626 (paragraphs 19–27) requires FIs to detect, monitor, and report unusual transactions. MAS supervisory expectations published in 2024 flagged two recurring weaknesses across supervised firms: inadequate alert calibration and insufficient documentation of monitoring outcomes. Both are failures of execution, not of system selection.
BNM (Malaysia): The AML/CFT Policy Document (2023) requires an "effective" monitoring programme. Effectiveness is assessed through examination — specifically, whether the alerts generated correspond to the actual risk in the institution's customer base.
The practical consequence: an RFP that evaluates features without assessing tuning capability, calibration flexibility, and audit trail quality is not evaluating what regulators will look at.
7 Questions to Ask Any TM Vendor
1. What is your false positive rate in a live environment comparable to ours?
This is the single number that determines analyst workload. A false positive rate of 98% means 98 of every 100 alerts require investigation time before the analyst can close them as non-suspicious. At a mid-sized bank processing 500 alerts per day, that is 490 dead-end investigations.
The benchmark: well-tuned AI-augmented systems reach false positive rates of 80–85% in production. Legacy rule-only systems routinely run at 97–99%.
Ask the vendor to show actual data from a comparable client, not an anonymised case study. If they cannot, ask why.
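The arithmetic behind that workload claim is worth running against your own volumes. A minimal sketch; the 82% figure is simply an illustrative midpoint of the 80–85% benchmark range, not a vendor quote:

```python
def dead_end_investigations(alerts_per_day: int, false_positive_rate: float) -> int:
    """Expected daily alerts that will close as non-suspicious."""
    return round(alerts_per_day * false_positive_rate)

# Figures from the text: 500 alerts/day at a 98% false positive rate
legacy = dead_end_investigations(500, 0.98)  # 490 dead-end reviews per day
# A well-tuned system at 82% (illustrative midpoint of the 80-85% benchmark)
tuned = dead_end_investigations(500, 0.82)   # 410 dead-end reviews per day
saved_per_day = legacy - tuned               # 80 fewer investigations daily
```

Eighty fewer dead-end investigations per day is the difference between a backlog and a manageable queue.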
2. How are alerts generated — rules, models, or a combination?
Pure rules-based systems are easy to validate for audit purposes but brittle: they miss patterns they were not programmed to detect, and new typologies go unnoticed until the rules are manually updated.
Pure ML systems can detect novel patterns but are harder to validate and explain to regulators who need to understand why an alert was raised.
Hybrid systems — rules for known typologies, models for anomaly detection — are generally more defensible. Ask specifically: how does the vendor update the rules and models when the regulatory environment changes? What happened when AUSTRAC updated its rules in 2023, or when MAS revised its supervisory expectations in 2024?
3. What does the analyst workflow look like after an alert fires?
Detection is only the first step. Analysts spend more time on alert investigation than on any other compliance task. A system that generates 200 precise, context-rich alerts is worth more operationally than one that generates 500 alerts requiring 40 minutes of manual research each before a disposition decision can be made.
Ask to see the actual analyst interface, not the executive dashboard. Check whether the alert displays customer history, previous alerts, peer comparison, and relevant counterparty data — or whether the analyst has to pull all of that separately.
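To see why precision outweighs raw alert volume, compare the daily analyst-hours each scenario consumes. A rough sketch: the 40-minute figure comes from the text above; the 10-minute figure for a context-rich alert is an assumption for illustration:

```python
def daily_investigation_hours(alerts: int, minutes_per_alert: float) -> float:
    """Total analyst-hours per day spent working alerts."""
    return alerts * minutes_per_alert / 60

# 500 alerts needing 40 minutes of manual research each (from the text)
noisy = daily_investigation_hours(500, 40)    # about 333 analyst-hours/day
# 200 precise, context-rich alerts at an assumed 10 minutes each
precise = daily_investigation_hours(200, 10)  # about 33 analyst-hours/day
```

On these assumptions the "smaller" system consumes a tenth of the analyst capacity, which is why the analyst interface matters more than the alert count.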
4. What does a MAS- or AUSTRAC-ready audit log look like?
When a regulator examines your monitoring programme, they review the logic that generated each alert, the analyst's disposition decision, and the written rationale. They check whether high-risk customers received appropriate monitoring intensity and whether there is a documented escalation path for uncertain cases.
Ask the vendor to show you a sample audit log from a recent client examination. It should show: the rule or model that triggered the alert, the analyst who reviewed it, the decision, the rationale, and the time between alert generation and disposition. If the vendor cannot produce this, the system is not regulatory-examination-ready.
5. What does implementation actually take?
Ask for the implementation timeline — from contract to production-ready performance — for the vendor's most recent three comparable deployments. Not the standard brochure. Not the best case. Three actual recent clients.
Specifically: how long from contract signature to go-live? How long from go-live to the point where alert accuracy reached its steady-state level? Those are two different numbers, and the second one is the one that matters for planning.
6. How does the vendor handle model drift?
ML models degrade over time as transaction patterns change. A model trained on 2023 data will underperform against 2026 transaction patterns if it has not been retrained. Ask how frequently models are retrained, who initiates the review, and what triggers a retraining event.
Also ask: who holds the model validation documentation? Model governance is an emerging examination focus for MAS, AUSTRAC, and BNM. The validation record needs to sit with the institution, not only with the vendor.
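When asking about retraining triggers, it helps to know what a quantitative trigger can look like. One common choice is the Population Stability Index (PSI) computed over a model input's distribution; the metric, the bins, and the 0.25 threshold below are all illustrative conventions, not something the text or any regulator prescribes:

```python
import math

def psi(expected: list[float], observed: list[float]) -> float:
    """Population Stability Index between two binned distributions.
    Inputs are per-bin proportions, each summing to 1."""
    eps = 1e-6  # guard against empty bins
    return sum((o - e) * math.log((o + eps) / (e + eps))
               for e, o in zip(expected, observed))

# Transaction-amount distribution at training time vs today (made-up bins)
baseline = [0.40, 0.30, 0.20, 0.10]
current = [0.25, 0.30, 0.25, 0.20]
drift = psi(baseline, current)
# A widely used rule of thumb: PSI above 0.25 warrants a retraining review
needs_review = drift > 0.25
```

A vendor with real model governance should be able to name its equivalent of this metric, the threshold it uses, and who reviews the number.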
7. How does the system handle regulatory updates?
APAC's AML/CFT rules change more frequently than in other regions. AUSTRAC updated Chapter 16 in 2023. MAS revised its AML/CFT supervisory expectations in 2024. BNM issued a revised AML/CFT Policy Document in 2023.
When these changes occur, who updates the system — and how quickly? Some vendors treat regulatory updates as professional services engagements billed separately. Others maintain a regulatory content team that pushes updates to all clients. Ask which model applies and get the answer in writing.

Banks vs. Fintechs: Different Needs, Different Priorities
A Tier-2 bank with 8 million retail customers and a PSA-licensed payment institution handling cross-border transfers have different TM requirements. The evaluation criteria shift accordingly.
For banks:
Volume and integration architecture matter first. A system processing 500,000 transactions per day needs different infrastructure than one processing 5,000. Ask specifically about latency in real-time monitoring scenarios and how the system handles peak volumes. Integration with core banking — particularly if the core is a legacy platform — is where implementations most commonly fail.
For fintechs and payment service providers:
Real-time detection carries more weight relative to batch processing. Cross-border typologies differ from domestic banking typologies — the vendor's rule library should include patterns specific to cross-border payment fraud, structuring across multiple jurisdictions, and rapid account cycling. Customer history is often short, which means models that require 12+ months of transaction data to perform will underperform in fast-growing books.
Total Cost of Ownership: The Number Most RFPs Undercount
The licence fee is the visible cost. The actual costs include:
- Implementation and integration: Typically 2–4x the first-year licence cost for a mid-size institution. A vendor that quotes "6–8 weeks" for implementation should be asked for the last five clients' actual implementation timelines before that number is used in any business case.
- Analyst capacity: A high false positive rate is not just an accuracy problem — it is a staffing cost. At a 97% false positive rate, a team processing 400 daily alerts spends approximately 85% of its investigation time on non-suspicious transactions. A 10-percentage-point improvement in accuracy frees roughly 2,400 analyst-hours per year for a 30-person operations team.
- Regulatory risk: The cost of an enforcement action belongs in the risk-adjusted total cost of ownership calculation. Westpac's 2020 settlement was AUD 1.3 billion. The remediation programme that followed cost hundreds of millions more. Against those figures, the difference between a well-tuned system and a merely adequate one looks very different on a business case.
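The analyst-capacity figure can be reproduced under explicit assumptions. A sketch assuming 15 minutes per false-positive investigation and 250 working days per year (both assumptions, not figures from the text), which lands in the same range as the roughly 2,400 hours quoted above:

```python
def annual_false_positive_hours(alerts_per_day: int, fp_rate: float,
                                minutes_per_fp: float = 15,
                                working_days: int = 250) -> float:
    """Analyst-hours per year spent clearing false positives.
    minutes_per_fp and working_days are illustrative assumptions."""
    return alerts_per_day * fp_rate * minutes_per_fp / 60 * working_days

before = annual_false_positive_hours(400, 0.97)  # 400 daily alerts, 97% FP
after = annual_false_positive_hours(400, 0.87)   # 10-point improvement
freed = before - after                           # roughly 2,500 hours/year
```

Swap in your own alert volumes and investigation times; the shape of the result does not change much.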
What Tookitaki's FinCense Does Differently
FinCense is Tookitaki's transaction monitoring platform, built specifically for APAC financial institutions.
The core technical differentiator is federated learning. Most ML-based TM systems train models on a single institution's data, which limits pattern diversity. FinCense's models learn from typology patterns across the Tookitaki client network — without sharing raw transaction data between institutions. The result is detection capability that reflects a broader range of financial crime patterns than any single institution's data could produce.
In production deployments across APAC, FinCense has reduced false positive rates by up to 50% compared to legacy rule-based systems. In analyst workflow terms: a team processing 400 alerts per day at a 97% false positive rate could reduce that to approximately 200 alerts at the same investigation standard — roughly halving the time spent on non-productive reviews.
The platform is pre-integrated with APAC-specific typologies for AUSTRAC, MAS, BNM, BSP, and FMA regulatory environments. Regulatory updates are included in the standard contract.
Ready to Evaluate?
If your institution is reviewing its transaction monitoring system or implementing one for the first time, the seven questions in this guide are a starting framework. The answers will tell you more about a vendor's actual capability than any feature demonstration.
Book a discussion with Tookitaki's team to see FinCense in a live environment calibrated for your institution type and region. Or read our complete guide, "What Is Transaction Monitoring? The Complete 2026 Guide", before the vendor conversations begin.

The “King” Who Promised Wealth: Inside the Philippines Investment Scam That Fooled Many
When authority is fabricated and trust is engineered, even the most implausible promises can start to feel real.
The Scam That Made Headlines
In a recent crackdown, the Philippine National Police arrested 15 individuals linked to an alleged investment scam that had been quietly unfolding across parts of the country.
At the centre of it all was a man posing as a “King” — a self-styled figure of authority who convinced victims that he had access to exclusive investment opportunities capable of delivering extraordinary returns.
Victims were drawn in through a mix of persuasion, perceived legitimacy, and carefully orchestrated narratives. Money was collected, trust was exploited, and by the time doubts surfaced, the damage had already been done.
While the arrests mark a significant step forward, the mechanics behind this scam reveal something far more concerning: a pattern that financial institutions are increasingly struggling to detect in real time.

Inside the Illusion: How the “King” Investment Scam Worked
At first glance, the premise sounds almost unbelievable. But scams like these rarely rely on logic; they rely on psychology.
The operation appears to have followed a familiar but evolving playbook:
1. Authority Creation
The central figure positioned himself as a “King” — not in a literal sense, but as someone with influence, access, and insider privilege. This created an immediate power dynamic. People tend to trust authority, especially when it is presented confidently and consistently.
2. Exclusive Opportunity Framing
Victims were offered access to “limited” investment opportunities. The framing was deliberate — not everyone could participate. This sense of exclusivity reduced skepticism and increased urgency.
3. Social Proof and Reinforcement
Scams of this nature often rely on group dynamics. Early participants, whether real or planted, reinforce credibility. Testimonials, referrals, and word-of-mouth create a false sense of validation.
4. Controlled Payment Channels
Funds were collected through a combination of cash handling and potentially structured transfers. This reduces traceability and delays detection.
5. Delayed Realisation
By the time inconsistencies surfaced, victims had already committed funds. The illusion held just long enough for the operators to extract value and move on.
This wasn’t just deception. It was structured manipulation, designed to bypass rational thinking and exploit human behaviour.
Why This Scam Is More Dangerous Than It Looks
It’s easy to dismiss this as an isolated case of fraud. But that would be a mistake.
What makes this incident particularly concerning is not the narrative — it’s the adaptability of the model.
Unlike traditional fraud schemes that rely heavily on digital infrastructure, this scam blended offline trust-building with flexible payment collection methods. That makes it significantly harder to detect using conventional monitoring systems.
More importantly, it highlights a shift: Fraud is no longer just about exploiting system vulnerabilities. It’s about exploiting human behaviour and using financial systems as the final execution layer.
For banks and fintechs, this creates a blind spot.
Following the Money: The Likely Financial Footprint
From a compliance and AML perspective, scams like this leave behind patterns — but rarely in a clean, linear form.
Based on the nature of the operation, the financial footprint may include:
- Multiple small-value deposits or transfers from different individuals, often appearing unrelated
- Use of intermediary accounts to collect and consolidate funds
- Rapid movement of funds across accounts to break transaction trails
- Cash-heavy collection points, reducing digital visibility
- Inconsistent transaction behaviour compared to customer profiles
Individually, these signals may not trigger alerts. But together, they form a pattern — one that requires contextual intelligence to detect.
Red Flags Financial Institutions Should Watch
For compliance teams, the challenge lies in identifying these patterns early — before the damage escalates.
Transaction-Level Indicators
- Sudden inflow of funds from multiple unrelated individuals into a single account
- Frequent small-value transfers followed by rapid aggregation
- Outbound transfers shortly after deposits, often to new or unverified beneficiaries
- Structuring behaviour that avoids typical threshold-based alerts
- Unusual spikes in account activity inconsistent with historical patterns
Behavioural Indicators
- Customers participating in transactions tied to “investment opportunities” without clear documentation
- Increased urgency in fund transfers, often under external pressure
- Reluctance or inability to explain transaction purpose clearly
- Repeated interactions with a specific set of counterparties
Channel & Activity Indicators
- Use of informal or non-digital communication channels to coordinate transactions
- Sudden activation of dormant accounts
- Multiple accounts linked indirectly through shared beneficiaries or devices
- Patterns suggesting third-party control or influence
These are not standalone signals. They need to be connected, contextualised, and interpreted in real time.
The Real Challenge: Why These Scams Slip Through
This is where things get complicated.
Scams like the “King” investment scheme are difficult to detect because they often appear legitimate — at least on the surface.
- Transactions are customer-initiated, not system-triggered
- Payment amounts are often below risk thresholds
- There is no immediate fraud signal at the point of transaction
- The story behind the payment exists outside the financial system
Traditional rule-based systems struggle in such scenarios. They are designed to detect known patterns, not evolving behaviours.
And by the time a pattern becomes obvious, the funds have usually moved.

Where Technology Makes the Difference
Addressing these risks requires a shift in how financial institutions approach detection.
Instead of looking at transactions in isolation, institutions need to focus on behavioural patterns, contextual signals, and scenario-based intelligence.
This is where modern platforms like Tookitaki’s FinCense play a critical role.
By leveraging:
- Scenario-driven detection models informed by real-world cases
- Cross-entity behavioural analysis to identify hidden connections
- Real-time monitoring capabilities for faster intervention
- Collaborative intelligence from ecosystems like the AFC Ecosystem
…institutions can move from reactive detection to proactive prevention.
The goal is not just to catch fraud after it happens, but to interrupt it while it is still unfolding.
From Headlines to Prevention
The arrest of those involved in the “King” investment scam is a reminder that enforcement is catching up. But it also highlights a deeper truth: Scams are evolving faster than traditional detection systems.
What starts as an unbelievable story can quickly become a widespread financial risk — especially when trust is weaponised and financial systems are used as conduits.
For banks and fintechs, the takeaway is clear.
Prevention cannot rely on static rules or delayed signals. It requires continuous adaptation, shared intelligence, and a deeper understanding of how modern scams operate.
Because the next “King” may not call himself one.
But the playbook will look very familiar.

Transaction Monitoring in Singapore: MAS Requirements and Best Practices
In August 2023, the Singapore Police Force carried out the largest money laundering crackdown in the country's history. S$3 billion in assets were seized from ten foreign nationals who had moved funds through Singapore's financial system for years — through banks, through licensed payment institutions, through corporate accounts holding everything from luxury cars to commercial property.
For compliance teams at Singapore-licensed financial institutions, the question that followed was not abstract. It was: would our transaction monitoring have caught this?
MAS has been examining that question across the industry since, through an intensified supervisory programme that has put transaction monitoring under closer scrutiny than at any point in the past decade. This guide covers what Singapore law requires, what MAS examiners actually check, and what a genuinely effective transaction monitoring programme looks like in a Singapore context.

Singapore's Transaction Monitoring Regulatory Framework
Transaction monitoring obligations in Singapore flow from three regulatory instruments. Understanding the differences between them matters — particularly for payment service providers, whose obligations are sometimes confused with bank requirements.
MAS Notice 626 (Banks)
MAS Notice 626, issued under the Banking Act, is the primary AML/CFT requirement for Singapore-licensed banks. Paragraphs 19–27 set out monitoring requirements: banks must implement systems to detect unusual or suspicious transactions, investigate alerts within defined timeframes, and document monitoring outcomes in a form that MAS can review.
The full obligations under Notice 626 are covered in detail in our [MAS Notice 626 Transaction Monitoring Requirements guide](/compliance-hub/mas-notice-626-transaction-monitoring). What matters for this discussion is that Notice 626 sets a floor, not a ceiling. MAS expectations in examination have consistently run ahead of the minimum text.
MAS Notices PSN01 and PSN02 (Payment Service Providers)
Since the Payment Services Act (PSA) came into force in 2020, licensed payment institutions — standard payment institutions and major payment institutions — have had AML/CFT obligations that mirror the core requirements of Notice 626, adapted for the payment services context.
A cross-border remittance operator has the same obligation to monitor for unusual activity as a bank. The typologies look different — faster transaction cycling, higher cross-border transfer volumes, shorter customer history — but the regulatory requirement is equivalent.
This matters because some licensed payment institutions still treat their monitoring obligations as lighter than bank-grade. MAS examination findings published in the 2024 supervisory expectations document specifically noted that AML controls at payment institutions were "less mature" than at banks — which means this is now an examination priority.
MAS AML/CFT Supervisory Expectations (2024)
The 2024 MAS supervisory expectations document is the most direct signal of what MAS is looking for. It followed the 2023 enforcement action and a broader review of AML/CFT controls across supervised institutions.
Transaction monitoring appears in three of the five priority areas in that document:
- Alert logic that is not calibrated to the institution's specific risk profile
- Insufficient monitoring intensity for high-risk customers
- Weak documentation of alert investigation outcomes
None of these are technical failures. They are process and governance failures — which is what makes them significant. An institution can have sophisticated monitoring software and still fail on all three.
What MAS Examiners Actually Check
Notice 626 describes what is required. MAS examinations test whether requirements are met in practice. Based on examination findings and regulatory guidance, MAS reviewers focus on four areas in transaction monitoring assessments.
Alert calibration against actual risk
MAS does not expect every institution to use the same alert thresholds. It expects every institution to use thresholds that reflect its own customer risk profile.
An institution whose customers are predominantly high-net-worth individuals with complex cross-border financial structures should have monitoring rules calibrated for that population — not rules designed for retail banking that happen to flag some of the same transactions.
In practice, examiners ask: how were these thresholds set? When were they last reviewed? What changed in your customer book since the last calibration, and how did the monitoring reflect that? Institutions that cannot answer these questions specifically — with dates, documented rationale, and sign-off from a named senior officer — are likely to receive findings.
Alert investigation documentation
This is where most examination failures occur, and it is not because institutions failed to review alerts.
MAS expects a written record for each alert: what the analyst found, why the transaction was or was not considered suspicious, and what action was or was not taken. A disposition of "reviewed — no SAR required" without supporting rationale does not satisfy this requirement. The expectation is closer to: "reviewed the customer's transaction history, the stated purpose of the account, and the counterparty profile. The transaction pattern is consistent with the customer's documented business activities and does not meet the threshold for filing."
Institutions that have good detection logic but poor investigation documentation often present worse in examination than institutions with simpler detection that document everything carefully.
Coverage of high-risk customers
FATF Recommendation 10 and Notice 626 both require enhanced monitoring for high-risk customers. MAS examiners check whether the monitoring programme reflects this operationally — not just in policy.
A specific check: do high-risk customers generate more alerts per capita than standard-risk customers? If not, one of two things is happening: either the monitoring programme is not applying enhanced measures to high-risk accounts, or it is applying enhanced measures but they are not generating additional alerts — which means the enhanced measures are not actually detecting more.
Either way, the institution needs to be able to explain the distribution clearly.
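That distribution check is easy to run as a periodic self-assessment. A minimal sketch with hypothetical 12-month figures:

```python
def alerts_per_customer(alert_counts: dict[str, int],
                        customer_counts: dict[str, int]) -> dict[str, float]:
    """Alerts per customer by risk tier, the distribution an examiner
    will ask the institution to explain."""
    return {tier: alert_counts.get(tier, 0) / customer_counts[tier]
            for tier in customer_counts}

# Hypothetical figures: 1,000 high-risk and 50,000 standard-risk customers
rates = alerts_per_customer({"high": 900, "standard": 4_100},
                            {"high": 1_000, "standard": 50_000})
# If enhanced measures are working, this should come out True
high_risk_monitored_more = rates["high"] > rates["standard"]
```

Running this quarterly, and recording the result and the explanation, is precisely the kind of documented self-assessment that survives examination.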
The audit trail
When MAS examines a monitoring programme, examiners review a sample of alerts from the past 12 months. For each sampled alert, they should be able to see: which rule or model triggered it, when it was assigned for investigation, who reviewed it, what the disposition decision was, the written rationale, and whether an STR was filed.
If any of these elements cannot be produced — because the system does not log them, or because records were not retained — the examination finding is straightforward.
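The elements examiners sample map directly onto a record structure. A sketch of one possible shape; the field names are illustrative, not a MAS-prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AlertAuditRecord:
    alert_id: str
    trigger: str                  # rule or model that generated the alert
    generated_at: datetime
    assigned_at: datetime
    analyst: str
    disposition: str              # e.g. "closed - not suspicious", "escalated"
    rationale: str                # the written reasoning examiners sample
    str_filed: bool
    disposed_at: Optional[datetime] = None

    def hours_to_disposition(self) -> Optional[float]:
        """Time between alert generation and disposition, in hours."""
        if self.disposed_at is None:
            return None
        return (self.disposed_at - self.generated_at).total_seconds() / 3600
```

If your monitoring system cannot export every field in a record like this for any alert from the past 12 months, that gap is worth closing before the next examination, not during it.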
Post-2023: What Changed
The 2023 enforcement action changed the operational context for transaction monitoring in Singapore in three specific ways.
Typology libraries need to reflect the patterns that were missed. The S$3 billion case involved specific patterns: shell companies receiving large transfers followed by property purchases, multiple entities with overlapping beneficial ownership, cash-intensive businesses used to layer funds into the formal banking system. These are not novel typologies — FATF and MAS had documented them before 2023. The question is whether monitoring rules were actually in place to detect them.
MAS has increased examination intensity. Following the 2023 case, MAS publicly committed to strengthening AML/CFT supervision, including more frequent and more intrusive examinations of systemically important institutions. Compliance teams that previously experienced relatively light-touch monitoring reviews should expect more detailed examination engagement going forward.
The reputational context for non-compliance has shifted. Before 2023, AML failures in Singapore were largely a technical compliance matter. After an enforcement action that received global coverage and led to diplomatic implications, the reputational consequences of a significant AML failure for a Singapore-licensed institution are much more visible.
Transaction Monitoring for PSA-Licensed Payment Institutions
For firms licensed under the PSA, there are specific practical considerations that bank-focused guidance does not address.
Shorter customer history. Payment service firms typically have shorter customer relationships than banks — sometimes months rather than years. ML-based anomaly detection models need historical data to establish baseline behaviour. When that history is limited, rules-based detection of known typologies needs to carry more weight in the alert logic.
Cross-border transaction volumes. PSA licensees handling international remittances have inherently higher cross-border exposure. Monitoring typologies must specifically address: structuring across multiple corridors, unusual shifts in destination country distribution, and dormant accounts that suddenly receive high-volume cross-border inflows.
Account lifecycle monitoring. New accounts that begin transacting immediately at high volume, or accounts that show no activity for an extended period before suddenly becoming active, are specific patterns that PSA-specific monitoring rules should address.
MAS has stated directly that it expects payment institutions to "uplift" their AML/CFT controls to a level closer to bank-grade. For transaction monitoring specifically, that means investment in calibration, documentation, and governance — not simply deploying a vendor system and assuming requirements are met.

What Effective Transaction Monitoring Looks Like in Singapore
Across MAS guidance, examination findings, and the post-2023 supervisory environment, an effective Singapore TM programme has six characteristics:
1. Documented calibration rationale. Alert thresholds are set with reference to the institution's customer risk assessment and reviewed when the customer book changes. Every threshold has a documented basis.
2. Coverage of Singapore-specific typologies. Beyond generic AML typologies, the monitoring library includes patterns documented in Singapore enforcement actions: shell company structuring, property-linked layering, cross-border transfer cycling across high-risk jurisdictions.
3. Alert investigation documentation that can survive examination. Every alert has a written disposition, not a checkbox. High-risk customer alerts have enhanced documentation. STR filings link back to specific alerts.
4. Defined escalation process. When an analyst is uncertain, there is a clear path to the Money Laundering Reporting Officer. Escalation decisions are recorded.
5. Regular calibration review. The monitoring programme is tested — whether through independent review, internal audit, or structured self-assessment — at least annually. Results and follow-up actions are documented.
6. Model governance for ML components. Where ML-based detection is used, model performance is tracked, validation is documented, and retraining triggers are defined. The validation record sits with the institution.
Taking the Next Step
If your institution is preparing for a MAS examination, reviewing its monitoring programme post-2023, or evaluating new transaction monitoring software, the starting point is a clear-eyed assessment of where your current programme sits against MAS expectations.
Tookitaki's FinCense platform is used by financial institutions across Singapore, Malaysia, Australia, and the Philippines. It is pre-configured with APAC-specific typologies — including patterns documented in Singapore enforcement actions — and produces alert documentation in the format MAS examiners review.
Book a discussion with Tookitaki's team to see FinCense in a live environment calibrated for your institution type and region.
For a broader introduction to transaction monitoring requirements across all five APAC markets — Singapore, Australia, Malaysia, Philippines, and New Zealand — see our [complete transaction monitoring guide].

Transaction Monitoring Software: A Buyer's Guide for Banks and Fintechs
The compliance officer who bought their current transaction monitoring system probably saw a very good demo. Alert accuracy was 90% in the sandbox. Implementation was "6–8 weeks." The vendor had a case study from a Tier-1 bank.
Eighteen months later, the team processes 600 alerts per day, 530 of which are false positives. Two analysts have left. The backlog is three weeks long. An AUSTRAC examination is booked for Q4.
What happened between the demo and now is usually the same story: the sandbox didn't reflect production data, the rules weren't tuned for the actual customer base, and the implementation timeline quietly became six months.
This guide is not a vendor comparison. It is a diagnostic framework for telling effective transaction monitoring software from systems that look good until they're live.

Why Most TM Software Evaluations Go Wrong
Most procurement processes ask vendors to list their features. That is the wrong test.
Features are table stakes. What matters is performance in your specific environment — your customer mix, your transaction volumes, your risk profile. And vendor demonstrations are optimised to impress, not to replicate reality.
Three problems appear repeatedly in post-implementation reviews:
Alert accuracy drops between demo and production. Sandbox environments use curated, clean datasets. Production data is messier: duplicate records, legacy fields, missing counterparty data. Alert models calibrated on clean data degrade when they hit the real thing.
Rule libraries built for someone else. A retail bank in Sydney and a cross-border remittance operator in Singapore do not share transaction patterns. A rule library tuned for one will generate noise for the other. Most vendors deploy the same library for both and call it "risk-based."
"Transparent" models that cannot be tuned. Vendors frequently describe their ML systems as transparent and auditable. The test is whether your team can actually adjust the models when performance drifts, or whether every change requires a vendor engagement.
What "Effective" Means to Regulators
Before comparing systems, it is worth knowing what your regulator will assess. In APAC, the standard is consistent: regulators do not want to see a system that exists. They want evidence it works.
AUSTRAC (Australia): AML/CTF Rule 16 requires monitoring to be risk-based — thresholds must reflect your specific customer risk assessment, not generic defaults. AUSTRAC's enforcement record is specific on this point: both the Commonwealth Bank's AUD 700 million settlement in 2018 and Westpac's AUD 1.3 billion settlement in 2021 cited inadequate transaction monitoring as a direct failure — not the absence of a system, but the failure of one already in place.
MAS (Singapore): Notice 626 (paragraphs 19–27) requires FIs to detect, monitor, and report unusual transactions. MAS supervisory expectations published in 2024 flagged two recurring weaknesses across supervised firms: inadequate alert calibration and insufficient documentation of monitoring outcomes. Both are failures of execution, not of system selection.
BNM (Malaysia): The AML/CFT Policy Document (2023) requires an "effective" monitoring programme. Effectiveness is assessed through examination — specifically, whether the alerts generated correspond to the actual risk in the institution's customer base.
The practical consequence: an RFP that evaluates features without assessing tuning capability, calibration flexibility, and audit trail quality is not evaluating what regulators will look at.
7 Questions to Ask Any TM Vendor
1. What is your false positive rate in a live environment comparable to ours?
This is the single number that determines analyst workload. A false positive rate of 98% means 98 of every 100 alerts require investigation time before the analyst can close them as non-suspicious. At a mid-sized bank processing 500 alerts per day, that is 490 dead-end investigations.
The benchmark: well-tuned AI-augmented systems reach false positive rates of 80–85% in production. Legacy rule-only systems routinely run at 97–99%.
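The workload arithmetic is simple enough to make explicit. A minimal sketch, using the illustrative volumes and rates from this section:

```python
def dead_end_alerts(daily_alerts: int, false_positive_rate: float) -> float:
    """Alerts per day that analysts will investigate and then close as non-suspicious."""
    return daily_alerts * false_positive_rate

# Legacy rule-only system: 500 alerts/day at a 98% false positive rate.
legacy = dead_end_alerts(500, 0.98)   # 490 dead-end investigations per day

# Well-tuned AI-augmented system at the low end of the 80-85% benchmark.
tuned = dead_end_alerts(500, 0.80)    # 400 dead-end investigations per day
```

Even at the benchmark rate, most alerts are still false positives; the difference is whether the team loses 400 investigations a day or 490.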
Ask the vendor to show actual data from a comparable client, not an anonymised case study. If they cannot, ask why.
2. How are alerts generated — rules, models, or a combination?
Pure rules-based systems are easy to validate for audit purposes but brittle: they miss patterns they were not programmed to detect, and new typologies go unnoticed until the rules are manually updated.
Pure ML systems can detect novel patterns but are harder to validate and explain to regulators who need to understand why an alert was raised.
Hybrid systems — rules for known typologies, models for anomaly detection — are generally more defensible. Ask specifically: how does the vendor update the rules and models when the regulatory environment changes? What happened when AUSTRAC updated its rules in 2023, or when MAS revised its supervisory expectations in 2024?
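The hybrid layering can be sketched in a few lines. Everything below (the threshold, the rule, the anomaly cutoff) is hypothetical and for illustration only, not any vendor's actual detection logic:

```python
LARGE_TRANSFER_THRESHOLD = 10_000  # hypothetical rule threshold

def rule_alerts(txn: dict) -> list[str]:
    """Deterministic checks for known typologies; easy to explain to an examiner."""
    alerts = []
    if txn["amount"] >= LARGE_TRANSFER_THRESHOLD and txn["account_age_days"] < 30:
        alerts.append("RULE: large transfer on a newly opened account")
    return alerts

def model_alerts(txn: dict, anomaly_score: float, cutoff: float = 0.9) -> list[str]:
    """Model layer for patterns no rule anticipated.
    In practice the score comes from the vendor's model; stubbed here."""
    if anomaly_score >= cutoff:
        return [f"MODEL: anomaly score {anomaly_score:.2f} above cutoff {cutoff}"]
    return []

def hybrid_alerts(txn: dict, anomaly_score: float) -> list[str]:
    """Union of both layers: rules for the known, models for the novel."""
    return rule_alerts(txn) + model_alerts(txn, anomaly_score)
```

The defensibility argument lives in `rule_alerts`: when a regulator asks why an alert fired, the rule layer has a one-line answer, while the model layer needs validation documentation.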
3. What does the analyst workflow look like after an alert fires?
Detection is only the first step. Analysts spend more time on alert investigation than on any other compliance task. A system that generates 200 precise, context-rich alerts is worth more operationally than one that generates 500 alerts requiring 40 minutes of manual research each before a disposition decision can be made.
Ask to see the actual analyst interface, not the executive dashboard. Check whether the alert displays customer history, previous alerts, peer comparison, and relevant counterparty data — or whether the analyst has to pull all of that separately.
4. What does a MAS- or AUSTRAC-ready audit log look like?
When a regulator examines your monitoring programme, they review the logic that generated each alert, the analyst's disposition decision, and the written rationale. They check whether high-risk customers received appropriate monitoring intensity and whether there is a documented escalation path for uncertain cases.
Ask the vendor to show you a sample audit log from a recent client examination. It should show: the rule or model that triggered the alert, the analyst who reviewed it, the decision, the rationale, and the time between alert generation and disposition. If the vendor cannot produce this, the system is not regulatory-examination-ready.
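The minimum fields such a log needs can be sketched as a simple record type. The schema below is a hypothetical illustration, not a format prescribed by MAS or AUSTRAC:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuditLogEntry:
    """One alert's examination trail. Field names are illustrative only."""
    alert_id: str
    trigger: str            # the rule or model that generated the alert
    analyst: str            # who reviewed it
    decision: str           # e.g. "closed_non_suspicious", "escalated", "str_filed"
    rationale: str          # the written justification an examiner will read
    generated_at: datetime  # when the alert fired
    disposed_at: datetime   # when the decision was recorded

    @property
    def disposition_hours(self) -> float:
        """Time from alert generation to disposition, a figure examiners check."""
        return (self.disposed_at - self.generated_at).total_seconds() / 3600
```

If any of these fields cannot be produced per alert from the vendor's system, that is the gap an examination will find.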
5. What does implementation actually take?
Ask for the implementation timeline — from contract to production-ready performance — for the vendor's most recent three comparable deployments. Not the standard brochure. Not the best case. Three actual recent clients.
Specifically: how long from contract signature to go-live? How long from go-live to the point where alert accuracy reached its steady-state level? Those are two different numbers, and the second one is the one that matters for planning.
6. How does the vendor handle model drift?
ML models degrade over time as transaction patterns change. A model trained on 2023 data will underperform against 2026 transaction patterns if it has not been retrained. Ask how frequently models are retrained, who initiates the review, and what triggers a retraining event.
Also ask: who holds the model validation documentation? Model governance is an emerging examination focus for MAS, AUSTRAC, and BNM. The validation record needs to sit with the institution, not only with the vendor.
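One common, vendor-neutral drift check is the population stability index (PSI), computed between the distribution a model was trained on and the distribution it currently sees. A minimal sketch follows; the 0.25 cutoff is a widely used heuristic, not a regulatory threshold:

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (training-era) sample and current data.
    Common heuristic: > 0.25 signals drift worth a retraining review."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # floor each bucket share to avoid log(0) on empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

Running this check on a schedule, and logging the result, is one concrete way the institution (not the vendor) can hold the model validation record the regulators now ask about.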
7. How does the system handle regulatory updates?
APAC's AML/CFT rules change more frequently than those in most other regions. AUSTRAC updated Chapter 16 in 2023. MAS revised its AML/CFT supervisory expectations in 2024. BNM issued a revised AML/CFT Policy Document in 2023.
When these changes occur, who updates the system — and how quickly? Some vendors treat regulatory updates as professional services engagements billed separately. Others maintain a regulatory content team that pushes updates to all clients. Ask which model applies and get the answer in writing.

Banks vs. Fintechs: Different Needs, Different Priorities
A Tier-2 bank with 8 million retail customers and a PSA-licensed payment institution handling cross-border transfers have different TM requirements. The evaluation criteria shift accordingly.
For banks:
Volume and integration architecture matter first. A system processing 500,000 transactions per day needs different infrastructure than one processing 5,000. Ask specifically about latency in real-time monitoring scenarios and how the system handles peak volumes. Integration with core banking — particularly if the core is a legacy platform — is where implementations most commonly fail.
For fintechs and payment service providers:
Real-time detection weight is higher relative to batch processing. Cross-border typologies differ from domestic banking typologies — the vendor's rule library should include patterns specific to cross-border payment fraud, structuring across multiple jurisdictions, and rapid account cycling. Customer history is often short, which means models that require 12+ months of transaction data to perform will underperform in fast-growing books.
Total Cost of Ownership: The Number Most RFPs Undercount
The licence fee is the visible cost. The actual costs include:
- Implementation and integration: Typically 2–4x the first-year licence cost for a mid-size institution. A vendor that quotes "6–8 weeks" for implementation should be asked for the last five clients' actual implementation timelines before that number is used in any business case.
- Analyst capacity: A high false positive rate is not just an accuracy problem — it is a staffing cost. At a 97% false positive rate, a team processing 400 daily alerts spends approximately 85% of its investigation time on non-suspicious transactions. A 10-percentage-point improvement in accuracy frees roughly 2,400 analyst-hours per year for a 30-person operations team.
- Regulatory risk: The cost of an enforcement action belongs in the risk-adjusted total cost of ownership calculation. Westpac's 2020 settlement was AUD 1.3 billion. The remediation programme that followed cost hundreds of millions more. Against those figures, the difference between a well-tuned system and a merely adequate one looks very different on a business case.
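The analyst-hours figure in the capacity line can be reproduced from simple inputs. In the sketch below, the 10 minutes of triage per false positive and the year-round alert flow are illustrative assumptions, not measured benchmarks:

```python
# Back-of-envelope reconstruction of the "roughly 2,400 analyst-hours" figure.
daily_alerts = 400
improvement_pp = 0.10            # 10-percentage-point accuracy gain
minutes_per_false_positive = 10  # assumed average triage time (illustrative)

avoided_per_day = daily_alerts * improvement_pp                # 40 fewer dead ends/day
hours_freed_per_year = avoided_per_day * minutes_per_false_positive / 60 * 365
# about 2,433 analyst-hours per year, consistent with "roughly 2,400"
```

Running the same arithmetic with your own alert volumes and triage times is a quick way to sanity-check any vendor's efficiency claim before it goes into a business case.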
What Tookitaki's FinCense Does Differently
FinCense is Tookitaki's transaction monitoring platform, built specifically for APAC financial institutions.
The core technical differentiator is federated learning. Most ML-based TM systems train models on a single institution's data, which limits pattern diversity. FinCense's models learn from typology patterns across the Tookitaki client network — without sharing raw transaction data between institutions. The result is detection capability that reflects a broader range of financial crime patterns than any single institution's data could produce.
In production deployments across APAC, FinCense has reduced false positive rates by up to 50% compared to legacy rule-based systems. In analyst workflow terms: a team processing 400 alerts per day at a 97% false positive rate could reduce that to approximately 200 alerts at the same investigation standard — roughly halving the time spent on non-productive reviews.
The platform is pre-integrated with APAC-specific typologies for AUSTRAC, MAS, BNM, BSP, and FMA regulatory environments. Regulatory updates are included in the standard contract.
Ready to Evaluate?
If your institution is reviewing its transaction monitoring system or implementing one for the first time, the seven questions in this guide are a starting framework. The answers will tell you more about a vendor's actual capability than any feature demonstration.
Book a discussion with Tookitaki's team to see FinCense in a live environment calibrated for your institution type and region. Or read our complete guide, "What Is Transaction Monitoring? The Complete 2026 Guide", before the vendor conversations begin.