A suspense account temporarily records transactions that have yet to be assigned to their proper accounts. It sits on the general ledger and holds specific transaction amounts until they can be identified; any sums recorded in it will ultimately be transferred to a permanent account.
So, what is the requirement for a suspense account in the first place?
A suspense account is needed when the appropriate account cannot be determined at the time a transaction is recorded. The transaction sits in the suspense account, which acts as its holding account, until it can be transferred to its permanent account. If a large number of transactions remain unassigned at the end of a reporting period, they go unrecorded in their proper accounts, producing inaccurate financial results.
Why are these accounts so important?
- They allow transactions to be posted before sufficient information is available to create an entry for the correct account(s). Without posting these transactions, some may go unrecorded by the end of a reporting period, which could result in inaccurate financial results.
- The items in a suspense account represent unallocated amounts. As such, having the account presented on financial statements with a remaining balance may be viewed negatively by outside investors. Therefore, suspense accounts should be cleared by the end of each financial period.
- Using a suspense account allows the accountant to review each individual transaction before clearing it. The objective is to shift each transaction to its permanent account before the close of the reporting period.
- Over time, transactions become harder to clear, especially when there is minimal documentation explaining why they were placed in suspense in the first place. To minimise this possibility, suspense items should be tracked on the balance sheet and followed up promptly.
- Suspense accounts are also considered a control risk. Under the Sarbanes-Oxley (SOX) Act of 2002, they must be analysed by product type, aging category, and business justification, so that it is understood exactly what is in the account, and this information must be shared with the auditors on a regular basis.
Examples
The following are a few examples of situations in which it is viable to use or open a suspense account:
- If the payee is unknown
If a payment is made to the business but the accountant does not know who sent it, the sum must be placed in a suspense account until additional inquiry is completed. Once the accountant has reviewed the invoices or other communications and validated them with the client/customer, the funds can be sent to the appropriate account.
- In the event of partial payments
Partial payments, whether intended or unintentional, can be difficult to reconcile with bills. The accountant or those in control can place the payments in a suspense account until they can determine whose accounts the transactions belong to. For example, if a financial institution gets a $50 partial payment from a customer, it must first create a suspense account.
The accountant will then credit the suspense account with $50 and debit the cash account with the same amount. When the company receives the entire payment from the customer, they will debit $50 from the suspense account and credit the receivable accounts with the same amount. Once the process is finished, the accountant can close the suspense account and transfer the money to the correct account.
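The partial-payment flow above can be sketched as a pair of double-entry postings. This is a minimal illustration in Python; the account names and sign convention are illustrative, not taken from any accounting standard:

```python
from collections import defaultdict

# Running balances per account. Sign convention (illustrative): debits
# increase a balance, credits decrease it.
balances = defaultdict(float)

def post(debit_account, credit_account, amount):
    """Record one double-entry posting."""
    balances[debit_account] += amount
    balances[credit_account] -= amount

# A $50 partial payment arrives and cannot yet be matched to an invoice:
# debit Cash, credit Suspense.
post("Cash", "Suspense", 50.0)

# Once the payment is matched to the customer, clear the suspense account:
# debit Suspense, credit Accounts Receivable.
post("Suspense", "Accounts Receivable", 50.0)

assert balances["Suspense"] == 0.0  # suspense nets to zero once cleared
```

The invoice-classification example that follows works the same way, with accounts payable and the department's supply account taking the credit and debit positions.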
- In case one can’t classify a transaction
This situation can arise when a small business owner or senior executive is unsure how to classify a transaction. If this is the case, they might create a suspense account before they receive aid from their accountant. For example, a supplier may deliver a $1,000 invoice for services. If the person in charge is unclear which department of their company should be charged, they can temporarily store this sum in a suspense account.
To do so, users must first create a suspense account, then debit the suspense account and credit accounts payable. Once the department has been identified, the accountant or management can quickly bill it, for example to the buying department's supply account. Finally, to complete the transaction, the accountant credits the suspense account and debits the supply account.
Best Practices for Accounting
Best practices for a suspense account:
- The accounting head or those in charge of the firm should review the items in a suspense account on a regular basis, to ensure that transaction amounts are moved to their proper accounts as soon as possible. Otherwise, the balance in the suspense account may grow to significant proportions and become difficult to manage over time. This is especially true for transactions with little evidence as to why they were kept in suspense in the first place.
- The suspense account balance should also be measured daily and used by the controller as a trigger for ongoing investigations. This data is valuable for tracking transactions that are regularly redirected to the suspense account; it helps to strengthen processes and makes it simpler to recognise similar items in the future, hence keeping them out of the account.
- It is recommended practice to clear all items from a suspense account by the end of the fiscal year; otherwise, the company may issue financial statements containing unidentified transactions, which could lead to errors in the statements.
Suspense Account on Balance Sheets
Showing a suspense account on the balance sheet is more straightforward than it seems, because it is not much different from other accounts. For instance, if the accountant or the owner is not sure which account to place a transaction into, it is moved to the suspense account for the time being, and that account appears on the balance sheet.
Following additional research, the accountant may discover that the money is intended for their marketing section, in which case he or she will transfer the funds to the correct account, ensuring that it balances on the balance sheet.
So, in terms of the balance sheet, the goal of a suspense account is always to have a balance of zero, indicating that everything has been accurately recorded and that no transactions remain unaccounted for. Suspense account balances carried on the balance sheet are not desirable, since they can make it difficult to balance the books properly.
Using a suspense account in accounting, on the other hand, is analogous to putting a paper on a "to file" pile. Like any other stack that must eventually be filed, a suspense account cannot hold unidentified sums indefinitely; the correct account must be found at some point. Large corporations may clear their suspense accounts periodically, whereas small enterprises can do so more often.
The “King” Who Promised Wealth: Inside the Philippines Investment Scam That Fooled Many
When authority is fabricated and trust is engineered, even the most implausible promises can start to feel real.
The Scam That Made Headlines
In a recent crackdown, the Philippine National Police arrested 15 individuals linked to an alleged investment scam that had been quietly unfolding across parts of the country.
At the centre of it all was a man posing as a “King” — a self-styled figure of authority who convinced victims that he had access to exclusive investment opportunities capable of delivering extraordinary returns.
Victims were drawn in through a mix of persuasion, perceived legitimacy, and carefully orchestrated narratives. Money was collected, trust was exploited, and by the time doubts surfaced, the damage had already been done.
While the arrests mark a significant step forward, the mechanics behind this scam reveal something far more concerning: a pattern that financial institutions are increasingly struggling to detect in real time.

Inside the Illusion: How the “King” Investment Scam Worked
At first glance, the premise sounds almost unbelievable. But scams like these rarely rely on logic; they rely on psychology.
The operation appears to have followed a familiar but evolving playbook:
1. Authority Creation
The central figure positioned himself as a “King” — not in a literal sense, but as someone with influence, access, and insider privilege. This created an immediate power dynamic. People tend to trust authority, especially when it is presented confidently and consistently.
2. Exclusive Opportunity Framing
Victims were offered access to “limited” investment opportunities. The framing was deliberate — not everyone could participate. This sense of exclusivity reduced skepticism and increased urgency.
3. Social Proof and Reinforcement
Scams of this nature often rely on group dynamics. Early participants, whether real or planted, reinforce credibility. Testimonials, referrals, and word-of-mouth create a false sense of validation.
4. Controlled Payment Channels
Funds were collected through a combination of cash handling and potentially structured transfers. This reduces traceability and delays detection.
5. Delayed Realisation
By the time inconsistencies surfaced, victims had already committed funds. The illusion held just long enough for the operators to extract value and move on.
This wasn’t just deception. It was structured manipulation, designed to bypass rational thinking and exploit human behaviour.
Why This Scam Is More Dangerous Than It Looks
It’s easy to dismiss this as an isolated case of fraud. But that would be a mistake.
What makes this incident particularly concerning is not the narrative — it’s the adaptability of the model.
Unlike traditional fraud schemes that rely heavily on digital infrastructure, this scam blended offline trust-building with flexible payment collection methods. That makes it significantly harder to detect using conventional monitoring systems.
More importantly, it highlights a shift: Fraud is no longer just about exploiting system vulnerabilities. It’s about exploiting human behaviour and using financial systems as the final execution layer.
For banks and fintechs, this creates a blind spot.
Following the Money: The Likely Financial Footprint
From a compliance and AML perspective, scams like this leave behind patterns — but rarely in a clean, linear form.
Based on the nature of the operation, the financial footprint may include:
- Multiple small-value deposits or transfers from different individuals, often appearing unrelated
- Use of intermediary accounts to collect and consolidate funds
- Rapid movement of funds across accounts to break transaction trails
- Cash-heavy collection points, reducing digital visibility
- Inconsistent transaction behaviour compared to customer profiles
Individually, these signals may not trigger alerts. But together, they form a pattern — one that requires contextual intelligence to detect.
Red Flags Financial Institutions Should Watch
For compliance teams, the challenge lies in identifying these patterns early — before the damage escalates.
Transaction-Level Indicators
- Sudden inflow of funds from multiple unrelated individuals into a single account
- Frequent small-value transfers followed by rapid aggregation
- Outbound transfers shortly after deposits, often to new or unverified beneficiaries
- Structuring behaviour that avoids typical threshold-based alerts
- Unusual spikes in account activity inconsistent with historical patterns
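Several of these transaction-level indicators reduce to simple aggregation checks. As one hedged sketch, the threshold-avoidance pattern can be expressed as a rolling-window rule; the $10,000 threshold and three-day window below are illustrative assumptions, not taken from any regulation:

```python
from datetime import datetime, timedelta

THRESHOLD = 10_000        # illustrative per-transaction reporting threshold
WINDOW = timedelta(days=3)

def flags_structuring(transfers):
    """transfers: time-sorted list of (timestamp, amount) for one account.
    True if individually sub-threshold transfers aggregate above the
    threshold within any rolling WINDOW."""
    small = [(t, a) for t, a in transfers if a < THRESHOLD]
    for i in range(len(small)):
        start = small[i][0]
        window_total = sum(a for t, a in small[i:] if t - start <= WINDOW)
        if window_total > THRESHOLD:
            return True
    return False

txns = [
    (datetime(2024, 1, 1), 4_000),
    (datetime(2024, 1, 2), 4_500),
    (datetime(2024, 1, 3), 3_000),
]
# Each transfer is under the threshold; the three-day total (11,500) is not.
assert flags_structuring(txns)
```

A production rule would of course work per account across corridors and currencies; the point is that the pattern is a windowed aggregate, not a per-transaction check.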
Behavioural Indicators
- Customers participating in transactions tied to “investment opportunities” without clear documentation
- Increased urgency in fund transfers, often under external pressure
- Reluctance or inability to explain transaction purpose clearly
- Repeated interactions with a specific set of counterparties
Channel & Activity Indicators
- Use of informal or non-digital communication channels to coordinate transactions
- Sudden activation of dormant accounts
- Multiple accounts linked indirectly through shared beneficiaries or devices
- Patterns suggesting third-party control or influence
These are not standalone signals. They need to be connected, contextualised, and interpreted in real time.
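One way to connect and contextualise such signals is a weighted composite score, where no single indicator raises an alert but a combination does. This is a minimal sketch; the indicator names, weights, and threshold are all illustrative assumptions, not a detection model from any real system:

```python
# Illustrative weights per indicator; a real system would calibrate these
# against labelled cases rather than hand-pick them.
WEIGHTS = {
    "many_unrelated_inflows": 0.35,
    "rapid_outbound_after_deposit": 0.30,
    "dormant_account_activated": 0.20,
    "informal_channel_coordination": 0.15,
}
ALERT_THRESHOLD = 0.6

def composite_score(signals):
    """signals: dict of indicator name -> bool. Returns (score, alert?)."""
    score = sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return score, score >= ALERT_THRESHOLD

score, alert = composite_score({
    "many_unrelated_inflows": True,
    "rapid_outbound_after_deposit": True,
})
# Two individually weak signals combine to cross the alert threshold.
assert alert
```

The design point is the one made above: each signal alone stays below any sensible alerting bar, and only the combination carries enough weight to trigger review.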
The Real Challenge: Why These Scams Slip Through
This is where things get complicated.
Scams like the “King” investment scheme are difficult to detect because they often appear legitimate — at least on the surface.
- Transactions are customer-initiated, not system-triggered
- Payment amounts are often below risk thresholds
- There is no immediate fraud signal at the point of transaction
- The story behind the payment exists outside the financial system
Traditional rule-based systems struggle in such scenarios. They are designed to detect known patterns, not evolving behaviours.
And by the time a pattern becomes obvious, the funds have usually moved.

Where Technology Makes the Difference
Addressing these risks requires a shift in how financial institutions approach detection.
Instead of looking at transactions in isolation, institutions need to focus on behavioural patterns, contextual signals, and scenario-based intelligence.
This is where modern platforms like Tookitaki’s FinCense play a critical role.
By leveraging:
- Scenario-driven detection models informed by real-world cases
- Cross-entity behavioural analysis to identify hidden connections
- Real-time monitoring capabilities for faster intervention
- Collaborative intelligence from ecosystems like the AFC Ecosystem
…institutions can move from reactive detection to proactive prevention.
The goal is not just to catch fraud after it happens, but to interrupt it while it is still unfolding.
From Headlines to Prevention
The arrest of those involved in the “King” investment scam is a reminder that enforcement is catching up. But it also highlights a deeper truth: Scams are evolving faster than traditional detection systems.
What starts as an unbelievable story can quickly become a widespread financial risk — especially when trust is weaponised and financial systems are used as conduits.
For banks and fintechs, the takeaway is clear.
Prevention cannot rely on static rules or delayed signals. It requires continuous adaptation, shared intelligence, and a deeper understanding of how modern scams operate.
Because the next “King” may not call himself one.
But the playbook will look very familiar.

Transaction Monitoring in Singapore: MAS Requirements and Best Practices
In August 2023, the Singapore Police Force carried out the largest money laundering crackdown in the country's history. S$3 billion in assets were seized from ten foreign nationals who had moved funds through Singapore's financial system for years — through banks, through licensed payment institutions, through corporate accounts holding everything from luxury cars to commercial property.
For compliance teams at Singapore-licensed financial institutions, the question that followed was not abstract. It was: would our transaction monitoring have caught this?
MAS has been examining that question across the industry since, through an intensified supervisory programme that has put transaction monitoring under closer scrutiny than at any point in the past decade. This guide covers what Singapore law requires, what MAS examiners actually check, and what a genuinely effective transaction monitoring programme looks like in a Singapore context.

Singapore's Transaction Monitoring Regulatory Framework
Transaction monitoring obligations in Singapore flow from three regulatory instruments. Understanding the differences between them matters — particularly for payment service providers, whose obligations are sometimes confused with bank requirements.
MAS Notice 626 (Banks)
MAS Notice 626, issued under the Banking Act, is the primary AML/CFT requirement for Singapore-licensed banks. Paragraphs 19–27 set out monitoring requirements: banks must implement systems to detect unusual or suspicious transactions, investigate alerts within defined timeframes, and document monitoring outcomes in a form that MAS can review.
The full obligations under Notice 626 are covered in detail in our [MAS Notice 626 Transaction Monitoring Requirements guide](/compliance-hub/mas-notice-626-transaction-monitoring). What matters for this discussion is that Notice 626 sets a floor, not a ceiling. MAS expectations in examination have consistently run ahead of the minimum text.
MAS Notices PSN01 and PSN02 (Payment Service Providers)
Since the Payment Services Act (PSA) came into force in 2020, licensed payment institutions — standard payment institutions and major payment institutions — have had AML/CFT obligations that mirror the core requirements of Notice 626, adapted for the payment services context.
A cross-border remittance operator has the same obligation to monitor for unusual activity as a bank. The typologies look different — faster transaction cycling, higher cross-border transfer volumes, shorter customer history — but the regulatory requirement is equivalent.
This matters because some licensed payment institutions still treat their monitoring obligations as lighter than bank-grade. MAS examination findings published in the 2024 supervisory expectations document specifically noted that AML controls at payment institutions were "less mature" than at banks — which means this is now an examination priority.
MAS AML/CFT Supervisory Expectations (2024)
The 2024 MAS supervisory expectations document is the most direct signal of what MAS is looking for. It followed the 2023 enforcement action and a broader review of AML/CFT controls across supervised institutions.
Transaction monitoring appears in three of the five priority areas in that document:
- Alert logic that is not calibrated to the institution's specific risk profile
- Insufficient monitoring intensity for high-risk customers
- Weak documentation of alert investigation outcomes
None of these are technical failures. They are process and governance failures — which is what makes them significant. An institution can have sophisticated monitoring software and still fail on all three.
What MAS Examiners Actually Check
Notice 626 describes what is required. MAS examinations test whether requirements are met in practice. Based on examination findings and regulatory guidance, MAS reviewers focus on four areas in transaction monitoring assessments.
Alert calibration against actual risk
MAS does not expect every institution to use the same alert thresholds. It expects every institution to use thresholds that reflect its own customer risk profile.
An institution whose customers are predominantly high-net-worth individuals with complex cross-border financial structures should have monitoring rules calibrated for that population — not rules designed for retail banking that happen to flag some of the same transactions.
In practice, examiners ask: how were these thresholds set? When were they last reviewed? What changed in your customer book since the last calibration, and how did the monitoring reflect that? Institutions that cannot answer these questions specifically — with dates, documented rationale, and sign-off from a named senior officer — are likely to receive findings.
Alert investigation documentation
This is where most examination failures occur, and it is not because institutions failed to review alerts.
MAS expects a written record for each alert: what the analyst found, why the transaction was or was not considered suspicious, and what action was or was not taken. A disposition of "reviewed — no SAR required" without supporting rationale does not satisfy this requirement. The expectation is closer to: "reviewed the customer's transaction history, the stated purpose of the account, and the counterparty profile. The transaction pattern is consistent with the customer's documented business activities and does not meet the threshold for filing."
Institutions that have good detection logic but poor investigation documentation often present worse in examination than institutions with simpler detection that document everything carefully.
Coverage of high-risk customers
FATF Recommendation 10 and Notice 626 both require enhanced monitoring for high-risk customers. MAS examiners check whether the monitoring programme reflects this operationally — not just in policy.
A specific check: do high-risk customers generate more alerts per capita than standard-risk customers? If not, one of two things is happening: either the monitoring programme is not applying enhanced measures to high-risk accounts, or it is applying enhanced measures but they are not generating additional alerts — which means the enhanced measures are not actually detecting more.
Either way, the institution needs to be able to explain the distribution clearly.
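That distribution check can be reproduced internally before an examiner asks for it. A small sketch, assuming an alert log tagged with each customer's risk tier (the data shapes and tier labels are illustrative):

```python
from collections import Counter

def alerts_per_capita(alert_customer_ids, customers):
    """alert_customer_ids: one entry per alert, the customer that raised it.
    customers: dict of customer_id -> risk tier ('high' or 'standard').
    Returns alerts per customer for each tier."""
    alert_counts = Counter(customers[c] for c in alert_customer_ids)
    population = Counter(customers.values())
    return {tier: alert_counts[tier] / population[tier] for tier in population}

customers = {1: "high", 2: "high", 3: "standard", 4: "standard", 5: "standard"}
rates = alerts_per_capita([1, 1, 2, 3], customers)
# If enhanced measures are working, the high-risk rate should visibly
# exceed the standard-risk rate; here 1.5 vs roughly 0.33.
```

If the two rates come out roughly equal, that is the distribution the institution needs to be able to explain.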
The audit trail
When MAS examines a monitoring programme, examiners review a sample of alerts from the past 12 months. For each sampled alert, they should be able to see: which rule or model triggered it, when it was assigned for investigation, who reviewed it, what the disposition decision was, the written rationale, and whether an STR was filed.
If any of these elements cannot be produced — because the system does not log them, or because records were not retained — the examination finding is straightforward.
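The elements examiners sample map naturally onto a structured record, which makes gaps visible before an examination does. A minimal sketch; the field names are illustrative, not a mandated MAS schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AlertAuditRecord:
    alert_id: str
    trigger: str                 # rule or model that generated the alert
    generated_at: datetime
    assigned_at: datetime        # when it entered the investigation queue
    reviewed_by: str
    disposition: str             # e.g. "closed - not suspicious", "escalated"
    rationale: str               # the written reasoning examiners expect
    str_filed: bool
    str_reference: Optional[str] = None

def examination_ready(record: AlertAuditRecord) -> bool:
    """Producible in examination only if every narrative element is present."""
    return all([record.trigger, record.reviewed_by,
                record.disposition, record.rationale])

record = AlertAuditRecord(
    alert_id="ALT-0001",
    trigger="rule: rapid-aggregation-v3",
    generated_at=datetime(2024, 5, 1, 9, 0),
    assigned_at=datetime(2024, 5, 1, 10, 30),
    reviewed_by="analyst.tan",
    disposition="closed - not suspicious",
    rationale="Pattern consistent with the customer's documented business.",
    str_filed=False,
)
assert examination_ready(record)
```

A record with an empty rationale fails the check, which is exactly the "reviewed — no SAR required" disposition MAS has flagged as insufficient.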
Post-2023: What Changed
The 2023 enforcement action changed the operational context for transaction monitoring in Singapore in three specific ways.
Typology libraries need to reflect the patterns that were missed. The S$3 billion case involved specific patterns: shell companies receiving large transfers followed by property purchases, multiple entities with overlapping beneficial ownership, cash-intensive businesses used to layer funds into the formal banking system. These are not novel typologies — FATF and MAS had documented them before 2023. The question is whether monitoring rules were actually in place to detect them.
MAS has increased examination intensity. Following the 2023 case, MAS publicly committed to strengthening AML/CFT supervision, including more frequent and more intrusive examinations of systemically important institutions. Compliance teams that previously experienced relatively light-touch monitoring reviews should expect more detailed examination engagement going forward.
The reputational context for non-compliance has shifted. Before 2023, AML failures in Singapore were largely a technical compliance matter. After an enforcement action that received global coverage and led to diplomatic implications, the reputational consequences of a significant AML failure for a Singapore-licensed institution are much more visible.
Transaction Monitoring for PSA-Licensed Payment Institutions
For firms licensed under the PSA, there are specific practical considerations that bank-focused guidance does not address.
Shorter customer history. Payment service firms typically have shorter customer relationships than banks — sometimes months rather than years. ML-based anomaly detection models need historical data to establish baseline behaviour. When that history is limited, rules-based detection of known typologies needs to carry more weight in the alert logic.
Cross-border transaction volumes. PSA licensees handling international remittances have inherently higher cross-border exposure. Monitoring typologies must specifically address: structuring across multiple corridors, unusual shifts in destination country distribution, and dormant accounts that suddenly receive high-volume cross-border inflows.
Account lifecycle monitoring. New accounts that begin transacting immediately at high volume, or accounts that show no activity for an extended period before suddenly becoming active, are specific patterns that PSA-specific monitoring rules should address.
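The dormant-account pattern described above is straightforward to express as a rule. A sketch with illustrative thresholds; the six-month dormancy window and inflow amounts are assumptions, not PSA requirements:

```python
from datetime import datetime, timedelta

DORMANCY = timedelta(days=180)   # illustrative: six months of no activity
BURST_AMOUNT = 20_000            # illustrative: what counts as "high volume"

def dormant_then_active(history, window=timedelta(days=7)):
    """history: time-sorted list of (timestamp, amount) inflows for one
    account. Flags a gap longer than DORMANCY followed by more than
    BURST_AMOUNT of inflows within `window` of reactivation."""
    for i in range(1, len(history)):
        gap = history[i][0] - history[i - 1][0]
        if gap >= DORMANCY:
            start = history[i][0]
            burst = sum(a for t, a in history[i:] if t - start <= window)
            if burst > BURST_AMOUNT:
                return True
    return False

history = [(datetime(2024, 1, 1), 1_000), (datetime(2024, 8, 1), 25_000)]
assert dormant_then_active(history)   # seven-month gap, then a large inflow
```

The same structure covers the other lifecycle pattern mentioned above (new accounts transacting at high volume immediately) by treating account opening as the start of the gap.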
MAS has stated directly that it expects payment institutions to "uplift" their AML/CFT controls to a level closer to bank-grade. For transaction monitoring specifically, that means investment in calibration, documentation, and governance — not simply deploying a vendor system and assuming requirements are met.

What Effective Transaction Monitoring Looks Like in Singapore
Across MAS guidance, examination findings, and the post-2023 supervisory environment, an effective Singapore TM programme has six characteristics:
1. Documented calibration rationale. Alert thresholds are set with reference to the institution's customer risk assessment and reviewed when the customer book changes. Every threshold has a documented basis.
2. Coverage of Singapore-specific typologies. Beyond generic AML typologies, the monitoring library includes patterns documented in Singapore enforcement actions: shell company structuring, property-linked layering, cross-border transfer cycling across high-risk jurisdictions.
3. Alert investigation documentation that can survive examination. Every alert has a written disposition, not a checkbox. High-risk customer alerts have enhanced documentation. STR filings link back to specific alerts.
4. Defined escalation process. When an analyst is uncertain, there is a clear path to the Money Laundering Reporting Officer. Escalation decisions are recorded.
5. Regular calibration review. The monitoring programme is tested — whether through independent review, internal audit, or structured self-assessment — at least annually. Results and follow-up actions are documented.
6. Model governance for ML components. Where ML-based detection is used, model performance is tracked, validation is documented, and retraining triggers are defined. The validation record sits with the institution.
Taking the Next Step
If your institution is preparing for a MAS examination, reviewing its monitoring programme post-2023, or evaluating new transaction monitoring software, the starting point is a clear-eyed assessment of where your current programme sits against MAS expectations.
Tookitaki's FinCense platform is used by financial institutions across Singapore, Malaysia, Australia, and the Philippines. It is pre-configured with APAC-specific typologies, including patterns documented in Singapore enforcement actions, and produces alert documentation in the format MAS examiners review.
Book a discussion with Tookitaki's team to see FinCense in a live environment calibrated for your institution type and region.
For a broader introduction to transaction monitoring requirements across all five APAC markets — Singapore, Australia, Malaysia, Philippines, and New Zealand — see our [complete transaction monitoring guide].

Transaction Monitoring Software: A Buyer's Guide for Banks and Fintechs
The compliance officer who bought their current transaction monitoring system probably saw a very good demo. Alert accuracy was 90% in the sandbox. Implementation was "6–8 weeks." The vendor had a case study from a Tier-1 bank.
Eighteen months later, the team processes 600 alerts per day, 530 of which are false positives. Two analysts have left. The backlog is three weeks long. An AUSTRAC examination is booked for Q4.
What happened between the demo and now is usually the same story: the sandbox didn't reflect production data, the rules weren't tuned for the actual customer base, and the implementation timeline quietly became six months.
This guide is not a vendor comparison. It is a diagnostic framework for telling effective transaction monitoring software from systems that look good until they're live.

Why Most TM Software Evaluations Go Wrong
Most procurement processes ask vendors to list their features. That is the wrong test.
Features are table stakes. What matters is performance in your specific environment — your customer mix, your transaction volumes, your risk profile. And vendor demonstrations are optimised to impress, not to replicate reality.
Three problems appear repeatedly in post-implementation reviews:
Alert accuracy drops between demo and production. Sandbox environments use curated, clean datasets. Production data is messier: duplicate records, legacy fields, missing counterparty data. Alert models calibrated on clean data degrade when they hit the real thing.
Rule libraries built for someone else. A retail bank in Sydney and a cross-border remittance operator in Singapore do not share transaction patterns. A rule library tuned for one will generate noise for the other. Most vendors deploy the same library for both and call it "risk-based."
"Transparent" models that cannot be tuned. Vendors frequently describe their ML systems as transparent and auditable. The test is whether your team can actually adjust the models when performance drifts, or whether every change requires a vendor engagement.
What "Effective" Means to Regulators
Before comparing systems, it is worth knowing what your regulator will assess. In APAC, the standard is consistent: regulators do not want to see a system that exists. They want evidence it works.
AUSTRAC (Australia): AML/CTF Rule 16 requires monitoring to be risk-based — thresholds must reflect your specific customer risk assessment, not generic defaults. AUSTRAC's enforcement record is specific on this point: both the Commonwealth Bank's AUD 700 million settlement in 2018 and Westpac's AUD 1.3 billion settlement in 2020 cited inadequate transaction monitoring as a direct failure — not the absence of a system, but the failure of one already in place.
MAS (Singapore): Notice 626 (paragraphs 19–27) requires FIs to detect, monitor, and report unusual transactions. MAS supervisory expectations published in 2024 flagged two recurring weaknesses across supervised firms: inadequate alert calibration and insufficient documentation of monitoring outcomes. Both are failures of execution, not of system selection.
BNM (Malaysia): The AML/CFT Policy Document (2023) requires an "effective" monitoring programme. Effectiveness is assessed through examination — specifically, whether the alerts generated correspond to the actual risk in the institution's customer base.
The practical consequence: an RFP that evaluates features without assessing tuning capability, calibration flexibility, and audit trail quality is not evaluating what regulators will look at.
7 Questions to Ask Any TM Vendor
1. What is your false positive rate in a live environment comparable to ours?
This is the single number that determines analyst workload. A false positive rate of 98% means 98 of every 100 alerts require investigation time before the analyst can close them as non-suspicious. At a mid-sized bank processing 500 alerts per day, that is 490 dead-end investigations.
The benchmark: well-tuned AI-augmented systems reach false positive rates of 80–85% in production. Legacy rule-only systems routinely run at 97–99%.
Ask the vendor to show actual data from a comparable client, not an anonymised case study. If they cannot, ask why.
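The workload arithmetic behind that question is worth running against your own volumes. A trivial sketch using the figures from the text:

```python
def daily_dead_ends(alerts_per_day, false_positive_rate):
    """Alerts that consume investigation time but close as non-suspicious."""
    return round(alerts_per_day * false_positive_rate)

# The legacy figure from the text: 500 alerts/day at a 98% FP rate.
assert daily_dead_ends(500, 0.98) == 490

# A system at the 85% benchmark on the same volume still produces 425
# dead ends — the bigger operational win usually comes from the tuned
# system also generating fewer alerts overall.
assert daily_dead_ends(500, 0.85) == 425
```

Running both numbers side by side makes clear that the false positive rate and the total alert volume have to be evaluated together, not separately.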
2. How are alerts generated — rules, models, or a combination?
Pure rules-based systems are easy to validate for audit purposes but brittle: they miss patterns they were not programmed to detect, and new typologies go unnoticed until the rules are manually updated.
Pure ML systems can detect novel patterns but are harder to validate and explain to regulators who need to understand why an alert was raised.
Hybrid systems — rules for known typologies, models for anomaly detection — are generally more defensible. Ask specifically: how does the vendor update the rules and models when the regulatory environment changes? What happened when AUSTRAC updated its rules in 2023, or when MAS revised its supervisory expectations in 2024?
3. What does the analyst workflow look like after an alert fires?
Detection is only the first step. Analysts spend more time on alert investigation than on any other compliance task. A system that generates 200 precise, context-rich alerts is worth more operationally than one that generates 500 alerts requiring 40 minutes of manual research each before a disposition decision can be made.
Ask to see the actual analyst interface, not the executive dashboard. Check whether the alert displays customer history, previous alerts, peer comparison, and relevant counterparty data — or whether the analyst has to pull all of that separately.
4. What does a MAS- or AUSTRAC-ready audit log look like?
When a regulator examines your monitoring programme, they review the logic that generated each alert, the analyst's disposition decision, and the written rationale. They check whether high-risk customers received appropriate monitoring intensity and whether there is a documented escalation path for uncertain cases.
Ask the vendor to show you a sample audit log from a recent client examination. It should show: the rule or model that triggered the alert, the analyst who reviewed it, the decision, the rationale, and the time between alert generation and disposition. If the vendor cannot produce this, the system is not regulatory-examination-ready.
5. What does implementation actually take?
Ask for the implementation timeline — from contract to production-ready performance — for the vendor's most recent three comparable deployments. Not the standard brochure. Not the best case. Three actual recent clients.
Specifically: how long from contract signature to go-live? How long from go-live to the point where alert accuracy reached its steady-state level? Those are two different numbers, and the second one is the one that matters for planning.
6. How does the vendor handle model drift?
ML models degrade over time as transaction patterns change. A model trained on 2023 data will underperform against 2026 transaction patterns if it has not been retrained. Ask how frequently models are retrained, who initiates the review, and what triggers a retraining event.
Also ask: who holds the model validation documentation? Model governance is an emerging examination focus for MAS, AUSTRAC, and BNM. The validation record needs to sit with the institution, not only with the vendor.
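The text does not specify how drift should be measured, but one widely used retraining trigger is the Population Stability Index (PSI), which compares a feature's training-time distribution with its recent production distribution. A minimal sketch, with a 0.2 cutoff that is a common rule of thumb rather than a regulatory threshold:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample
    (`expected`) and a recent production sample (`actual`)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log of zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(100, 20, 10_000)  # amounts seen at training time
live = rng.normal(130, 25, 10_000)   # shifted production pattern
if psi(train, live) > 0.2:           # common "significant shift" cutoff
    print("PSI above 0.2 - flag model for retraining review")
```

Note that production values falling outside the training-time bin range are simply dropped by `np.histogram` here; a production implementation would extend the outer bins.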
7. How does the system handle regulatory updates?
APAC's AML/CFT rules change more frequently than in other regions. AUSTRAC updated Chapter 16 in 2023. MAS revised its AML/CFT supervisory expectations in 2024. BNM issued a revised AML/CFT Policy Document in 2023.
When these changes occur, who updates the system — and how quickly? Some vendors treat regulatory updates as professional services engagements billed separately. Others maintain a regulatory content team that pushes updates to all clients. Ask which model applies and get the answer in writing.

Banks vs. Fintechs: Different Needs, Different Priorities
A Tier-2 bank with 8 million retail customers and a PSA-licensed payment institution handling cross-border transfers have different TM requirements. The evaluation criteria shift accordingly.
For banks:
Volume and integration architecture matter first. A system processing 500,000 transactions per day needs different infrastructure than one processing 5,000. Ask specifically about latency in real-time monitoring scenarios and how the system handles peak volumes. Integration with core banking — particularly if the core is a legacy platform — is where implementations most commonly fail.
For fintechs and payment service providers:
Real-time detection carries more weight relative to batch processing. Cross-border typologies differ from domestic banking typologies — the vendor's rule library should include patterns specific to cross-border payment fraud, structuring across multiple jurisdictions, and rapid account cycling. Customer history is often short, which means models that require 12+ months of transaction data will underperform on fast-growing books.
Total Cost of Ownership: The Number Most RFPs Undercount
The licence fee is the visible cost. The actual costs include:
- Implementation and integration: Typically 2–4x the first-year licence cost for a mid-size institution. A vendor that quotes "6–8 weeks" for implementation should be asked for the last five clients' actual implementation timelines before that number is used in any business case.
- Analyst capacity: A high false positive rate is not just an accuracy problem — it is a staffing cost. At a 97% false positive rate, a team processing 400 daily alerts spends approximately 85% of its investigation time on non-suspicious transactions. A 10-percentage-point improvement in accuracy frees roughly 2,400 analyst-hours per year for a 30-person operations team.
- Regulatory risk: The cost of an enforcement action belongs in the risk-adjusted total cost of ownership calculation. Westpac's 2020 settlement was AUD 1.3 billion. The remediation programme that followed cost hundreds of millions more. Against those figures, the difference between a well-tuned system and an adequate one looks very different on a business case.
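The analyst-capacity figure above is back-of-envelope arithmetic. The sketch below reproduces it under two stated assumptions, roughly 15 minutes per false-positive review and 240 working days per year; neither number comes from the text.

```python
# Back-of-envelope sketch of the analyst-capacity line item above.
# Assumptions (not from the article): ~15 minutes per false-positive
# review and 240 working days per year.
def annual_hours_freed(daily_alerts, fp_before, fp_after,
                       minutes_per_review=15, working_days=240):
    """Analyst-hours per year freed by lowering the false positive rate."""
    fewer_fps_per_day = daily_alerts * (fp_before - fp_after)
    return fewer_fps_per_day * minutes_per_review / 60 * working_days

# 400 daily alerts, false positive rate falling from 97% to 87%:
# roughly 2,400 hours per year
print(round(annual_hours_freed(400, 0.97, 0.87)))
```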
What Tookitaki's FinCense Does Differently
FinCense is Tookitaki's transaction monitoring platform, built specifically for APAC financial institutions.
The core technical differentiator is federated learning. Most ML-based TM systems train models on a single institution's data, which limits pattern diversity. FinCense's models learn from typology patterns across the Tookitaki client network — without sharing raw transaction data between institutions. The result is detection capability that reflects a broader range of financial crime patterns than any single institution's data could produce.
In production deployments across APAC, FinCense has reduced false positive rates by up to 50% compared to legacy rule-based systems. In analyst workflow terms: a team processing 400 alerts per day at a 97% false positive rate could reduce that to approximately 200 alerts at the same investigation standard — roughly halving the time spent on non-productive reviews.
The platform is pre-integrated with APAC-specific typologies for AUSTRAC, MAS, BNM, BSP, and FMA regulatory environments. Regulatory updates are included in the standard contract.
Ready to Evaluate?
If your institution is reviewing its transaction monitoring system or implementing one for the first time, the seven questions in this guide are a starting framework. The answers will tell you more about a vendor's actual capability than any feature demonstration.
Book a discussion with Tookitaki's team to see FinCense in a live environment calibrated for your institution type and region. Or read our complete guide, "What Is Transaction Monitoring? The Complete 2026 Guide", before the vendor conversations begin.

The “King” Who Promised Wealth: Inside the Philippines Investment Scam That Fooled Many
When authority is fabricated and trust is engineered, even the most implausible promises can start to feel real.
The Scam That Made Headlines
In a recent crackdown, the Philippine National Police arrested 15 individuals linked to an alleged investment scam that had been quietly unfolding across parts of the country.
At the centre of it all was a man posing as a “King” — a self-styled figure of authority who convinced victims that he had access to exclusive investment opportunities capable of delivering extraordinary returns.
Victims were drawn in through a mix of persuasion, perceived legitimacy, and carefully orchestrated narratives. Money was collected, trust was exploited, and by the time doubts surfaced, the damage had already been done.
While the arrests mark a significant step forward, the mechanics behind this scam reveal something far more concerning: a pattern that financial institutions are increasingly struggling to detect in real time.

Inside the Illusion: How the “King” Investment Scam Worked
At first glance, the premise sounds almost unbelievable. But scams like these rarely rely on logic; they rely on psychology.
The operation appears to have followed a familiar but evolving playbook:
1. Authority Creation
The central figure positioned himself as a “King” — not in a literal sense, but as someone with influence, access, and insider privilege. This created an immediate power dynamic. People tend to trust authority, especially when it is presented confidently and consistently.
2. Exclusive Opportunity Framing
Victims were offered access to “limited” investment opportunities. The framing was deliberate — not everyone could participate. This sense of exclusivity reduced skepticism and increased urgency.
3. Social Proof and Reinforcement
Scams of this nature often rely on group dynamics. Early participants, whether real or planted, reinforce credibility. Testimonials, referrals, and word-of-mouth create a false sense of validation.
4. Controlled Payment Channels
Funds were collected through a combination of cash handling and potentially structured transfers. This reduces traceability and delays detection.
5. Delayed Realisation
By the time inconsistencies surfaced, victims had already committed funds. The illusion held just long enough for the operators to extract value and move on.
This wasn’t just deception. It was structured manipulation, designed to bypass rational thinking and exploit human behaviour.
Why This Scam Is More Dangerous Than It Looks
It’s easy to dismiss this as an isolated case of fraud. But that would be a mistake.
What makes this incident particularly concerning is not the narrative — it’s the adaptability of the model.
Unlike traditional fraud schemes that rely heavily on digital infrastructure, this scam blended offline trust-building with flexible payment collection methods. That makes it significantly harder to detect using conventional monitoring systems.
More importantly, it highlights a shift: Fraud is no longer just about exploiting system vulnerabilities. It’s about exploiting human behaviour and using financial systems as the final execution layer.
For banks and fintechs, this creates a blind spot.
Following the Money: The Likely Financial Footprint
From a compliance and AML perspective, scams like this leave behind patterns — but rarely in a clean, linear form.
Based on the nature of the operation, the financial footprint may include:
- Multiple small-value deposits or transfers from different individuals, often appearing unrelated
- Use of intermediary accounts to collect and consolidate funds
- Rapid movement of funds across accounts to break transaction trails
- Cash-heavy collection points, reducing digital visibility
- Inconsistent transaction behaviour compared to customer profiles
Individually, these signals may not trigger alerts. But together, they form a pattern — one that requires contextual intelligence to detect.
Red Flags Financial Institutions Should Watch
For compliance teams, the challenge lies in identifying these patterns early — before the damage escalates.
Transaction-Level Indicators
- Sudden inflow of funds from multiple unrelated individuals into a single account
- Frequent small-value transfers followed by rapid aggregation
- Outbound transfers shortly after deposits, often to new or unverified beneficiaries
- Structuring behaviour that avoids typical threshold-based alerts
- Unusual spikes in account activity inconsistent with historical patterns
Behavioural Indicators
- Customers participating in transactions tied to “investment opportunities” without clear documentation
- Increased urgency in fund transfers, often under external pressure
- Reluctance or inability to explain transaction purpose clearly
- Repeated interactions with a specific set of counterparties
Channel & Activity Indicators
- Use of informal or non-digital communication channels to coordinate transactions
- Sudden activation of dormant accounts
- Multiple accounts linked indirectly through shared beneficiaries or devices
- Patterns suggesting third-party control or influence
These are not standalone signals. They need to be connected, contextualised, and interpreted in real time.
The Real Challenge: Why These Scams Slip Through
This is where things get complicated.
Scams like the “King” investment scheme are difficult to detect because they often appear legitimate — at least on the surface.
- Transactions are customer-initiated, not system-triggered
- Payment amounts are often below risk thresholds
- There is no immediate fraud signal at the point of transaction
- The story behind the payment exists outside the financial system
Traditional rule-based systems struggle in such scenarios. They are designed to detect known patterns, not evolving behaviours.
And by the time a pattern becomes obvious, the funds have usually moved.

Where Technology Makes the Difference
Addressing these risks requires a shift in how financial institutions approach detection.
Instead of looking at transactions in isolation, institutions need to focus on behavioural patterns, contextual signals, and scenario-based intelligence.
This is where modern platforms like Tookitaki’s FinCense play a critical role.
By leveraging:
- Scenario-driven detection models informed by real-world cases
- Cross-entity behavioural analysis to identify hidden connections
- Real-time monitoring capabilities for faster intervention
- Collaborative intelligence from ecosystems like the AFC Ecosystem
…institutions can move from reactive detection to proactive prevention.
The goal is not just to catch fraud after it happens, but to interrupt it while it is still unfolding.
From Headlines to Prevention
The arrest of those involved in the “King” investment scam is a reminder that enforcement is catching up. But it also highlights a deeper truth: Scams are evolving faster than traditional detection systems.
What starts as an unbelievable story can quickly become a widespread financial risk — especially when trust is weaponised and financial systems are used as conduits.
For banks and fintechs, the takeaway is clear.
Prevention cannot rely on static rules or delayed signals. It requires continuous adaptation, shared intelligence, and a deeper understanding of how modern scams operate.
Because the next “King” may not call himself one.
But the playbook will look very familiar.

Transaction Monitoring in Singapore: MAS Requirements and Best Practices
In August 2023, the Singapore Police Force mounted the largest anti-money-laundering operation in the country's history. S$3 billion in assets were seized from ten foreign nationals who had moved funds through Singapore's financial system for years — through banks, through licensed payment institutions, through corporate accounts holding everything from luxury cars to commercial property.
For compliance teams at Singapore-licensed financial institutions, the question that followed was not abstract. It was: would our transaction monitoring have caught this?
MAS has been examining that question across the industry ever since, through an intensified supervisory programme that has put transaction monitoring under closer scrutiny than at any point in the past decade. This guide covers what Singapore law requires, what MAS examiners actually check, and what a genuinely effective transaction monitoring programme looks like in a Singapore context.

Singapore's Transaction Monitoring Regulatory Framework
Transaction monitoring obligations in Singapore flow from three regulatory instruments. Understanding the differences between them matters — particularly for payment service providers, whose obligations are sometimes confused with bank requirements.
MAS Notice 626 (Banks)
MAS Notice 626, issued under the Banking Act, is the primary AML/CFT requirement for Singapore-licensed banks. Paragraphs 19–27 set out monitoring requirements: banks must implement systems to detect unusual or suspicious transactions, investigate alerts within defined timeframes, and document monitoring outcomes in a form that MAS can review.
The full obligations under Notice 626 are covered in detail in our [MAS Notice 626 Transaction Monitoring Requirements guide](/compliance-hub/mas-notice-626-transaction-monitoring). What matters for this discussion is that Notice 626 sets a floor, not a ceiling. MAS expectations in examination have consistently run ahead of the minimum text.
MAS Notices PSN01 and PSN02 (Payment Service Providers)
Since the Payment Services Act (PSA) came into force in 2020, licensed payment institutions — standard payment institutions and major payment institutions — have had AML/CFT obligations that mirror the core requirements of Notice 626, adapted for the payment services context.
A cross-border remittance operator has the same obligation to monitor for unusual activity as a bank. The typologies look different — faster transaction cycling, higher cross-border transfer volumes, shorter customer history — but the regulatory requirement is equivalent.
This matters because some licensed payment institutions still treat their monitoring obligations as lighter than bank-grade. MAS examination findings published in the 2024 supervisory expectations document specifically noted that AML controls at payment institutions were "less mature" than at banks — which means this is now an examination priority.
MAS AML/CFT Supervisory Expectations (2024)
The 2024 MAS supervisory expectations document is the most direct signal of what MAS is looking for. It followed the 2023 enforcement action and a broader review of AML/CFT controls across supervised institutions.
Transaction monitoring appears in three of the five priority areas in that document:
- Alert logic that is not calibrated to the institution's specific risk profile
- Insufficient monitoring intensity for high-risk customers
- Weak documentation of alert investigation outcomes
None of these are technical failures. They are process and governance failures — which is what makes them significant. An institution can have sophisticated monitoring software and still fail on all three.
What MAS Examiners Actually Check
Notice 626 describes what is required. MAS examinations test whether requirements are met in practice. Based on examination findings and regulatory guidance, MAS reviewers focus on four areas in transaction monitoring assessments.
Alert calibration against actual risk
MAS does not expect every institution to use the same alert thresholds. It expects every institution to use thresholds that reflect its own customer risk profile.
An institution whose customers are predominantly high-net-worth individuals with complex cross-border financial structures should have monitoring rules calibrated for that population — not rules designed for retail banking that happen to flag some of the same transactions.
In practice, examiners ask: how were these thresholds set? When were they last reviewed? What changed in your customer book since the last calibration, and how did the monitoring reflect that? Institutions that cannot answer these questions specifically — with dates, documented rationale, and sign-off from a named senior officer — are likely to receive findings.
Alert investigation documentation
This is where most examination failures occur, and it is not because institutions failed to review alerts.
MAS expects a written record for each alert: what the analyst found, why the transaction was or was not considered suspicious, and what action was or was not taken. A disposition of "reviewed — no SAR required" without supporting rationale does not satisfy this requirement. The expectation is closer to: "reviewed the customer's transaction history, the stated purpose of the account, and the counterparty profile. The transaction pattern is consistent with the customer's documented business activities and does not meet the threshold for filing."
Institutions that have good detection logic but poor investigation documentation often present worse in examination than institutions with simpler detection that document everything carefully.
Coverage of high-risk customers
FATF Recommendation 10 and Notice 626 both require enhanced monitoring for high-risk customers. MAS examiners check whether the monitoring programme reflects this operationally — not just in policy.
A specific check: do high-risk customers generate more alerts per capita than standard-risk customers? If not, one of two things is happening: either the monitoring programme is not applying enhanced measures to high-risk accounts, or it is applying enhanced measures but they are not generating additional alerts — which means the enhanced measures are not actually detecting more.
Either way, the institution needs to be able to explain the distribution clearly.
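The per-capita distribution check described above can be approximated in a few lines; the risk-tier labels and data shapes below are illustrative assumptions.

```python
from collections import Counter

def alerts_per_capita(alert_customer_ids, customer_risk_tiers):
    """alert_customer_ids: one customer id per alert raised.
    customer_risk_tiers: mapping of customer id -> risk tier label."""
    tier_sizes = Counter(customer_risk_tiers.values())
    alert_counts = Counter(customer_risk_tiers[cid] for cid in alert_customer_ids)
    return {tier: alert_counts[tier] / tier_sizes[tier] for tier in tier_sizes}

customers = {"c1": "high", "c2": "high", "c3": "standard", "c4": "standard"}
alerts = ["c1", "c1", "c3"]   # two alerts on a high-risk customer, one on standard
rates = alerts_per_capita(alerts, customers)
assert rates["high"] > rates["standard"]   # enhanced monitoring should show here
```

If the inequality does not hold on real data, that is the distribution the institution must be prepared to explain.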
The audit trail
When MAS examines a monitoring programme, examiners review a sample of alerts from the past 12 months. For each sampled alert, they should be able to see: which rule or model triggered it, when it was assigned for investigation, who reviewed it, what the disposition decision was, the written rationale, and whether an STR was filed.
If any of these elements cannot be produced — because the system does not log them, or because records were not retained — the examination finding is straightforward.
Post-2023: What Changed
The 2023 enforcement action changed the operational context for transaction monitoring in Singapore in three specific ways.
Typology libraries need to reflect the patterns that were missed. The S$3 billion case involved specific patterns: shell companies receiving large transfers followed by property purchases, multiple entities with overlapping beneficial ownership, cash-intensive businesses used to layer funds into the formal banking system. These are not novel typologies — FATF and MAS had documented them before 2023. The question is whether monitoring rules were actually in place to detect them.
MAS has increased examination intensity. Following the 2023 case, MAS publicly committed to strengthening AML/CFT supervision, including more frequent and more intrusive examinations of systemically important institutions. Compliance teams that previously experienced relatively light-touch monitoring reviews should expect more detailed examination engagement going forward.
The reputational context for non-compliance has shifted. Before 2023, AML failures in Singapore were largely a technical compliance matter. After an enforcement action that received global coverage and led to diplomatic implications, the reputational consequences of a significant AML failure for a Singapore-licensed institution are much more visible.
Transaction Monitoring for PSA-Licensed Payment Institutions
For firms licensed under the PSA, there are specific practical considerations that bank-focused guidance does not address.
Shorter customer history. Payment service firms typically have shorter customer relationships than banks — sometimes months rather than years. ML-based anomaly detection models need historical data to establish baseline behaviour. When that history is limited, rules-based detection of known typologies needs to carry more weight in the alert logic.
Cross-border transaction volumes. PSA licensees handling international remittances have inherently higher cross-border exposure. Monitoring typologies must specifically address: structuring across multiple corridors, unusual shifts in destination country distribution, and dormant accounts that suddenly receive high-volume cross-border inflows.
Account lifecycle monitoring. New accounts that begin transacting immediately at high volume, or accounts that show no activity for an extended period before suddenly becoming active, are specific patterns that PSA-specific monitoring rules should address.
MAS has stated directly that it expects payment institutions to "uplift" their AML/CFT controls to a level closer to bank-grade. For transaction monitoring specifically, that means investment in calibration, documentation, and governance — not simply deploying a vendor system and assuming requirements are met.

What Effective Transaction Monitoring Looks Like in Singapore
Across MAS guidance, examination findings, and the post-2023 supervisory environment, an effective Singapore TM programme has six characteristics:
1. Documented calibration rationale. Alert thresholds are set with reference to the institution's customer risk assessment and reviewed when the customer book changes. Every threshold has a documented basis.
2. Coverage of Singapore-specific typologies. Beyond generic AML typologies, the monitoring library includes patterns documented in Singapore enforcement actions: shell company structuring, property-linked layering, cross-border transfer cycling across high-risk jurisdictions.
3. Alert investigation documentation that can survive examination. Every alert has a written disposition, not a checkbox. High-risk customer alerts have enhanced documentation. STR filings link back to specific alerts.
4. Defined escalation process. When an analyst is uncertain, there is a clear path to the Money Laundering Reporting Officer. Escalation decisions are recorded.
5. Regular calibration review. The monitoring programme is tested — whether through independent review, internal audit, or structured self-assessment — at least annually. Results and follow-up actions are documented.
6. Model governance for ML components. Where ML-based detection is used, model performance is tracked, validation is documented, and retraining triggers are defined. The validation record sits with the institution.
Taking the Next Step
If your institution is preparing for a MAS examination, reviewing its monitoring programme post-2023, or evaluating new transaction monitoring software, the starting point is a clear-eyed assessment of where your current programme sits against MAS expectations.
Tookitaki's FinCense platform is used by financial institutions across Singapore, Malaysia, Australia, and the Philippines. It is pre-configured with APAC-specific typologies, including patterns documented in Singapore enforcement actions, and produces alert documentation in the format MAS examiners review.
Book a discussion with Tookitaki's team to see FinCense in a live environment calibrated for your institution type and region.
For a broader introduction to transaction monitoring requirements across all five APAC markets — Singapore, Australia, Malaysia, Philippines, and New Zealand — see our [complete transaction monitoring guide].

Transaction Monitoring Software: A Buyer's Guide for Banks and Fintechs
The compliance officer who bought their current transaction monitoring system probably saw a very good demo. Alert accuracy was 90% in the sandbox. Implementation was "6–8 weeks." The vendor had a case study from a Tier-1 bank.
Eighteen months later, the team processes 600 alerts per day, 530 of which are false positives. Two analysts have left. The backlog is three weeks long. An AUSTRAC examination is booked for Q4.
What happened between the demo and now is usually the same story: the sandbox didn't reflect production data, the rules weren't tuned for the actual customer base, and the implementation timeline quietly became six months.
This guide is not a vendor comparison. It is a diagnostic framework for telling effective transaction monitoring software from systems that look good until they're live.

Why Most TM Software Evaluations Go Wrong
Most procurement processes ask vendors to list their features. That is the wrong test.
Features are table stakes. What matters is performance in your specific environment — your customer mix, your transaction volumes, your risk profile. And vendor demonstrations are optimised to impress, not to replicate reality.
Three problems appear repeatedly in post-implementation reviews:
Alert accuracy drops between demo and production. Sandbox environments use curated, clean datasets. Production data is messier: duplicate records, legacy fields, missing counterparty data. Alert models calibrated on clean data degrade when they hit the real thing.
Rule libraries built for someone else. A retail bank in Sydney and a cross-border remittance operator in Singapore do not share transaction patterns. A rule library tuned for one will generate noise for the other. Most vendors deploy the same library for both and call it "risk-based."
"Transparent" models that cannot be tuned. Vendors frequently describe their ML systems as transparent and auditable. The test is whether your team can actually adjust the models when performance drifts, or whether every change requires a vendor engagement.
What "Effective" Means to Regulators
Before comparing systems, it is worth knowing what your regulator will assess. In APAC, the standard is consistent: regulators do not want to see a system that exists. They want evidence it works.
AUSTRAC (Australia): AML/CTF Rule 16 requires monitoring to be risk-based — thresholds must reflect your specific customer risk assessment, not generic defaults. AUSTRAC's enforcement record is specific on this point: both the Commonwealth Bank's AUD 700 million settlement in 2018 and Westpac's AUD 1.3 billion settlement in 2021 cited inadequate transaction monitoring as a direct failure — not the absence of a system, but the failure of one already in place.
MAS (Singapore): Notice 626 (paragraphs 19–27) requires FIs to detect, monitor, and report unusual transactions. MAS supervisory expectations published in 2024 flagged two recurring weaknesses across supervised firms: inadequate alert calibration and insufficient documentation of monitoring outcomes. Both are failures of execution, not of system selection.
BNM (Malaysia): The AML/CFT Policy Document (2023) requires an "effective" monitoring programme. Effectiveness is assessed through examination — specifically, whether the alerts generated correspond to the actual risk in the institution's customer base.
The practical consequence: an RFP that evaluates features without assessing tuning capability, calibration flexibility, and audit trail quality is not evaluating what regulators will look at.
7 Questions to Ask Any TM Vendor
1. What is your false positive rate in a live environment comparable to ours?
This is the single number that determines analyst workload. A false positive rate of 98% means 98 of every 100 alerts require investigation time before the analyst can close them as non-suspicious. At a mid-sized bank processing 500 alerts per day, that is 490 dead-end investigations.
The benchmark: well-tuned AI-augmented systems reach false positive rates of 80–85% in production. Legacy rule-only systems routinely run at 97–99%.
Ask the vendor to show actual data from a comparable client, not an anonymised case study. If they cannot, ask why.
2. How are alerts generated — rules, models, or a combination?
Pure rules-based systems are easy to validate for audit purposes but brittle: they miss patterns they were not programmed to detect, and new typologies go unnoticed until the rules are manually updated.
Pure ML systems can detect novel patterns but are harder to validate and explain to regulators who need to understand why an alert was raised.
Hybrid systems — rules for known typologies, models for anomaly detection — are generally more defensible. Ask specifically: how does the vendor update the rules and models when the regulatory environment changes? What happened when AUSTRAC updated its rules in 2023, or when MAS revised its supervisory expectations in 2024?
3. What does the analyst workflow look like after an alert fires?
Detection is only the first step. Analysts spend more time on alert investigation than on any other compliance task. A system that generates 200 precise, context-rich alerts is worth more operationally than one that generates 500 alerts requiring 40 minutes of manual research each before a disposition decision can be made.
Ask to see the actual analyst interface, not the executive dashboard. Check whether the alert displays customer history, previous alerts, peer comparison, and relevant counterparty data — or whether the analyst has to pull all of that separately.
4. What does a MAS- or AUSTRAC-ready audit log look like?
When a regulator examines your monitoring programme, they review the logic that generated each alert, the analyst's disposition decision, and the written rationale. They check whether high-risk customers received appropriate monitoring intensity and whether there is a documented escalation path for uncertain cases.
Ask the vendor to show you a sample audit log from a recent client examination. It should show: the rule or model that triggered the alert, the analyst who reviewed it, the decision, the rationale, and the time between alert generation and disposition. If the vendor cannot produce this, the system is not regulatory-examination-ready.
5. What does implementation actually take?
Ask for the implementation timeline — from contract to production-ready performance — for the vendor's three most recent comparable deployments. Not the brochure's standard figure. Not the best case. Three actual recent clients.
Specifically: how long from contract signature to go-live? How long from go-live to the point where alert accuracy reached its steady-state level? Those are two different numbers, and the second one is the one that matters for planning.
6. How does the vendor handle model drift?
ML models degrade over time as transaction patterns change. A model trained on 2023 data will underperform against 2026 transaction patterns if it has not been retrained. Ask how frequently models are retrained, who initiates the review, and what triggers a retraining event.
Also ask: who holds the model validation documentation? Model governance is an emerging examination focus for MAS, AUSTRAC, and BNM. The validation record needs to sit with the institution, not only with the vendor.
7. How does the system handle regulatory updates?
APAC's AML/CFT rules change more frequently than in other regions. AUSTRAC updated Chapter 16 in 2023. MAS revised its AML/CFT supervisory expectations in 2024. BNM issued a revised AML/CFT Policy Document in 2023.
When these changes occur, who updates the system — and how quickly? Some vendors treat regulatory updates as professional services engagements billed separately. Others maintain a regulatory content team that pushes updates to all clients. Ask which model applies and get the answer in writing.

Banks vs. Fintechs: Different Needs, Different Priorities
A Tier-2 bank with 8 million retail customers and a PSA-licensed payment institution handling cross-border transfers have different transaction monitoring (TM) requirements. The evaluation criteria shift accordingly.
For banks:
Volume and integration architecture matter first. A system processing 500,000 transactions per day needs different infrastructure than one processing 5,000. Ask specifically about latency in real-time monitoring scenarios and how the system handles peak volumes. Integration with core banking — particularly if the core is a legacy platform — is where implementations most commonly fail.
For fintechs and payment service providers:
Real-time detection carries more weight relative to batch processing. Cross-border typologies differ from domestic banking typologies: the vendor's rule library should include patterns specific to cross-border payment fraud, structuring across multiple jurisdictions, and rapid account cycling. Customer histories are often short, which means models that require 12+ months of transaction data to perform will underperform on fast-growing books.
Total Cost of Ownership: The Number Most RFPs Undercount
The licence fee is only the visible cost. The full cost of ownership also includes:
- Implementation and integration: Typically 2–4x the first-year licence cost for a mid-size institution. A vendor that quotes "6–8 weeks" for implementation should be asked for the last five clients' actual implementation timelines before that number is used in any business case.
- Analyst capacity: A high false positive rate is not just an accuracy problem — it is a staffing cost. At a 97% false positive rate, a team processing 400 daily alerts spends approximately 85% of its investigation time on non-suspicious transactions. A 10-percentage-point improvement in accuracy frees roughly 2,400 analyst-hours per year for a 30-person operations team.
- Regulatory risk: The cost of an enforcement action belongs in the risk-adjusted total cost of ownership calculation. Westpac's 2020 settlement with AUSTRAC was AUD 1.3 billion, and the remediation programme that followed cost hundreds of millions more. Against those figures, the difference between a well-tuned system and a merely adequate one looks very different in a business case.
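The analyst-capacity figure above can be reproduced with two assumptions that are mine, not the vendor's: an average of 15 minutes to close a non-suspicious alert, and 240 working days per year.

```python
ALERTS_PER_DAY = 400
FP_BEFORE = 0.97          # false positive rate before tuning
FP_AFTER = 0.87           # after a 10-percentage-point improvement
MIN_PER_FP_ALERT = 15     # assumed minutes to close a non-suspicious alert
WORKING_DAYS = 240        # assumed working days per year

fewer_fp_per_day = ALERTS_PER_DAY * (FP_BEFORE - FP_AFTER)  # ~40 alerts/day
hours_freed = fewer_fp_per_day * MIN_PER_FP_ALERT / 60 * WORKING_DAYS
print(round(hours_freed))  # → 2400
```

Swap in your own per-alert investigation time and alert volume: the point of the sketch is that the business case is driven by these two operational numbers, not by the licence fee.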
What Tookitaki's FinCense Does Differently
FinCense is Tookitaki's transaction monitoring platform, built specifically for APAC financial institutions.
The core technical differentiator is federated learning. Most ML-based TM systems train models on a single institution's data, which limits pattern diversity. FinCense's models learn from typology patterns across the Tookitaki client network — without sharing raw transaction data between institutions. The result is detection capability that reflects a broader range of financial crime patterns than any single institution's data could produce.
In production deployments across APAC, FinCense has reduced false positive rates by up to 50% compared to legacy rule-based systems. In analyst workflow terms: a team processing 400 alerts per day at a 97% false positive rate could reduce that to approximately 200 alerts at the same investigation standard — roughly halving the time spent on non-productive reviews.
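A rough sketch of that claim, holding the number of genuinely suspicious alerts constant while halving the false positives:

```python
alerts = 400
fp_rate = 0.97
true_positives = alerts * (1 - fp_rate)          # ~12 alerts worth pursuing
false_positives = alerts * fp_rate               # ~388 dead ends
after = true_positives + false_positives * 0.5   # 50% fewer false positives
print(round(after))  # → 206 alerts/day, i.e. roughly 200
```

The suspicious-alert count stays the same; only the dead-end investigations shrink, which is why the reduction maps directly onto analyst time rather than detection coverage.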
The platform is pre-integrated with APAC-specific typologies for AUSTRAC, MAS, BNM, BSP, and FMA regulatory environments. Regulatory updates are included in the standard contract.
Ready to Evaluate?
If your institution is reviewing its transaction monitoring system or implementing one for the first time, the seven questions in this guide are a starting framework. The answers will tell you more about a vendor's actual capability than any feature demonstration.
Book a discussion with Tookitaki's team to see FinCense in a live environment calibrated for your institution type and region. Or read our complete guide, "What Is Transaction Monitoring? The Complete 2026 Guide," before the vendor conversations begin.


