AML: What If the Real Risk Is the Time Spent Processing Your Alerts?

14 April, 2026

Reading time: 7 min.

At a Glance:

  • Driven by regulatory pressure, AML programs have become increasingly sensitive, triggering a surge in alert volumes. Skilled analysts now spend the bulk of their time triaging low-relevance flags, at the expense of investigating the cases that actually matter.
  • Decision fatigue is well-documented: beyond a certain volume of repetitive decisions, analyst vigilance erodes, paradoxically increasing the risk of missing genuinely suspicious activity.
  • France’s financial intelligence unit Tracfin confirmed in its 2024 review: what matters now is the quality of suspicious activity reports, not their volume. A program’s maturity is measured by its ability to direct human attention where it genuinely belongs.
  • Three concrete levers can drive the shift: tracking alert-to-qualified-case conversion rates, treating analyst feedback as an operational signal, and regularly reassessing the relevance of existing detection rules.

“Our AML team has become world-class at handling false positives. The problem is, that’s not what we hired them for.”

That quote captures a quiet drift that has taken hold across financial institutions over the past decade. Under regulatory pressure, increasingly sensitive detection systems were deployed. Thresholds were lowered, rule sets expanded, and alert volumes climbed sharply. To absorb the load, institutions hired more analysts.

Analysts trained to identify sophisticated money laundering schemes now spend most of their time reviewing low-relevance flags that better-calibrated controls could have filtered out upstream. The problem is not the people. It is the design of the program itself.

The real question is not how many alerts are generated, but what analysts are actually being asked to do: are they working on high-value investigations, or clearing the backlog that detection systems failed to filter in the first place?

1. The Silent Drift: How AML Teams Became Alert-Processing Machines

To understand where things stand today, it helps to go back roughly a decade. Following major enforcement actions against several global banks (HSBC, BNP Paribas, Deutsche Bank), regulators tightened their expectations considerably. Institutions responded in a broadly uniform way: increase detection sensitivity by lowering alert thresholds, layering in more business rules, and cross-referencing additional data sources.

Individually, each of these choices was defensible. Together, they produced a predictable outcome: a sharp increase in alerts requiring review. To absorb the volume, banks and payment institutions expanded their teams. The result is that skilled analysts now spend a large share of their time on routine, repetitive work: acknowledging alerts, running basic checks, finding no suspicion, and closing the case, at very high frequency.

The shift is clear: the original mandate, identifying complex money laundering schemes, has gradually given way to a throughput logic. Not because teams lack capability, but because the program funnels them toward triage rather than analysis.

A detection program that generates too many alerts does not protect the institution more effectively. It dilutes the attention of the people responsible for protecting it.

2. The Hidden Cost: What Compliance Budgets Fail to Measure

Compliance budgets are typically managed around visible inputs: headcount, software licenses, training costs, legal fees. But the true efficiency of the program, specifically the relationship between resources deployed and actual detection value produced, is rarely measured.

This volume-centric approach is now being challenged, including by regulators themselves. In its 2024 AML/CFT activity report, Tracfin made a notable shift in emphasis: the focus is less on the number of reports filed and more on their quality.

TRACFIN AML/CFT REPORT 2024

“The quality challenge in suspicious activity reporting. Beyond managing the ever-growing volume of information received by Tracfin, improving the quality of suspicious activity reports is a major priority. Regular exchanges between Tracfin and reporting professionals make it possible to run joint projects aimed at improving data quality.”

Tracfin Report “AML/CFT: Activity of Reporting Professions, 2024 Review,” June 2025.

Three metrics stand out as particularly telling.

The first is cost per genuine detection: dividing the team’s total budget by the number of cases that actually resulted in a suspicious activity report or referral to authorities yields a ratio that is often hard to defend, but essential to understanding true program performance.

The second is AML analyst retention. This is not simply an HR metric. An experienced analyst who can identify complex typologies or sector-specific vulnerabilities is a critical institutional asset. When that person leaves because the work has become intellectually disengaging, the loss is significant, even if it never shows up in a budget line.

The third is the quality of referrals submitted to authorities, a point Tracfin has made consistently: the issue is not the volume of reports filed, but the depth of analysis behind them. A well-structured case built around a clear money laundering hypothesis is worth far more than a stack of mechanically triggered, insufficiently qualified alerts.
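As a rough sketch, the first of these metrics, together with the alert conversion rate discussed later, can be derived from a simple period-level case log. The field names, figures, and the `PeriodStats` structure below are illustrative assumptions, not a standard reporting format:

```python
from dataclasses import dataclass

@dataclass
class PeriodStats:
    total_budget: float   # team budget for the period, in EUR (illustrative)
    alerts_processed: int # alerts reviewed by analysts
    qualified_cases: int  # alerts escalated into genuine investigations
    reports_filed: int    # suspicious activity reports or referrals filed

def cost_per_genuine_detection(s: PeriodStats) -> float:
    """Total budget divided by cases that led to a report or referral."""
    if s.reports_filed == 0:
        return float("inf")  # no detections: the ratio is unbounded
    return s.total_budget / s.reports_filed

def alert_conversion_rate(s: PeriodStats) -> float:
    """Share of processed alerts that became qualified cases."""
    return s.qualified_cases / s.alerts_processed if s.alerts_processed else 0.0

stats = PeriodStats(total_budget=1_200_000, alerts_processed=40_000,
                    qualified_cases=800, reports_filed=120)
print(cost_per_genuine_detection(stats))  # 10000.0 EUR per detection
print(alert_conversion_rate(stats))       # 0.02, i.e. a 2% conversion rate
```

Even with made-up numbers, the exercise makes the point: a 2% conversion rate and a five-figure cost per detection are the kinds of ratios that rarely appear in a budget line, yet describe the program's real output.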

3. What the Research Says About Analyst Decision Fatigue

Cognitive psychology has established over decades a well-documented phenomenon: decision quality deteriorates as decision volume increases within a single work session. Known as decision fatigue, this effect has been observed across fields as varied as judicial proceedings, medicine, finance, and HR. It applies with particular force to AML analyst work.

In practice, an analyst reviewing their hundredth alert of the day is not bringing the same level of focus as they were on the tenth. Vigilance erodes, confirmation bias strengthens, and decisions increasingly rely on cognitive shortcuts: close quickly rather than dig deeper. This is not a failure of professionalism. It is a structural limitation of human attention under high-volume, repetitive conditions.

This produces a paradox that is rarely articulated: a program generating excessive alert volumes increases the risk of missing genuinely relevant cases. Not despite its sensitivity, but because of it. System over-alerting leads to human under-attention. That is precisely where the most critical risks accumulate.

An overly sensitive system does not strengthen protection. It erodes the analytical capacity of the people responsible for it.

Research points to the existence of an optimal alert volume, a threshold beyond which performance begins to decline. That threshold varies by organization, depending on team size, case complexity, and experience levels, but it exists. And in most cases, it goes unmeasured.

4. The Question to Bring to Your Next Steering Committee

Rethinking how an AML program is measured means transforming how it is managed. The first step is shifting the central reporting question: not “how many alerts were processed?” but “what is the quality of the decisions being made?” That shift is not cosmetic. It requires revisiting metrics, reframing steering committee discussions, and often challenging detection parameters that have gone untouched since initial implementation.

One lever is tracking the conversion rate of alerts into qualified cases. When that rate is persistently low, as is often the case, each new detection rule should be evaluated not on its ability to generate volume, but on its measurable impact on that conversion rate. That inversion of logic directly reshapes how priorities and investments are read.

A second lever is incorporating analyst feedback on alert relevance. This is not a comfort metric. It is an operational signal. When a team consistently views a significant share of its alerts as low-value, that reflects a program misalignment. This type of signal typically precedes a decline in analytical quality and the departure of the most experienced investigators.

A third lever is regularly auditing the detection model itself. How many rules were added over the period? How many were removed? What is the actual impact of each addition on alert volume and relevance? In many organizations, rules accumulate without re-evaluation. Volume rises, but value does not follow.
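One way to sketch such an audit, assuming each closed alert records which rule fired it and whether it was ultimately qualified (the rule names, data layout, and the 10% review threshold below are all illustrative assumptions):

```python
from collections import defaultdict

# Each processed alert: the rule that triggered it and the final outcome.
alerts = [
    {"rule": "structuring_v1",  "qualified": False},
    {"rule": "structuring_v1",  "qualified": False},
    {"rule": "structuring_v1",  "qualified": True},
    {"rule": "dormant_account", "qualified": False},
    {"rule": "dormant_account", "qualified": False},
    {"rule": "high_risk_geo",   "qualified": True},
]

def audit_rules(alerts, min_conversion=0.10):
    """Per-rule alert volume and conversion rate; flag low-value rules."""
    volume = defaultdict(int)
    hits = defaultdict(int)
    for a in alerts:
        volume[a["rule"]] += 1
        hits[a["rule"]] += a["qualified"]
    report = {}
    for rule, n in volume.items():
        conv = hits[rule] / n
        report[rule] = {"alerts": n, "conversion": conv,
                        "review": conv < min_conversion}
    return report

for rule, row in audit_rules(alerts).items():
    print(rule, row)  # "review": True marks a candidate for recalibration
```

The output of a table like this is less a verdict than an agenda: rules that generate volume without conversion are the first candidates for recalibration or retirement.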

The maturity of an AML program is measured neither by the number of rules in place nor by the volume of alerts generated. It is measured by its ability to direct human attention to the cases that warrant it. That requires deliberate design choices and the managerial discipline to measure what is genuinely useful, even when the results are uncomfortable to present.

FAQ

01
How do we know if our AML program is generating too many alerts relative to its actual value?

The primary signal is the conversion rate: if fewer than 5 to 10 percent of processed alerts result in a suspicious activity report or a referral, the program warrants recalibration. A second, more qualitative signal is analyst sentiment. When an experienced team consistently views the majority of its alerts as low-value, that is a serious operational indicator, not a matter of preference. Finally, the age of detection rules without reassessment is itself a red flag.

02
Is decision fatigue actually measurable in an AML context?

Not directly. But its effects are. One can observe, for example, whether alert closure decisions cluster disproportionately in the late afternoon or end of week, whether average processing time per alert drops sharply in later hours, or whether decision variability across analysts increases on comparable cases. These indirect indicators surface the imbalance between alert volume and actual analytical capacity.
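A minimal sketch of one such proxy, comparing mean handling time before and after an afternoon cutoff. The closure records, the timestamps, and the 14:00 cutoff are illustrative assumptions, not a validated methodology:

```python
from datetime import datetime
from statistics import mean

# Each closed alert: closure timestamp and handling time in minutes.
closures = [
    {"closed_at": datetime(2026, 3, 2, 9, 15),  "minutes": 12.0},
    {"closed_at": datetime(2026, 3, 2, 10, 40), "minutes": 11.5},
    {"closed_at": datetime(2026, 3, 2, 16, 5),  "minutes": 4.0},
    {"closed_at": datetime(2026, 3, 2, 17, 20), "minutes": 3.5},
]

def fatigue_proxies(closures, cutoff_hour=14):
    """Mean handling time before/after an afternoon cutoff, plus the
    share of closures landing late in the day. A sharp late-day drop
    in handling time is a possible decision-fatigue signal."""
    am = [c["minutes"] for c in closures if c["closed_at"].hour < cutoff_hour]
    pm = [c["minutes"] for c in closures if c["closed_at"].hour >= cutoff_hour]
    return {"mean_before": mean(am), "mean_after": mean(pm),
            "late_day_share": len(pm) / len(closures)}

print(fatigue_proxies(closures))
```

In this toy data, handling time drops from roughly 12 minutes to under 4 after the cutoff; on real data, a pattern of that shape would be the cue to look more closely, not a diagnosis in itself.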

03
Won’t reducing alert volumes be viewed negatively by the regulator?

That concern is understandable, but it reflects an outdated reading of regulatory expectations. Tracfin stated this explicitly in its 2024 review: what is expected is the quality of suspicious activity reports, not their quantity. A program that generates fewer alerts but produces better-constructed cases, built around a clearly articulated and documented money laundering hypothesis, is far more responsive to what regulators actually want. The key is being able to demonstrate the calibration logic and the traceability of decisions made.
