ATTENTION: This page has been migrated to the Tazama GitHub repository and is now located at: https://github.com/frmscoe/docs/blob/main/Product/configuration-management.md. This page will no longer be maintained in Confluence.
TL;DR
Platform configuration is managed through a set of configuration files, each containing a JSON document that configures a specific processor type (CRSP, rules and typologies) and a specific processor instance, identified by a processor identifier (id@version) and a configuration version.
...
Configuration documents can be uploaded to the platform using the ArangoDB API deployed with the platform.
1. Overview of the detection methodology
The core detection capability within the platform is distributed across three distinct steps in the end-to-end evaluation flow.
...
In this document, we will discuss how the various configuration documents are expected to be updated to influence evaluation behavior.
2. Configuration Management
Configuration documents are essentially files that contain a processor-specific configuration object in JSON format. The recommended way to upload a configuration file to the appropriate configuration database (networkMap or configuration) and collection is via ArangoDB's HTTP API, which is deployed as standard during platform deployment.
...
Finally, the typologies and rules are bound together into the network map and attached to the specific transaction type for which the rules and typologies are intended. The network map defines the rules that should receive the transaction for evaluation, and also the routing to the typologies composed out of the rules.
...
2.1. Rule Processor Configuration
Introduction
A rule processor is a custom-built module that evaluates an incoming message according to its code. When a new rule processor is developed, the rule designer will specify both the input parameters for the rule and the output results. Changes to these attributes can alter a rule processor's behavior, and it is expected that these attributes are hosted in the rule configuration so that the rule processor's behavior can be altered by updating the configuration instead of changing the rule processor code.
...
- Rule configuration metadata
- A `config` object, that:
  - may contain a number of parameters
  - may contain a number of exit conditions
  - will contain either result bands or, alternatively, result cases
Rule configuration metadata
The rule configuration “header” contains metadata that describes the rule. The metadata includes the following attributes:
...
```json
{
  "id": "rule-001@1.0.0",
  "cfg": "1.0.0",
  "desc": "Derived account age - creditor",
  ...
}
```
The configuration object - parameters
A rule processor's parameters are used to define how a rule processor will operate to evaluate the incoming message. The requirements for the parameters are coded into the rule processor and the parameters must be provided in the configuration for the rule processor to deliver a successful outcome. If any of the required parameters are missing, the rule processor will still deliver a result, but it will be a default error outcome. Parameters are given descriptive names to assist the operator in specifying them correctly. Parameters often differ from one rule to the next, but typically define thresholds and time-frames for the historical queries that are executed inside a rule processor. Some notable examples:
...
If a rule processor does not use any parameters, the `parameters` object may either be empty (`"parameters": {}`) or omitted entirely.
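As a purely illustrative sketch, a `parameters` object for a hypothetical rule that inspects one day of transaction history against a count threshold might look like the following; the parameter names and values are assumptions for illustration and are not taken from an actual rule processor:

```json
"parameters": {
  "maxQueryRange": 86400000,
  "threshold": 3
}
```

In this sketch, 86400000 represents one day in milliseconds and 3 is the threshold against which the count of historical transactions would be compared.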
The configuration object - exit conditions
A rule processor's exit conditions ensure that a rule processor is always able to produce a result, even if the rule processor is unable to reach a definitive, deterministic outcome. Exit conditions account for non-deterministic exceptions in the rule processor's behavior. The exit conditions are coded into the rule processor and each exit condition must be provided in the configuration for the rule processor to deliver a successful outcome. If any of the exit conditions are missing, the rule processor will still deliver a result, but it will be an error outcome reporting the missing exit condition by its specific exit condition code.
...
| Attribute | Description |
|---|---|
| Sub-rule reference | Every rule processor is capable of reporting a number of different outcomes, but only a single outcome from the complete set is ultimately delivered to the typology processor. Each outcome is defined by a unique sub-rule reference identifier to differentiate the delivered outcome from the others and also to allow the typology processor to apply a unique weighting to that specific outcome. By convention, the exit condition sub-rule references are prefaced with an 'x'. |
| Outcome | The configuration file defines whether the result delivered by the rule processor is flagged as either true or false. Exit conditions are usually non-deterministic. |
| Reason | The reason provides a human-readable description of the result that accompanies the rule result to the eventual overall evaluation result. |
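As a minimal, hypothetical sketch of how these attributes could come together (assuming the exit conditions are hosted in an exitConditions array within the config object; the attribute names, sub-rule references and reason texts below are illustrative only):

```json
"exitConditions": [
  {
    "subRuleRef": ".x00",
    "outcome": false,
    "reason": "Incoming transaction is unsuccessful"
  },
  {
    "subRuleRef": ".x01",
    "outcome": false,
    "reason": "Insufficient transaction history to evaluate the rule"
  }
]
```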
The configuration object - rule results
While the parameters and exit conditions may be optional for a specific rule processor, the core function and output of a rule processor is contained in the results object. Rule processors offer two different kinds of rule results:
...
It is extremely important that the configuration of a rule processor does not leave any gaps in the results, whether banded or cased. Every possible outcome of a rule result must be accounted for, otherwise the rule processor may deliver a result that the typology processor cannot interpret. In the event that a rule processor produces a value that is not covered by the configured results, the rule processor will issue an error (`.err`) result with a reason description of "Value provided undefined, so cannot determine rule outcome".
Rule results - banded results
Banded results are partitions in a contiguous range of results, effectively from -∞ to +∞. When a target value is evaluated against a result band, the lower limit of a band is always evaluated with the `>=` operator and the upper limit is always evaluated with the `<` operator. This way, we can configure the upper limit of one band and the lower limit of the next band with the exact same value to make sure there is no overlap between bands and also no gap.
...
Term | Milliseconds |
---|---|
second | 1,000 |
minute | 60,000 |
hour | 3,600,000 |
day | 86,400,000 |
week | 604,800,000 |
month (30.44 days) | 2,629,743,000 |
year (365.24 days) | 31,556,926,000 |
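To illustrate the `>=` lower limit and `<` upper limit convention, the following hypothetical bands partition an account age measured in milliseconds; the attribute names (subRuleRef, lowerLimit, upperLimit, outcome, reason) and the band boundaries are illustrative assumptions rather than the configuration of an actual rule:

```json
"bands": [
  {
    "subRuleRef": ".01",
    "upperLimit": 86400000,
    "outcome": true,
    "reason": "The account is less than one day old"
  },
  {
    "subRuleRef": ".02",
    "lowerLimit": 86400000,
    "upperLimit": 2629743000,
    "outcome": true,
    "reason": "The account is between one day and one month old"
  },
  {
    "subRuleRef": ".03",
    "lowerLimit": 2629743000,
    "outcome": false,
    "reason": "The account is one month old or older"
  }
]
```

In this sketch the shared boundary values (86400000 and 2629743000, from the table above) appear as the upper limit of one band and the lower limit of the next, so there is neither a gap nor an overlap; a band without a lower or upper limit is assumed to extend to -∞ or +∞ respectively.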
Rule results - cased results
In contrast to the partitioning of a result range as in banded results, cased results are a collection of discrete and explicit outcomes from which the rule processor will determine the specific result applicable to the evaluation it performed.
...
| Attribute | Description |
|---|---|
| Value | This attribute defines the specific value that will be matched in the rule processor. Every case contains a value, with the exception of the default "else" case. Values can be either strings, encapsulated in quotes, or numbers, without quotes. |
| Sub-rule reference | Every rule processor is capable of reporting a number of different outcomes, but only a single outcome from the complete set is ultimately delivered to the typology processor. Each outcome is defined by a unique sub-rule reference identifier to differentiate the delivered outcome from the others and also to allow the typology processor to apply a unique weighting to that specific outcome. We have elected to assign a numeric sequence to the sub-rule references for result cases, prefaced with a dot (".") separator, but this format is not mandatory for the sub-rule reference string. Any descriptive and unique string would be an acceptable sub-rule reference. By convention, the default "else" outcome has its own distinct sub-rule reference. |
| Outcome | The configuration file defines whether the result delivered by the rule processor is flagged as either true or false. |
| Reason | The reason provides a human-readable description of the result that accompanies the rule result to the eventual overall evaluation result. |
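A minimal, hypothetical sketch of a cased result configuration follows; the case values, attribute names and reason texts are illustrative assumptions and not taken from an actual rule processor:

```json
"cases": [
  {
    "value": "MOBILE",
    "subRuleRef": ".01",
    "outcome": true,
    "reason": "The transaction originated from a mobile channel"
  },
  {
    "value": "BRANCH",
    "subRuleRef": ".02",
    "outcome": false,
    "reason": "The transaction originated from a branch"
  },
  {
    "subRuleRef": ".00",
    "outcome": false,
    "reason": "The transaction origin could not be determined"
  }
]
```

The final case has no value and acts as the default "else" outcome; the sub-rule reference shown for it here is arbitrary.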
Complete example of a rule processor configuration
2.2. Typology Configuration
Introduction
The typology processor collects rule results and compiles them into a variety of fraud and money laundering scenarios, called typologies. Unlike rule processors, which have specific and unique functions guided by their individual configurations, the typology processor is a centralized processor that arranges rules into scenarios based on multiple typology-specific configurations. Effectively, a typology is described solely by its configuration and does not otherwise exist as a physical processor. When the typology processor receives a rule result, it determines which typologies rely on the result, and a typology-specific configuration is used to formulate the scenario.
...
- Typology configuration metadata
- A `rules` object, that specifies the weighting for each rule result by sub-rule reference
- An `expression` object, that defines the formula for calculating the typology score out of the rule result weightings
- A `workflow` object, that contains the alert and interdiction thresholds against which the typology score will be measured
Typology configuration metadata
The typology configuration “header” contains metadata that describes the typology. The metadata includes the following attributes:
...
```json
{
  "id": "typology-processor@1.0.0",
  "cfg": "typology-001@1.0.0",
  "desc": "Use of several currencies, structured transactions, etc",
  ...
}
```
The Rules object
The `rules` object is an array that contains an element for every possible outcome of each of the rule results that can be received from the rule processors in scope for the typology.
...
```json
"rules": [
  { "id": "001@1.0.0", "cfg": "1.0.0", "ref": ".err", "true": 0, "false": 0 },
  { "id": "001@1.0.0", "cfg": "1.0.0", "ref": ".x01", "true": 100, "false": 0 },
  { "id": "001@1.0.0", "cfg": "1.0.0", "ref": ".01", "true": 200, "false": 0 },
  { "id": "001@1.0.0", "cfg": "1.0.0", "ref": ".02", "true": 100, "false": 0 },
  { "id": "002@1.0.0", "cfg": "1.0.0", "ref": ".err", "true": 0, "false": 0 },
  { "id": "002@1.0.0", "cfg": "1.0.0", "ref": ".x01", "true": 100, "false": 0 },
  { "id": "002@1.0.0", "cfg": "1.0.0", "ref": ".x02", "true": 100, "false": 0 },
  { "id": "002@1.0.0", "cfg": "1.0.0", "ref": ".01", "true": 100, "false": 0 },
  { "id": "002@1.0.0", "cfg": "1.0.0", "ref": ".02", "true": 200, "false": 0 }
]
```
The expression object
The expression object in the typology processor defines the formula that is used to calculate the typology score. The expression is able to accommodate any formula composed out of a combination of multiplication (`*`), division (`/`), addition (`+`) and subtraction (`-`) operations.
...
typology score = rule 001 weighting + rule 002 weighting + rule 003 weighting + rule 004 weighting
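The precise JSON schema of the expression object is shown in the complete typology configuration example at the end of this section. Purely as an illustration of how the formula above could be encoded, one possible shape pairs an operator with the list of rule terms it applies to; the operator and terms attribute names here are assumptions rather than the confirmed schema:

```json
"expression": {
  "operator": "+",
  "terms": [
    { "id": "001@1.0.0", "cfg": "1.0.0" },
    { "id": "002@1.0.0", "cfg": "1.0.0" },
    { "id": "003@1.0.0", "cfg": "1.0.0" },
    { "id": "004@1.0.0", "cfg": "1.0.0" }
  ]
}
```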
The workflow object
The workflow object determines the thresholds according to which the typology processor will decide if an action is necessary in response to the typology score. A typology can be configured with two separate thresholds:
...
```json
"workflow": {
  "alertThreshold": 500,
  "interdictionThreshold": 1000
}
```
Complete example of a typology configuration
2.3. The Network Map
Introduction
The network map associates a specific transaction type with the rules and typologies that will be used to evaluate the incoming transaction. The network map allows for a sub-division of typologies according to themes (channels) as may be appropriate for a specific implementation. For example, typologies can be arranged in channels according to the types of financial crime they aim to detect, or typologies can be arranged according to the speed and performance with which they are required to respond, based on the infrastructure onto which the rules are deployed.
...
rules -> typologies -> channels -> transaction.
Network map metadata
The network map “header” contains metadata that describes the network map. The metadata includes the following attributes:
...
```json
{
  "active": true,
  "cfg": "1.0.0",
  ...
}
```
The messages object
The `messages` object is an array that contains information about the transactions that the platform is expected to evaluate. Each element in the `messages` object contains the following attributes:
...
```json
"messages": [
  {
    "id": "004@1.0.0",
    "cfg": "1.0.0",
    "txTp": "pacs.002.001.12",
    "channels": [
      ...
    ]
  }
]
```
The channels object
The `channels` object is a nested array object inside the transaction element in the `messages` array object. The `channels` array defines the channels within which the typologies are distributed. The channel object contains `id` and `cfg` attributes to differentiate between multiple channels. The platform is deployed by default to only contain a single channel, so the values are typically:
...
```json
{
  "id": "001@1.0.0",
  "cfg": "1.0.0",
  "typologies": [
    ...
  ]
}
```
The typology object
Each typology object in the typologies array contains the following attributes:
...
```json
{
  "id": "typology-processor@1.0.0",
  "cfg": "001@1.0.0",
  "rules": [
    ...
  ]
}
```
The rules object
Each rule object in the rules array contains the following attributes:
...
```json
"rules": [
  { "id": "002@1.0.0", "cfg": "1.0.0" },
  ...
]
```
Complete network map example
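The complete network map example in the repository contains the authoritative version. Assembled from the fragments shown in the preceding sub-sections (with the attributes that were elided there omitted here as well), the overall nesting of a network map looks roughly like this:

```json
{
  "active": true,
  "cfg": "1.0.0",
  "messages": [
    {
      "id": "004@1.0.0",
      "cfg": "1.0.0",
      "txTp": "pacs.002.001.12",
      "channels": [
        {
          "id": "001@1.0.0",
          "cfg": "1.0.0",
          "typologies": [
            {
              "id": "typology-processor@1.0.0",
              "cfg": "001@1.0.0",
              "rules": [
                { "id": "002@1.0.0", "cfg": "1.0.0" },
                ...
              ]
            }
          ]
        }
      ]
    }
  ]
}
```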
2.4. Updating configurations via the ArangoDB API
3. Version Management
3.1. Introduction and Basics
Each configuration document in the platform can be assigned a unique semantic version that will identify one instance of a configuration document as distinctly separate from another instance of the same configuration document.
...
MAJOR version when you make incompatible API changes
MINOR version when you add functionality in a backward compatible manner
PATCH version when you make backward compatible bug fixes
3.2. Configuration version management of processors
Every rule processor, typology processor and transaction aggregation and decisioning processor (TADProc) is guided by its own configuration document. The specific version of a configuration document that is required to operate a processor is defined in the network map when the evaluation routing is specified. When a processor receives an instruction from its predecessor in the evaluation flow, the processor checks the network map to determine which configuration document and version to use to perform its tasks.
...
Once a configuration document has been created or updated and uploaded to the configuration database, the configuration is ready to be used, but not in use yet. To activate a new configuration (or version), the configuration must be linked to the processor in the network map.
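For example, to activate version 2.0.0 of a rule's configuration, the cfg attribute of the corresponding entry in the network map's rules array would be updated and the new network map activated. This sketch reuses the rules object format from section 2.3; the version number is purely illustrative:

```json
"rules": [
  { "id": "002@1.0.0", "cfg": "2.0.0" },
  ...
]
```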
3.3. The Network Map
The network map defines the routing of an incoming transaction to all rules and typologies that are required to evaluate the transaction. By default, the platform is configured to evaluate a pacs.002 transaction that concludes, with a status response, a transaction initiated from a pain.001 or pacs.008 message.
...
The active network map ultimately defines the scope of a particular evaluation, right down to the specific processors and their versions that are going to be used, as well as the specific version of the processor configuration required. If any of the components in a network map changes, a new network map must be deployed and activated to replace the previous iteration of the network map.
...
References
In its current configuration, the platform only evaluates the pacs.002 message as the trigger payload for the rule processors, and typologies have only been defined with the final status of a payment transaction in mind.
The typology processor is not currently configured to interdict the transaction when the threshold is breached; only investigations are commissioned once the evaluation of all the typologies is complete.
An explicit version reference has been planned for development to make it easier for an operator to link an evaluation result to the specific originating network map.
We have found during our performance testing that the text-based descriptions in our processor results undermine the performance gains we achieved with our Protobuf implementation. We will be removing the unabridged reason and processor descriptions from the configuration documents in favor of shorter look-up codes that will then also be used to introduce regionalized/language-specific descriptions.
In its default deployment, the platform contains a single version of the “core” platform processors (the typology processor and TADProc) at a time. Though it is possible to deploy and maintain multiple parallel versions of these processors and manage routing to these processors through the network map, this guide will only focus on singular core processors for now.
Before our implementation of NATS, Tazama processors were implemented as RESTful micro-services. The `host` attributes in the network map contained the URL where the processors could be addressed. With our initial implementation of NATS, the routing information was moved into environment variables that were read into the processors when they were deployed, or restarted in the event of a processor failure. We have now removed the need to specify the host property for a processor - the routing is automatically determined from the network map at processor startup - see https://github.com/frmscoe/General-Issues/issues/310 for details.