**ATTENTION:** This page has been migrated to the Tazama GitHub repository and is now located at: https://github.com/frmscoe/docs/blob/main/Product/configuration-management.md. This page will no longer be maintained in Confluence.
TL;DR
Platform configuration is managed through a number of configuration files, each containing a JSON document that configures a specific processor type (CRSP, rules, and typologies) and a specific processor instance, identified by a processor identifier (`id@version`) and a configuration version.
...
Configuration documents are essentially files that contain a processor-specific configuration object in JSON format. The recommended way to upload a configuration file to the appropriate configuration database (`networkMap` or `configuration`) and collection is via ArangoDB's HTTP API, which is deployed as standard during platform deployment.
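As a rough sketch of what such an upload looks like, the snippet below composes the ArangoDB HTTP API endpoint for creating a document in a collection. The hostname, database name, and collection name used here are illustrative placeholders, not values mandated by the platform.

```python
import json

def document_create_url(base_url: str, database: str, collection: str) -> str:
    """ArangoDB's HTTP document API creates a document via POST to this endpoint."""
    return f"{base_url}/_db/{database}/_api/document/{collection}"

# Hypothetical host and collection, for illustration only.
url = document_create_url("http://localhost:8529", "configuration", "ruleConfiguration")

config_doc = {
    "id": "rule-001@1.0.0",
    "cfg": "1.0.0",
    "desc": "Derived account age - creditor",
}
payload = json.dumps(config_doc)

print(url)
# The same request could be issued with curl, e.g.:
#   curl -X POST --data @rule-001.json \
#        http://localhost:8529/_db/configuration/_api/document/ruleConfiguration
```

The endpoint shape (`/_db/{database}/_api/document/{collection}`) follows ArangoDB's standard document API; authentication and error handling are omitted for brevity.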
...
Rule configuration metadata
A `config` object that:

- may contain a number of parameters
- may contain a number of exit conditions
- will contain either result bands or, alternatively, result cases
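To illustrate how result bands resolve an evaluated value to a single outcome, here is a minimal sketch. The field names (`lowerLimit`, `upperLimit`, `reason`) and the half-open range convention are assumptions for illustration, not the exact platform schema; the point is that well-formed bands are mutually exclusive, so any value falls into exactly one band.

```python
# Illustrative bands for a "derived account age" style rule.
# Each band covers a half-open range [lowerLimit, upperLimit).
bands = [
    {"reason": "Account age < 30 days", "lowerLimit": 0, "upperLimit": 30},
    {"reason": "Account age 30-365 days", "lowerLimit": 30, "upperLimit": 365},
    {"reason": "Account age >= 365 days", "lowerLimit": 365, "upperLimit": None},
]

def select_band(value: float) -> dict:
    """Return the single band that covers the value (None upper limit = unbounded)."""
    for band in bands:
        lo, hi = band["lowerLimit"], band["upperLimit"]
        if value >= lo and (hi is None or value < hi):
            return band
    raise ValueError("bands must cover the full value range")

print(select_band(12)["reason"])  # -> Account age < 30 days
```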
...
The combination of the `id` and `cfg` strings forms a unique identifier for every rule configuration and is sometimes compiled into a database key, though this is not essential: the database enforces the uniqueness of any configuration to make sure that a specific version of a configuration can never be overwritten.
Example of the rule configuration metadata:
```json
{
  "id": "rule-001@1.0.0",
  "cfg": "1.0.0",
  "desc": "Derived account age - creditor",
  ...
}
```
The configuration object - parameters
...
> **Info: Why does the typology configuration look different?** A rule processor (defined by its `id`) is closely paired with its configuration (defined by its `cfg`). A typology processor, by contrast, is a generic "engine" processor. It is not paired with a specific typology the way a rule processor is - it is intended to work for multiple, if not all, typologies. The typology configuration therefore needs another way to reference the specific typology that will be scored by the typology processor. For that reason, the `cfg` attribute in the typology configuration contains a version-qualified typology identifier (e.g. `typology-001@1.0.0`) rather than a plain configuration version.
Example of the typology configuration metadata:
```json
{
  "id": "typology-processor@1.0.0",
  "cfg": "typology-001@1.0.0",
  "desc": "Use of several currencies, structured transactions, etc",
  ...
}
```
The Rules object
The `rules` object is an array that contains an element for every possible outcome of each of the rule results that can be received from the rule processors in scope for the typology.
...
> **Info: What does "every possible outcome" mean?** A rule processor must always produce a result, and only ever a single result from a number of possible results. The rule result will always fall into one of the following categories: error, exit, or band/case. Results across all categories are mutually exclusive and there can be only one result, regardless of the category. The rule processor must produce exactly one of these results.
...
The `messages` object is an array that contains information about the transactions that the platform is expected to evaluate. Each element in the `messages` object contains the following attributes:
- `id` is the unique identifier for the Transaction Aggregation and Decisioning Processor (TADProc) that will be used to ultimately conclude the evaluation of a specific transaction. It is possible for a transaction to be routed to a unique TADProc that contains specialized functionality related to summarizing the transaction's results.
- `cfg` is the unique version of the deployed TADProc that will be used to conclude the evaluation of the transaction.
- `host` defines the NATS subscription queue for the TADProc where the results of the previous processor in the evaluation flow, the typology processor, will be published.
- `txTp` defines the transaction type for which the message element is intended. The `txTp` value here must match a corresponding `TxTp` attribute in the root of the incoming message. If no matching `txTp` attribute is found in the network map, the transaction will not be routed for evaluation and will simply be ignored by the CRSP.
- `channels` defines the next layer of evaluation destinations along the route laid out by the network map for the evaluation.
```json
"messages": [
  {
    "id": "004@1.0.0",
    "cfg": "1.0.0",
    "host": "TADP",
    "txTp": "pacs.002.001.12",
    "channels": [
```
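The `txTp` matching behaviour described above can be sketched as a small routing function. The network-map fragment mirrors the example message element; the function itself is an illustration of the CRSP's matching rule, not the CRSP implementation.

```python
# Network-map fragment matching the example above (channels truncated).
network_map = {
    "messages": [
        {
            "id": "004@1.0.0",
            "cfg": "1.0.0",
            "host": "TADP",
            "txTp": "pacs.002.001.12",
            "channels": [],
        }
    ]
}

def route(transaction: dict):
    """Return the matching message element, or None if the transaction is ignored."""
    tx_tp = transaction.get("TxTp")
    for message in network_map["messages"]:
        if message["txTp"] == tx_tp:
            return message  # evaluation proceeds along this entry
    return None  # no matching txTp in the network map: the CRSP ignores it
```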
...
- `id` is the unique identifier for the rule processor and version that will be invoked to evaluate the transaction.
- `cfg` defines the unique rule configuration version that will guide the execution of the rule processor.
- `host` defines the NATS queue that the rule processor will subscribe to so that it can receive the transaction from the CRSP for evaluation. The NATS publishing destination for the rule processor is presently defined as an environment variable.
```json
"rules": [
  {
    "id": "002@1.0.0",
    "host": "RuleRequest002",
    "cfg": "1.0.0"
  },
```
Complete network map example
...
Configuration documents in Tazama are strictly structured JSON documents. Each document contains an identifier related to the specific processor and version of that processor to which the configuration is to be applied. For example, the configuration for a rule processor would have the following attribute and value in the typology configuration:
...
The configuration version attribute defines the specific version of the configuration file when it is used by a processor.
Tazama employs semantic versioning for both processor source control and configuration documents:
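Processor identifiers combine a name and a semantic version in the `id@version` form seen throughout the examples. A minimal sketch of splitting and validating such an identifier (the helper and its behaviour are illustrative, not platform code):

```python
import re

# major.minor.patch, the core semantic-versioning shape.
SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse_identifier(identifier: str):
    """Split 'name@version' and validate the version as major.minor.patch."""
    name, _, version = identifier.rpartition("@")
    m = SEMVER.match(version)
    if not name or not m:
        raise ValueError(f"not an id@version identifier: {identifier!r}")
    major, minor, patch = (int(g) for g in m.groups())
    return name, (major, minor, patch)

print(parse_identifier("typology-processor@1.0.0"))
```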
...
| Collection name | Processor type |
|---|---|
|  | Rule |
|  | Typologies |
|  | Transaction Aggregation and Decisioning |
...
Beyond this constraint imposed by the database, configuration versions are expected to be managed outside the platform. Tazama does not currently offer a native user interface for configuration management, though Sybrin, one of the FRMS Centre of Excellence's System Integrator partners, has created a user interface that allows for the creation of configuration documents as well as the automated management of configuration versions between iterations of a configuration document.
...
In its current configuration, the platform only evaluates the pacs.002 message as the trigger payload for the rule processors, and typologies have only been defined with the final status of a payment transaction in mind.
The typology processor is not currently configured to interdict the transaction when the threshold is breached; only investigations are commissioned once the evaluation of all the typologies is complete.
An explicit version reference has been planned for development to make it easier for an operator to link an evaluation result to the specific originating network map.
We have found during our performance testing that the text-based descriptions in our processor results undermine the performance gains we achieved with our Protobuf implementation. We will be removing the unabridged reason and processor descriptions from the configuration documents in favor of shorter look-up codes, which will then also be used to introduce regionalized/language-specific descriptions.
In its default deployment, the platform contains a single version of the “core” platform processors (the typology processor and TADProc) at a time. Though it is possible to deploy and maintain multiple parallel versions of these processors and manage routing to these processors through the network map, this guide will only focus on singular core processors for now.
Before our implementation of NATS, Tazama processors were implemented as RESTful micro-services. The `host` attributes in the network map contained the URL where the processors could be addressed. With our initial implementation of NATS, the routing information was moved into environment variables that were read into the processors when they were deployed, or restarted in the event of a processor failure. We have now removed the need to specify the `host` property for a processor - the routing is automatically determined from the network map at processor startup; see https://github.com/frmscoe/General-Issues/issues/310 for details.