Today’s technology to make payments and settle transactions is more adaptable than ever.  Unlike in the 1980s, nothing appears to be hard-coded anymore, and our ability to configure applications provides us with endless flexibility.  Any of us who have worked in the requirements definition phase of a payment system’s deployment in Financial Services will immediately be able to recite examples of where key system parameters “must be configurable”.

We see this maturity of technology parameterisation in all walks of life, especially for those at the forefront of their industry.  In the motor industry, we see that Formula 1 cars are reconfigured for every race depending on weather, track layout, competition, etc.  In the world of airline travel, we see market-leading engine manufacturers monitor and recalibrate engines between flights depending on environmental conditions.  In modern medicine, physicians adjust prosthetics, treatment plans and the use of drugs to fit the individual patient.  However, in the world of Financial Services, we rarely change our highly configurable systems.  What was good enough for yesterday is often assumed good enough for tomorrow.  So why is this?

Well, the risk of change is significant.  Large-scale banking outages regularly impact thousands of customers as a result of poorly tested changes, and the cost of managing this risk is equally significant.  Testing every possible permutation of system interaction, over what is a largely random set of behaviours, is usually a large undertaking whenever one wishes to change a system’s configuration.

Then there is the regulatory burden to contend with.  Demonstrating to the regulatory community that a planned change will not bring negative impacts to a ‘finely tuned’ financial environment is usually an intensive activity.  Satisfying well-intentioned but probing questions about testing and modelling methods can also take its toll on the critical path of a change programme.  Then there is the client ‘Anxiety Gap’ – how to put clients’ minds, or the market, at ease that the change will deliver benefits rather than problems.  So, what do we do?  Generally, we leave things as they are…

However, over the last 18 months I have seen a change to this Financial Services modus operandi (I will refrain from acronymising that).  I have been privileged to work with institutions around the globe that have decided to buck the trend.  These institutions are not content with the status quo of their systems.  They are looking to optimise them further, whether customer-facing, operational or otherwise.  Whether the goal is to optimise their use of liquidity, reduce the volume of fraudulent payments or reduce the likelihood of timeline-critical issues, their confidence in embracing a different way of working is refreshing.  Like the Formula 1 teams above, these institutions accept that their system configuration was good for the day it was deployed, but tomorrow may bring different weather, track layouts or competition – or, in Financial Services terms, different liquidity challenges, client needs or client behaviour.  Not only are these institutions proactively improving their systems and service offerings, they also recognise that a reinvigorated configuration is, once again, only as good as the day it was deployed.  That takes them on a journey of improving their system monitoring capabilities, to track the effectiveness of the new configuration.  Which, in turn, leads them to constantly reassess how far they are from the optimum configuration, to simulate new or anticipated changes, and to recalibrate where necessary.

And how is all this achieved?

At FNA we deploy simulators for our Financial Services clients that replicate their own systems.  These simulators can be scripted to run far ahead of normal clock speed, and can therefore run hundreds of different system configurations over a client’s own data in days, or even hours.  The result is valuable insight into how best to tune a system, and how to communicate the impact of a change accurately, both internally and externally – all in the safety of the client’s own environment.
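To make the idea of a configuration sweep concrete, here is a minimal sketch (not FNA’s actual simulator, which is proprietary – this is an illustrative toy model I am assuming for the example).  It replays one day of a single bank’s payment and receipt flows under several candidate opening-liquidity configurations, counting how many outgoing payments would have queued under each:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    time: int      # minute of day the flow occurs
    amount: float  # value paid out (payment) or received (receipt)

def simulate(payments, receipts, opening_liquidity):
    """Replay one day's flows under a given opening liquidity.

    Returns the number of outgoing payments that had to queue
    because liquidity was insufficient when they were submitted.
    """
    liquidity = opening_liquidity
    queue = []     # queued payment amounts, FIFO
    delayed = 0
    events = sorted(
        [(p.time, "out", p.amount) for p in payments] +
        [(r.time, "in", r.amount) for r in receipts]
    )
    for _, kind, amount in events:
        if kind == "in":
            liquidity += amount
            # An incoming receipt may release queued payments (FIFO).
            while queue and queue[0] <= liquidity:
                liquidity -= queue.pop(0)
        elif amount <= liquidity:
            liquidity -= amount
        else:
            queue.append(amount)
            delayed += 1
    return delayed

# Sweep many candidate configurations over the same day's data.
payments = [Flow(t, 100.0) for t in range(0, 60, 5)]   # 12 payments out
receipts = [Flow(t, 80.0) for t in range(2, 60, 6)]    # 10 receipts in
results = {
    buffer: simulate(payments, receipts, buffer)
    for buffer in (0.0, 100.0, 250.0, 500.0)
}
```

Because each replay is just an event loop over historical data, hundreds of such configurations can be evaluated far faster than real time; a production simulator would of course model multiple participants, throttles, netting cycles and the other mechanics of the real system.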

Dave Sissens is FNA’s Chief Solutions Officer.
