A case for model validation in financial services processes
Feb 03, 2021
As dependence on automated model systems and software grows and resources must be used efficiently, the need for strong model validation and calibration has never been more important. Financial services processes such as BSA/AML, ALM and CECL rely increasingly on automated models to detect suspicious activity (BSA/AML), measure risk and support key business decisions (ALM), and monitor required allowances (CECL). Over the past year, one of the most commonly cited areas of examiner criticism has been sound model risk management. Model validation has been a significant regulatory requirement for several years, and the expectation of validation continues to increase. Regulatory examination bodies have added model specialists, released supervisory guidance, and increased enforcement actions related to sound and effective management of model risk. Improving the efficiency of implemented models is an ongoing exercise.
The OCC, FDIC and FRB have developed guidance covering model validations. The OCC and FRB guidance was released in 2011. In 2017, the FDIC adopted the OCC and FRB guidance with technical conforming changes, including a revised definition of ‘banks’ to reflect the FDIC’s supervisory authority and the FDIC’s expectation that the guidance generally pertains to FDIC-supervised institutions with $1 billion or more in total assets. The FDIC further expects the guidance to pertain to FDIC-supervised institutions with under $1 billion in total assets if the institution’s model use is significant, complex, or poses elevated risk to the institution. The guidance also addresses the recommended frequency of model validations: the regulatory agencies state that each model should be validated ‘periodically,’ defined as at least annually but more frequently if warranted, to determine whether it is working as intended and whether the existing validation activities are sufficient.
The use of models invariably presents model risk, which is the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports. Model risk can lead to:
- Financial loss
- Poor business and strategic decision making
- Reputation damage
Even with skilled modeling and robust validation, model risk cannot be eliminated, so other tools should be used to manage model risk effectively. As is generally the case with other risks, materiality is an important consideration in model risk management. Another essential element is sound model validation. Effective model validation helps reduce model risk by identifying model errors, corrective actions and appropriate use.
What are the benefits of model validation?
- Increased efficiency
- Decreased operational costs
- Ensuring the system is working properly and as intended
- Improved decision-making and confidence in the models
- Satisfying regulatory requirements
Common components of an effective validation framework include:
- Independence: Validation should be completed by people who are not responsible for development or use and do not have a stake in whether a model is determined to be valid. Staff completing the validation should have the requisite knowledge, skills and expertise. Staff conducting validation work should have explicit authority to challenge developers and users and to elevate their findings, including issues and deficiencies. Validation can be completed by internal staff or a third party.
- Conceptual design: Evaluate the logic and design of the model. The model was built to achieve a specific objective; the question is whether its design actually accomplishes that. Is anything missing? Are all risks the firm is exposed to taken into consideration? Does the model cover all products and services?
- System validation: Evaluate the system to ensure that it performs as designed. Once the conceptual design is judged adequate to mitigate the identified risks, the system itself should be tested to confirm that it implements that design.
- Data validation: Confirm that accurate and complete information is captured by the system to execute the model. No matter how well a system is designed and implemented, it can still fail badly because of data integrity issues; if the input data is not reliable, the output provides little value. This step requires identifying source systems and transaction codes and ensuring accurate data feeds (a minimal reconciliation sketch follows this list). It is essential for an organization to clearly establish and document its use of model data, the flow of data from various sources, and the internal controls put in place to have confidence in the reliability of the information.
- Process validation: The framework should include an evaluation of controls, the reconciliation of source data systems with model inputs, and the usefulness and accuracy of model outputs and reporting.
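To make the data and process validation steps above more concrete, the sketch below shows one way such checks are often automated before a model run. It is a minimal illustration under assumed inputs, not any particular vendor's tooling; the file names, column names and tolerance are hypothetical placeholders.

```python
# Minimal sketch: reconcile source-system data against the model's input
# extract before a model run. File names, column names and the tolerance
# are hypothetical placeholders.
import pandas as pd

TOLERANCE = 0.01  # acceptable rounding difference per account, in dollars

def validate_model_inputs(source_path: str, model_input_path: str) -> list[str]:
    """Return a list of data-integrity findings; an empty list means no exceptions."""
    findings = []

    source = pd.read_csv(source_path)          # e.g., core banking system extract
    model_in = pd.read_csv(model_input_path)   # e.g., file fed to the ALM/CECL model

    # Completeness: every required field must be populated.
    for col in ("account_id", "balance", "transaction_code"):
        missing = int(model_in[col].isna().sum())
        if missing:
            findings.append(f"{missing} rows missing '{col}' in model input")

    # Accuracy: balances per account should reconcile to the source system.
    src_totals = source.groupby("account_id")["balance"].sum()
    mdl_totals = model_in.groupby("account_id")["balance"].sum()
    diffs = (src_totals - mdl_totals).abs()
    breaks = diffs[diffs > TOLERANCE]
    if not breaks.empty:
        findings.append(f"{len(breaks)} accounts fail balance reconciliation")

    # Coverage: no accounts should be dropped between the source and the model feed.
    dropped = set(src_totals.index) - set(mdl_totals.index)
    if dropped:
        findings.append(f"{len(dropped)} source accounts absent from model input")

    return findings

if __name__ == "__main__":
    for finding in validate_model_inputs("source_system.csv", "model_input.csv"):
        print("EXCEPTION:", finding)
```

Checks of this kind are typically wired into ongoing monitoring so exceptions are logged, investigated and escalated rather than corrected silently.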
Over the past several years, we have encountered a number of regulatory criticisms of model validations, including the following examples:
- Lack of independence of the person/group performing model validation
- Use of “off the shelf” or default settings
- Lack of developmental evidence to substantiate model assumptions
- Failure to complete ongoing model risk management due diligence and/or monitoring
- Unsupported sampling methodology
- Lack of explanation to support the use of expert judgment and model overrides
- An incomplete model inventory (a minimal sketch of an inventory record follows this list)
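One practical way to keep the model inventory complete is to maintain a structured record for every model in use. The sketch below is illustrative of the kind of information examiners typically look for, such as purpose, ownership, restrictions on use and validation dates; the field names are assumptions, not a prescribed format.

```python
# Minimal sketch of a model inventory record; field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelInventoryRecord:
    model_name: str            # e.g., "CECL allowance model"
    business_purpose: str      # what decision or process the model supports
    owner: str                 # accountable business unit or individual
    developer: str             # internal team or third-party vendor
    risk_rating: str           # e.g., "high", "moderate", "low"
    restrictions_on_use: str   # known limitations and approved uses
    last_validation: date      # most recent independent validation
    next_validation_due: date  # at least annual, per the guidance above

# Example entry; all values are placeholders.
record = ModelInventoryRecord(
    model_name="BSA/AML transaction monitoring",
    business_purpose="Detect and escalate suspicious activity",
    owner="BSA Officer",
    developer="Third-party vendor",
    risk_rating="high",
    restrictions_on_use="Calibrated for domestic retail transactions only",
    last_validation=date(2020, 6, 30),
    next_validation_due=date(2021, 6, 30),
)
```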
While model validation can be a tedious task, it is nonetheless a necessary step for an industry, and a world, that becomes ever more reliant on automated model systems each year.
For more information on this topic, or to learn how Baker Tilly’s banking and capital markets industry Value Architects™ can help, contact our team.