Reimagining Model Risk Management Framework
Chun Maw TEY, Head, Group FRS 9 Model Validation & SG Credit Risk Model Validation, Maybank


Introduction
As ever more powerful machines are developed, a Cambrian explosion of data is being created and stored. In today's banking industry, institutions not using artificial intelligence/machine learning (AI/ML) risk losing their competitive edge, as competitors increasingly enhance their strategic decisions with the powerful analytical capabilities of AI/ML. Since the global pandemic in particular, businesses have been accelerating their digital transformation, especially the adoption of AI/ML-based technologies.
According to an IBM survey, 78% of companies say it is important that results obtained from AI are “fair, safe, and reliable.” To ensure reliable AI models, companies should consider implementing a system that includes checks of impartiality, transparency, responsibility, and accountability; the right controls and security measures; constant tracking of reliability; and the protection of customer data. In the banking industry, that system is the model risk management (MRM) framework.
Relooking at Model Risk Management Framework
Most banks have an MRM function; IRB banks, at a minimum, have a model validation function. The MRM function is typically performed by dedicated, independent teams reporting to the CRO. While these firms have developed a robust MRM approach to improve the governance and control of the critical models that determine capital requirements and lending decisions, this approach is usually not well suited to AI/ML models or to less regulated modelling areas, as highlighted below.
1. MRM often focuses on traditional risk types, primarily financial risks such as capital adequacy, market risk and credit risk. It may not fully cover the new and more diverse risks arising from the widespread use of AI, such as reputational risk, consumer and conduct risk, and employee risk.
2. Many of the new AI/ML models are very different from traditional model types. The increased model complexity is a key driver of the higher risk associated with AI/ML models.
3. One key difference between AI/ML and traditional models is that an AI/ML model is expected to continuously learn from the data (autodidactic), identify patterns and refine or change its decision-making process (see the incremental-learning sketch after this list).
4. AI models are difficult to track across the organization. AI use is becoming more widespread and, in many organizations, decentralized across the enterprise, making it harder for risk managers to track. It is imperative that the model inventory is kept up to date to capture the large number of models from various users, with documentation of each model's defining characteristics, such as model type, risk tiering and model usage (a minimal inventory record is sketched after this list). A governance structure with clear roles and responsibilities (R&R) mapped out, and with audit trails embedded, will therefore be important.
5. Results based on AI/ML models need to be explainable to all stakeholders, i.e., customers, management, and regulators. Transparency and explainability are therefore key requirements to ensure that AI/ML algorithms are functioning properly (an explainability sketch follows this list).
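On point 3, the toy sketch below shows why a continuously learning model complicates point-in-time validation: the same input can receive a different score after each incremental update. It assumes scikit-learn; the simulated data feed and probe point are invented for illustration.

```python
# Minimal sketch: an incrementally trained classifier whose behaviour
# drifts as new data arrives. Data feed and probe point are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)

def new_batch(shift):
    """Simulate a monthly data feed whose distribution shifts over time."""
    X = rng.normal(loc=shift, scale=1.0, size=(500, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

probe = np.array([[1.0, 1.0]])  # fixed input to track decision drift
for month, shift in enumerate([0.0, 0.5, 1.0, 1.5]):
    X, y = new_batch(shift)
    model.partial_fit(X, y, classes=[0, 1])
    # The same input can be scored differently after each update, so
    # validation evidence from month 0 may no longer hold in month 3.
    print(f"month {month}: P(y=1 | probe) = {model.predict_proba(probe)[0, 1]:.3f}")
```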
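On point 4, a centralized inventory can start as a typed record per model capturing the characteristics named above (model type, risk tier, usage) plus an audit trail. The sketch below is a hypothetical design, not an industry standard; field names and the tiering scheme are illustrative assumptions.

```python
# Hypothetical model-inventory record; field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class RiskTier(Enum):
    TIER_1 = "high"      # e.g., capital or lending decisions
    TIER_2 = "medium"
    TIER_3 = "low"

@dataclass
class ModelRecord:
    model_id: str
    model_type: str          # e.g., "gradient boosting"
    risk_tier: RiskTier
    owner: str               # accountable unit (R&R mapping)
    usage: str               # e.g., "SME credit scoring"
    audit_trail: list = field(default_factory=list)

    def log_change(self, actor: str, action: str) -> None:
        """Append a timestamped entry so changes are traceable."""
        self.audit_trail.append(
            (datetime.now(timezone.utc).isoformat(), actor, action)
        )

record = ModelRecord("MDL-0042", "gradient boosting", RiskTier.TIER_1,
                     owner="Retail Credit", usage="SME credit scoring")
record.log_change("validator_a", "annual validation completed")
```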
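On point 5, post-hoc attribution tools such as SHAP are one common way to make individual model decisions explainable. The sketch below assumes the open-source shap package and invents the features of a toy credit model; it is one possible approach, not a prescribed method.

```python
# Sketch: per-decision feature attributions with SHAP (assumes the
# `shap` package is installed; feature names are invented).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))            # income, tenure, utilisation
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain one decision

# Each value is that feature's contribution to this applicant's score,
# which can be shown to customers, management, or regulators.
for name, contribution in zip(["income", "tenure", "utilisation"], shap_values[0]):
    print(f"{name:12s} {contribution:+.3f}")
```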
Striving for a Modern Model Risk Management Framework
AI/ML algorithms are often embedded in larger AI application systems, such as software-as-a-service (SaaS) offerings from vendors. Risk management cannot be an afterthought or addressed only by model validation functions such as those that currently exist in financial services. Companies need to build risk management directly into their AI initiatives so that oversight is constant and concurrent with internal development and external provisioning of AI across the enterprise.
To tackle these challenges without constraining AI innovation and disrupting the agile ways of working that enable it, banks need to adopt a new approach to their existing MRM framework.
In practice, this means embedding checks and controls at each stage of the model life cycle:
1. Data pre-/post-processing controls: data pipeline testing, data sourcing analysis, statistical data checks, and data-usage fairness (see the data-check sketch after this list).
2. To build a model that performs well, stakeholders beyond the model developers, i.e., business, IT, risk, and compliance, should be engaged to check whether the model actually solves the problem stated during ideation: model-robustness review, business-context metrics testing, data-leakage controls, label-quality assessment and data availability (the sketch after this list includes a basic leakage check).
3. Upon completing the model, or after shortlisting a few candidate models, evaluate their performance and engage with the model owner regularly to ensure the model fits its business usage before moving it into production or deployment (a model-comparison sketch follows this list).
4. Evaluate and monitor at every stage of the MLOps and MRM life cycle. Tools such as model interpretability, bias detection, and performance monitoring should be built in so that oversight is constant and concurrent with AI development activities, and consistent across the enterprise (a drift-monitoring sketch follows).
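As a minimal illustration of points 1 and 2, the sketch below combines simple statistical data checks with a basic data-leakage control. It assumes pandas; column names, schema and thresholds are invented for illustration, not a prescribed standard.

```python
# Sketch: statistical data checks plus a basic data-leakage control.
# Thresholds and column names are illustrative assumptions.
import pandas as pd

def check_data(df: pd.DataFrame, schema: dict) -> list:
    """Return a list of issues: missing rates and out-of-range values."""
    issues = []
    for col, (lo, hi, max_missing) in schema.items():
        missing = df[col].isna().mean()
        if missing > max_missing:
            issues.append(f"{col}: missing rate {missing:.1%} exceeds {max_missing:.1%}")
        out_of_range = ((df[col] < lo) | (df[col] > hi)).mean()
        if out_of_range > 0:
            issues.append(f"{col}: {out_of_range:.1%} of values outside [{lo}, {hi}]")
    return issues

def leakage_check(train: pd.DataFrame, test: pd.DataFrame, key: str) -> int:
    """Count test records whose key also appears in the training data."""
    return int(test[key].isin(train[key]).sum())

schema = {"income": (0, 1e7, 0.05), "age": (18, 100, 0.0)}
train = pd.DataFrame({"id": [1, 2, 3], "income": [50e3, 60e3, None], "age": [30, 45, 52]})
test = pd.DataFrame({"id": [3, 4], "income": [70e3, 80e3], "age": [41, 38]})
print(check_data(train, schema))
print("leaked records:", leakage_check(train, test, key="id"))
```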
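For point 3, shortlisted candidate models can be compared on the same out-of-sample folds before any go-live decision. The two candidates and the AUC metric below are illustrative choices, assuming scikit-learn.

```python
# Sketch: comparing shortlisted candidates on the same cross-validation
# folds before a deployment decision. Models and metric are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    # Results to be reviewed with the model owner before go-live.
    print(f"{name}: AUC {scores.mean():.3f} ± {scores.std():.3f}")
```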
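For point 4, one widely used performance-monitoring statistic is the population stability index (PSI), which flags drift between the development sample and the live population. The sketch below is a standard formulation; its 0.25 alert threshold is a common rule of thumb, not a regulatory requirement.

```python
# Sketch: population stability index (PSI) for ongoing drift monitoring.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between the development sample and live scores."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # cover the full range
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
dev_scores = rng.beta(2, 5, size=10_000)     # scores at development
live_scores = rng.beta(3, 4, size=10_000)    # shifted live population
value = psi(dev_scores, live_scores)
print(f"PSI = {value:.3f}" + ("  -> investigate drift" if value > 0.25 else ""))
```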
Conclusion
Many of the technological challenges posed by new uses of data and innovative applications of AI have been addressed over the past few years. However, there have also been many reports of AI models going awry, from gender and race discrimination in loan applications to the misidentification of pictures of people of certain races. These incidents are a reminder that using AI can create significant risks. Especially in a highly regulated environment like banking, where the cost of not properly addressing these model risks can be high, organizations must adapt quickly to address the AI/ML model risk challenges.