

    Reimagining Model Risk Management Framework

    Chun Maw TEY, Head, Group FRS 9 Model Validation & SG Credit Risk Model Validation, Maybank


    Introduction

With ever more powerful machines being developed, there has been a Cambrian explosion of data created and stored. In today's banking industry, institutions that do not use artificial intelligence/machine learning (AI/ML) risk losing their competitive edge, as competitors increasingly enhance their strategic decisions with the powerful analytical capabilities of AI/ML. Since the global pandemic in particular, businesses have been accelerating their digital transformation, especially the adoption of AI/ML-based technologies.

According to an IBM survey, 78% of companies say it is important that results obtained from AI are "fair, safe, and reliable." To ensure reliable AI models, companies should consider implementing a system that includes checks for impartiality, transparency, responsibility, and accountability; the right controls and security measures; constant tracking of reliability; and the protection of customer data. In the banking industry, that system is the model risk management (MRM) framework.

Revisiting the Model Risk Management Framework

Most banks have an MRM function, or at least a model validation function in the case of internal ratings-based (IRB) banks. The MRM function is typically performed by dedicated, independent teams reporting to the CRO. While these firms have developed a robust MRM approach to improve the governance and control of the critical models that determine capital requirements and lending decisions, this approach is usually not ideal for AI/ML models or for less regulated modelling areas, as highlighted below.

    1. MRM is often focused more on traditional risk types, primarily financial risks, such as capital adequacy, market risk and credit risk. These may not fully cover the new and more diverse risks arising from the widespread use of AI, such as reputational risk, consumer and conduct risk, and employee risk.

2. Many of the new AI/ML models are very different from traditional model types. The increased model complexity is a key driver of the higher risk associated with AI/ML models.

    3. One key difference between AI/ML and traditional models is that the AI/ML model is expected to continuously learn from the data (autodidactic), identify patterns and refine/change its decision-making process.

This retraining may change the essential properties of the model and its parameterization, so renewed validation and adequacy checks are required. Traditional MRM is typically based on a point-in-time model assessment, e.g., annual validation, which assumes the models remain largely static between reviews. This is clearly inadequate when, for example, a fraud model is retrained weekly to adapt to new scams (a simple drift check of this kind is sketched after this list).

4. AI models are difficult to track across the organization. AI use is becoming more widespread and, in many organizations, decentralized across the enterprise, making it harder for risk managers to track. It is imperative that the model inventory is kept up to date to capture the large number of models from various users, with documentation of their defined characteristics, such as model type, risk tiering, and model usage (an illustrative inventory record is sketched after this list). As such, a governance structure with clearly mapped roles and responsibilities (R&R) and embedded audit trails will be important.

    5. Results based on AI/ML models need to be explainable to all stakeholders, i.e., customers, management, and regulators. As such, transparency and explainability are key requirements to ensure that AI/ML algorithms are functioning properly.
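On point 3, point-in-time validation can be supplemented with an automated drift check that runs whenever a model is retrained. The following is a minimal, illustrative sketch in Python; the function name, the choice of the population stability index (PSI), and the 0.2 threshold are assumptions for illustration, not a prescribed standard. It compares the score distribution of a weekly retrained fraud model against the baseline from its last full validation to decide whether renewed validation should be triggered:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; a larger PSI means a larger shift.

    `expected` holds scores from the last full validation (the baseline),
    `actual` the scores produced after the latest retraining run.
    """
    # Bin edges are taken from the baseline distribution's quantiles.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)
    # Floor the proportions so empty bins do not blow up the logarithm.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical weekly check after the fraud model is retrained.
baseline_scores = np.random.beta(2, 5, 10_000)     # scores at the last full validation
retrained_scores = np.random.beta(2.5, 5, 10_000)  # scores after this week's retrain
psi = population_stability_index(baseline_scores, retrained_scores)
if psi > 0.2:  # 0.2 is a commonly used rule of thumb for a significant shift
    print(f"PSI = {psi:.3f}: escalate the retrained model for renewed validation")
else:
    print(f"PSI = {psi:.3f}: drift within tolerance, log and continue monitoring")
```

In practice, the threshold, the cadence at which the check runs, and the escalation path would be set out in the bank's MRM governance documents rather than hard-coded as above.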
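Point 4, in turn, points to an enterprise-wide model inventory with defined characteristics recorded for every model. A minimal, illustrative record is sketched below; the field names, example values, and risk tiers are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryRecord:
    """One entry in an enterprise model inventory (illustrative schema only)."""
    model_id: str
    model_type: str            # e.g. "gradient boosting", "logistic regression"
    business_usage: str        # e.g. "retail credit decisioning", "fraud detection"
    risk_tier: int             # 1 = highest materiality, 3 = lowest
    owner: str                 # accountable business unit or individual
    last_validated: date
    retrain_frequency: str     # e.g. "weekly", "annual"
    audit_trail: list = field(default_factory=list)  # validation and change events

# Hypothetical entry for a weekly retrained fraud model.
record = ModelInventoryRecord(
    model_id="FRAUD-001",
    model_type="gradient boosting",
    business_usage="card fraud detection",
    risk_tier=1,
    owner="Group Fraud Analytics",
    last_validated=date(2022, 1, 15),
    retrain_frequency="weekly",
)
record.audit_trail.append("2022-01-15: annual validation completed, no material findings")
```

A register of such records, owned by the MRM function, is what keeps decentralized AI use visible to risk managers and auditors.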

Striving for a Modern Model Risk Management Framework

AI/ML algorithms are often embedded in larger AI application systems, such as software-as-a-service (SaaS) offerings from vendors. Risk management cannot be an afterthought or addressed only by model validation functions such as those that currently exist in financial services. Companies need to build risk management directly into their AI initiatives so that oversight is constant and concurrent with both internal development and external provisioning of AI across the enterprise.

    To tackle these challenges without constraining AI innovation and disrupting the agile ways of working that enable it, banks need to adopt a new approach to their existing MRM framework.


1. Data pre-/post-processing controls: data pipeline testing, data sourcing analysis, statistical data checks, and data-usage fairness (a minimal example of such checks is sketched after this list).

2. To build a model that achieves good performance, stakeholders other than the model developers, such as business, IT, risk, and compliance, should be engaged to check whether the model actually solves the problems stated during ideation: model-robustness review, business-context metrics testing, data-leakage controls, label-quality assessment, and data availability.

    3. Upon completing the model or identifying a few shortlisted models, evaluate the performance of the model(s) and engage with the model owner regularly to ensure it fits business usage before moving the model into production or deployment.

4. Evaluation and monitoring at every stage of the MLOps and MRM life cycle. Tools such as model interpretability, bias detection, and performance monitoring should be built in so that oversight is constant, concurrent with AI development activities, and consistent across the enterprise.
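As a minimal sketch of the kind of built-in, continuous checks described in points 1 and 4 (the function names, thresholds, and metric choices below are assumptions for illustration, not regulatory requirements), a deployed model could run basic data-quality and performance tests on every scoring batch and route any alerts into the governance structure described above:

```python
import numpy as np

def data_quality_checks(batch, feature_ranges):
    """Basic statistical checks on an incoming feature matrix (rows = records)."""
    issues = []
    if np.isnan(batch).mean() > 0.05:  # more than 5% missing values overall
        issues.append("missing-value rate above 5%")
    for j, (lo, hi) in enumerate(feature_ranges):
        out_of_range = ((batch[:, j] < lo) | (batch[:, j] > hi)).mean()
        if out_of_range > 0.01:        # more than 1% of values outside the agreed range
            issues.append(f"feature {j}: {out_of_range:.1%} of values outside [{lo}, {hi}]")
    return issues

def performance_check(y_true, y_score, auc_floor=0.70):
    """Rank-based AUC (Mann-Whitney); flags degradation below an agreed floor."""
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    if n_pos == 0 or n_neg == 0:       # AUC is undefined on a single-class batch
        return []
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    auc = (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return [f"AUC {auc:.3f} below floor {auc_floor}"] if auc < auc_floor else []

# Hypothetical scoring batch: two features with agreed valid ranges, plus outcomes.
rng = np.random.default_rng(0)
batch = rng.normal(size=(500, 2))
labels = rng.integers(0, 2, size=500)
scores = rng.uniform(size=500)
alerts = data_quality_checks(batch, [(-4.0, 4.0), (-4.0, 4.0)]) + performance_check(labels, scores)
for alert in alerts:
    print("ALERT:", alert)
```

In a production setting, such alerts would typically feed a monitoring dashboard and the escalation and audit-trail mechanisms defined in the MRM framework, rather than being printed.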

    Conclusion

The technological challenges posed by new uses of data and innovative applications of AI have been addressed over the past few years. Over the same period, however, there have been many reports of AI models going awry, from gender and race discrimination in loan applications to the misidentification of images of people of certain races. This serves as a reminder that using AI can create significant risks. Especially in a highly regulated environment like banking, where the cost of not properly addressing these model risks can be high, organizations must adapt quickly to address the AI/ML model risk challenges.

