AI-based Models and Model Risk

AI-based models are more susceptible to certain risks than their conventional counterparts unless they are put through proper governance and oversight mechanisms. We will look at some of the model risks to which AI models are more susceptible than their conventional counterparts.

Kasthuri Rangan Bhaskar
VP, Financial Services Practice & Risk SME (Lead) at BCT Digital

Model Risk in AI Models

Bias Risk: This occurs when the results of an ML model are skewed in favour of or against a particular cross-section of the population. This can happen for several reasons:

  • The training data selected may not have been representative enough, either intentionally or unintentionally
  • The fundamental characteristics of the universe have changed since the model was last trained (for example, income distribution used in the training data has undergone a dramatic change)
  • In certain cases, interaction among the variables may result in bias that is not readily noticeable

A classic example of model bias that became public in recent times, albeit in AI-based recruiting, is Amazon's scrapped hiring tool:
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
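A simple first-line check for bias of this kind is to compare selection rates across population groups. The sketch below is illustrative only: the group names, sample decisions, and the 0.8 "four-fifths rule" threshold are assumptions for demonstration, not figures from this article.

```python
# Minimal sketch of a disparate-impact check on a model's binary decisions.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval decisions for two groups
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 approved
}
ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}")  # well below 0.8 -> investigate
```

A ratio well below the chosen threshold does not prove unlawful bias, but it flags the model for the kind of human review the article argues for.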

‘Black-Box’ Risk: This may occur where highly complex models are used and the relationship between the output and the causal factors is not explainable to everyday business users, turning the ML models into a ‘black box’. This is particularly common where vendor models are used. Wherever models turn into black boxes, the users become detached from them. Every decision then follows a ‘the model says so’ approach, rather than allowing the owners/users to apply expert judgement to complement the results. The suitability and appropriateness of the model become difficult to evaluate due to this opacity. Such an approach also precludes effective feedback from the model owners/users back to the model development team.

This also poses challenges to regulators in validating the models. Read more: https://www.centralbanking.com/central-banks/financial-stability/micro-prudential/3504951/black-box-models-present-challenges-us-regulators
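One common mitigant for this opacity is a model-agnostic explainability technique such as permutation importance, which measures how much a model's score degrades when one input feature is shuffled across records. The sketch below is a minimal illustration under stated assumptions: the toy scoring function, feature names, and data are invented for demonstration and do not represent any vendor's actual model.

```python
import random

def model_score(row):
    """Stand-in for an opaque vendor model (a linear toy, for illustration)."""
    return 0.6 * row["income"] + 0.3 * row["repayment"] + 0.1 * row["age"]

def score(rows, targets, predict):
    """Negative mean absolute error: higher is better, 0 is perfect."""
    return -sum(abs(predict(r) - t) for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, predict, feature, seed=0):
    """How much the score drops when one feature is shuffled across rows.
    A near-zero drop suggests the model barely relies on that feature."""
    base = score(rows, targets, predict)
    values = [r[feature] for r in rows]
    random.Random(seed).shuffle(values)
    permuted = [{**r, feature: v} for r, v in zip(rows, values)]
    return base - score(permuted, targets, predict)

rows = [
    {"income": 40, "repayment": 0.9, "age": 30, "zip_code": 1001},
    {"income": 80, "repayment": 0.4, "age": 45, "zip_code": 2002},
    {"income": 55, "repayment": 0.7, "age": 52, "zip_code": 3003},
    {"income": 95, "repayment": 0.8, "age": 38, "zip_code": 4004},
]
targets = [model_score(r) for r in rows]  # toy targets matching the model

for feat in ("income", "repayment", "age", "zip_code"):
    print(feat, round(permutation_importance(rows, targets, model_score, feat), 3))
```

Because the toy model never reads `zip_code`, its importance comes out as exactly zero; in practice, a report like this gives business users and regulators a way to question what a black-box model is actually relying on.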

Regulatory/Legal Risk: The usage of ML models, unless subject to tight governance and oversight processes, may result in legal risk. This is typically the outcome of other risks, such as bias risk or ‘black-box’ risk. A classic example is when Facebook was sued by the US Department of Housing and Urban Development for using tools that discriminated against certain sections of society in housing-related advertisements: https://www.theverge.com/2019/3/28/18285178/facebook-hud-lawsuit-fair-housing-discrimination

A detailed article on the topic can be found here: https://www.americanbar.org/groups/business_law/publications/committee_newsletters/banking/2019/201904/fa_4/

Technology Risk: Some regulators have sounded the alarm on the threat of AI algorithms or data being hijacked by criminals. The fear stems from the fact that some facets of an algorithm may not be explainable by intuition or by an expert, providing a chink in the armour for cybercriminals to manipulate the data or the algorithm to skew results in their favour. Australian Prudential Regulation Authority Executive Board Member Geoff Summerhayes issued a warning on this some time back: https://www.insurancebusinessmag.com/au/news/breaking-news/apra-leader-sounds-alarm-on-ai-use-96353.aspx

To conclude, AI-based algorithms have to be subjected to human oversight and discretion, lest they produce unintended consequences. They have to be aligned with an institution’s Model Risk Management Framework, which we will discuss in the next part.

Authors

Kasthuri Rangan Bhaskar

VP, Financial Services Practice & Risk SME (Lead) at BCT Digital

Mr. Kasthuri is the Risk SME (Lead) at Bahwan CyberTek, with deep expertise in Market Risk and Credit Risk and over 15 years of experience in the BFSI sector. He has worked with some of the large mainstream BFSI labels in the country.
