AI-based Models and Model Risk

AI-based models are more susceptible to certain risks than their conventional counterparts unless they are subjected to proper governance and oversight mechanisms. In this article, we look at some of the model risks to which AI models are more exposed than their conventional counterparts.

Kasthuri Rangan Bhaskar
VP, Financial Services Practice & Risk SME (Lead) at BCT Digital

Model Risk in AI Models

Bias Risk: This occurs when the results of an ML model are skewed in favour of or against a particular cross-section of the population. It can arise for various reasons:

  • The training data selected may not be representative enough, whether intentionally or unintentionally
  • The fundamental characteristics of the underlying population may have changed since the model was last trained (for example, the income distribution reflected in the training data has undergone a dramatic shift)
  • In certain cases, interactions among the variables may produce bias that is not readily noticeable

A classic example of model bias that became public in recent times, in this case in an AI-based recruiting model, is this:
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
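One simple way to surface bias of this kind is to compare a model's selection rates across population groups. The sketch below is purely illustrative: the group labels, decision data, and the 0.8 "four-fifths rule" threshold are assumptions for demonstration, not part of any specific framework discussed above.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the approval rate for each group.

    `outcomes` is a list of (group, approved) pairs, where
    `approved` is True or False. Group names and data here
    are hypothetical.
    """
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest selection rate.

    A common heuristic (the 'four-fifths rule') flags values
    below 0.8 as a potential sign of disparate impact.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions for two applicant groups
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50

rates = selection_rates(decisions)   # {'A': 0.8, 'B': 0.5}
print(disparity_ratio(rates))        # 0.625 -> worth investigating
```

A check like this is only a first-line signal; a low ratio warrants deeper investigation, not an automatic conclusion of bias.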

‘Black-Box’ Risk: This may occur where highly complex models are used and the relationship between the output and its causal factors cannot be explained to business users, turning the ML models into a ‘black box’. This is particularly common where vendor models are used. Wherever models turn into black boxes, the users become detached from them. Everything then follows a ‘the model says so’ approach, rather than allowing the owners/users to apply expert judgment to complement the results. The suitability and appropriateness of the model become difficult to evaluate in such cases because of this opacity. This approach also prevents effective feedback from the model owners/users back to the model development team.

This also poses challenges to regulators in validating the models. Read more: https://www.centralbanking.com/central-banks/financial-stability/micro-prudential/3504951/black-box-models-present-challenges-us-regulators
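Model-agnostic probes can partially open such a black box even when its internals are unavailable. A minimal sketch of one such technique, permutation feature importance, is shown below; the `black_box` function is a stand-in for an opaque vendor model, and all data is synthetic.

```python
import random

def black_box(x):
    # Stand-in for an opaque vendor model: in a real black-box
    # setting, this relationship is hidden from the user.
    return 3.0 * x[0] + 0.1 * x[1]

def mse(model, X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, rng):
    """Shuffle one feature's values and measure how much the
    model's error grows; a larger increase means the model
    leans more heavily on that feature."""
    base = mse(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - base

rng = random.Random(0)
X = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(200)]
y = [black_box(x) for x in X]

imp0 = permutation_importance(black_box, X, y, 0, rng)
imp1 = permutation_importance(black_box, X, y, 1, rng)
print(imp0 > imp1)  # True: feature 0 dominates the output
```

Probes like this do not replace explainability by design, but they give owners and users a basis for the expert judgment and feedback loop discussed above.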

Regulatory/Legal Risk: The use of ML models, unless subject to tight governance and oversight processes, may result in legal risk. This is typically the outcome of other risks, such as bias risk or ‘black-box’ risk. A classic example is Facebook being sued by the US Department of Housing and Urban Development for using tools that discriminated against certain sections of society in housing-related advertisements: https://www.theverge.com/2019/3/28/18285178/facebook-hud-lawsuit-fair-housing-discrimination

A detailed article on the topic can be found here: https://www.americanbar.org/groups/business_law/publications/committee_newsletters/banking/2019/201904/fa_4/

Technology Risk: Some regulators have sounded the alarm on the threat of AI algorithms or data being hijacked by criminals. The fear stems from the fact that some facets of an algorithm may not be explainable by intuition or by an expert, providing a chink in the armour for cybercriminals to manipulate the data or the algorithm and skew results in their favour. Australian Prudential Regulation Authority Executive Board Member Geoff Summerhayes issued a warning on this some time ago: https://www.insurancebusinessmag.com/au/news/breaking-news/apra-leader-sounds-alarm-on-ai-use-96353.aspx

To conclude, AI-based algorithms have to be subjected to human oversight and discretion, lest they produce unintended consequences. They have to be aligned with an institution's Model Risk Management Framework, which we will discuss in the next part.

Authors

Kasthuri Rangan Bhaskar

VP, Financial Services Practice & Risk SME (Lead) at BCT Digital

Mr. Kasthuri is the Risk SME (Lead) at Bahwan CyberTek, with deep experience in Market Risk and Credit Risk and over 15 years in the BFSI sector. He has worked with some of the large mainstream BFSI institutions in the country.
