Unless they are put through proper governance and oversight mechanisms, AI-based models are more susceptible to certain risks than their conventional counterparts. In this part, we look at some of the model risks to which AI models are more exposed than conventional models.
Model Risk in AI Models
Bias Risk: This arises when the results of an ML model are skewed in favour of or against a particular cross-section of the population. This can happen for various reasons (a brief sketch after this list shows how such skew might be surfaced in practice):
- The training data selected may not have been representative enough, either intentionally or unintentionally
- The fundamental characteristics of the universe have changed since the model was last trained (for example, income distribution used in the training data has undergone a dramatic change)
- In certain cases, interaction among the variables may result in bias that is not readily noticeable
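As a minimal illustration (not from the original article), the hypothetical sketch below shows two simple checks a model owner might run: a disparate-impact ratio comparing favourable-outcome rates across demographic groups, and a Kolmogorov-Smirnov test for drift in a feature's distribution since training. All data, names and thresholds are illustrative placeholders.

```python
# Illustrative sketch (hypothetical data and thresholds): two simple
# checks a model owner might run for bias and for data drift.
import numpy as np
from scipy.stats import ks_2samp

def disparate_impact(outcomes: np.ndarray, groups: np.ndarray,
                     protected: str, reference: str) -> float:
    """Ratio of favourable-outcome rates, protected vs reference group.
    Values well below 1.0 (e.g. below 0.8, the common 'four-fifths'
    rule of thumb) suggest results skewed against the protected group."""
    return (outcomes[groups == protected].mean()
            / outcomes[groups == reference].mean())

def drift_detected(train_feature: np.ndarray, live_feature: np.ndarray,
                   alpha: float = 0.05) -> bool:
    """Kolmogorov-Smirnov test: has a feature's distribution (e.g.
    income) shifted materially since the model was trained?"""
    return ks_2samp(train_feature, live_feature).pvalue < alpha

# Hypothetical usage on synthetic data
rng = np.random.default_rng(0)
approvals = rng.integers(0, 2, size=1000)       # 1 = favourable outcome
segments = rng.choice(["A", "B"], size=1000)    # demographic segments
print("Disparate impact:", disparate_impact(approvals, segments, "A", "B"))

income_train = rng.lognormal(mean=10.0, sigma=0.5, size=1000)
income_live = rng.lognormal(mean=10.4, sigma=0.5, size=1000)  # shifted
print("Drift detected:", drift_detected(income_train, income_live))
```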
A classic example of model bias that became public in recent times comes from AI model based recruiting: Amazon reportedly scrapped an ML recruiting tool after it was found to systematically downgrade résumés from women, a skew traced back to the historical hiring data on which it was trained.
‘Black-Box’ Risk: This may occur where highly complex models are used and the relationship between the output and the causal factors is not explainable to common business users, turning the ML model into a ‘black box’. This is particularly common where vendor models are used. Wherever models turn into black boxes, the users become detached from them. Everything then follows a ‘the model says so’ approach, rather than allowing the owners/users to apply expert judgement to complement the results. The suitability and appropriateness of the model become difficult to evaluate due to this opacity, and there can be no effective feedback from the model owners/users back to the model development team. This also poses challenges to regulators in validating the models. Read more: ‘Black box’ models present challenges – US regulators
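One generic way to pry open such a black box (a common model-agnostic technique, not one the article prescribes) is permutation importance: shuffle each input in turn and measure how much the model's performance degrades, so business users can at least see which inputs drive the output. A minimal sketch using scikit-learn, with a gradient-boosting model standing in for a complex or vendor-supplied one:

```python
# Illustrative sketch: model-agnostic explainability for a 'black box'.
# The gradient-boosting model stands in for a complex or vendor model;
# permutation importance works for any fitted estimator.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that input.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```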
Regulatory/Legal Risk: Usage of ML models, unless subjected to tight governance and oversight processes, may result in legal risks. This is typically the outcome of other risks such as bias risk or ‘black box’ risk. A classic example is Facebook being sued by the US Department of Housing and Urban Development for using tools that discriminated against certain sections of society in housing-related advertisements.
Technology Risk: Some regulators have sounded the alarm on the threat of AI algorithms or data being hijacked by criminals. The fear stems from the fact that some facets of an algorithm may not be explainable by intuition or by an expert, providing a chink in the armour for cyber criminals to manipulate the data or the algorithm and skew results in their favour. Australian Prudential Regulation Authority Executive Board Member Geoff Summerhayes sounded a warning on this some time back.
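To make the threat concrete (a textbook illustration on synthetic data, not drawn from the article or the regulator's remarks), the sketch below shows how an attacker who learns a linear model's weights can nudge a rejected input just enough to flip the model's decision:

```python
# Illustrative sketch on synthetic data: an attacker who learns a
# linear model's weights can nudge a rejected input past the boundary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 1.5]) > 0).astype(int)

model = LogisticRegression().fit(X, y)
w = model.coef_[0]  # in practice: leaked, reverse-engineered or probed

# Take the rejected (class 0) input closest to the decision boundary...
rejected = X[model.predict(X) == 0]
x = rejected[np.argmax(model.decision_function(rejected))]
print("Before attack:", model.predict(x.reshape(1, -1))[0])  # 0

# ...and push it a small step along the weight vector, the direction
# that most increases the model's score, flipping the decision to 1.
epsilon = 0.5
x_adv = x + epsilon * w / np.linalg.norm(w)
print("After attack: ", model.predict(x_adv.reshape(1, -1))[0])
```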
To conclude, AI-based algorithms have to be subjected to human oversight and discretion, lest they produce unintended consequences. They also have to be aligned with an institution’s Model Risk Management Framework, which we will discuss in the next part.
Author
Kasthuri Rangan Bhaskar
VP, Financial Services Practice & Risk SME (Lead) at BCT Digital
Mr. Kasthuri is the Risk SME (Lead) at Bahwan CyberTek, with profound experience in Market Risk & Credit Risk and over 15 years of experience in the BFSI sector. He has worked with some of the large mainstream BFSI institutions in the country.