By: Doug Hague, Executive Director, UNC Charlotte’s School of Data Science
As data scientists work to understand the ethics and implications of their models, they need a framework for managing them. Fortunately, the Model Risk Management (MRM) framework that emerged from the financial services industry can be expanded to include ethics. Models across industries, including resume screeners, recidivism models, and healthcare payment models, have been shown to be biased against users or protected groups, and have generated negative publicity for corporations found to be using them. As data scientists develop methods to manage bias, MRM can help document those methods and ensure that best practices are followed. My focus here is applying MRM processes to the mathematical biases of a model; however, the MRM framework is also applicable when broadening to fairness and the overall ethical implications of data science.
In simple terms, MRM is a process that reviews and monitors model development and operations. It consists of examining data quality, mathematical soundness, quality of predictions, appropriate use, and ongoing monitoring, all through independent review and validation. In each of these areas, bias may creep into a model’s prediction.
Data: If the data is biased at the start (as most data is), MRM provides checks and balances to remove as much bias as possible through management of the input data, e.g., selective sampling, ensuring representative data, and so on. Older methods, such as removing protected variables, are still necessary but no longer sufficient, because other correlated variables will reintroduce bias into the predictions.
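As a minimal sketch of the representative-sampling idea above, the helper below draws a training sample whose group mix matches a chosen target proportion. The record layout, group values, and target fractions are all hypothetical illustrations, not part of any MRM standard.

```python
import random

def representative_sample(records, group_key, targets, n, seed=0):
    """Draw n records so each group appears in its target proportion.

    targets maps group value -> desired fraction (summing to 1).
    Sampling is with replacement, so an under-represented group can
    still reach its target share of the training sample.
    """
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    sample = []
    for group, frac in targets.items():
        k = round(n * frac)
        pool = by_group.get(group, [])
        if pool:
            sample.extend(rng.choices(pool, k=k))
    return sample

# Hypothetical source data with an 80/20 group imbalance,
# resampled to a 50/50 target mix for training.
data = [{"group": "A", "x": i} for i in range(80)] + \
       [{"group": "B", "x": i} for i in range(20)]
balanced = representative_sample(data, "group", {"A": 0.5, "B": 0.5}, n=100)
counts = {g: sum(1 for r in balanced if r["group"] == g) for g in ("A", "B")}
print(counts)  # {'A': 50, 'B': 50}
```

Resampling with replacement is only one option; reweighting examples during training achieves a similar effect without duplicating records.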
Math: It is important to understand the implications of the mathematical techniques used during model development. For example, it may be important for the mathematics to show why a particular result was produced. Explainability, especially for models once considered black boxes such as neural networks, becomes critical to enabling some use cases and is therefore required during validation and in production.
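One simple, model-agnostic way to probe why a model produces its results is permutation importance: shuffle one feature and measure how much the error grows. The sketch below uses a toy linear "model" and hand-built data purely for illustration; in practice a validator would apply the same idea (or richer tools such as SHAP) to the real model.

```python
import random

# Toy model for illustration: relies entirely on feature 0,
# ignores feature 1 (hypothetical example, not a real system).
def model(row):
    return 3.0 * row[0] + 0.0 * row[1]

def permutation_importance(model, X, y, feature, seed=0):
    """Increase in mean squared error when one feature is shuffled.

    Features whose shuffling hurts accuracy the most are the ones
    the model actually relies on -- a basic explainability check.
    """
    rng = random.Random(seed)

    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    base = mse(X)
    col = [r[feature] for r in X]
    rng.shuffle(col)
    shuffled = [list(r) for r in X]       # copy so X is untouched
    for r, v in zip(shuffled, col):
        r[feature] = v
    return mse(shuffled) - base

X = [[i, i % 5] for i in range(50)]
y = [3.0 * x0 for x0, _ in X]
print(permutation_importance(model, X, y, 0) >
      permutation_importance(model, X, y, 1))
# True: the model leans on feature 0, not feature 1
```

A validator can run this check per feature and flag any case where a proxy for a protected variable carries outsized importance.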
Performance: When examining the quality of model predictions, MRM can ensure not only that the full data set is examined, but also that outcomes for protected subgroups are as similar as possible. This may mean detuning overall performance to achieve a less biased outcome. MRM should require debate and internal transparency around these choices. One item of note: while protected variables should not be used during development, they should be available during validation to determine whether bias exists in the performance.
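The subgroup check described above can be sketched as follows: the protected attribute, held out of training, is joined back in during validation to compare per-group accuracy. The labels, predictions, and group values are hypothetical.

```python
def subgroup_accuracy(y_true, y_pred, groups):
    """Per-group accuracy plus the largest gap between groups.

    A large gap signals that model quality is not shared equally
    across protected subgroups and warrants debate before release.
    """
    stats = {}
    for t, p, g in zip(y_true, y_pred, groups):
        hit, total = stats.get(g, (0, 0))
        stats[g] = (hit + (t == p), total + 1)
    acc = {g: hit / total for g, (hit, total) in stats.items()}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Hypothetical validation set with protected group recorded per row.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
acc, gap = subgroup_accuracy(y_true, y_pred, groups)
print(acc, gap)  # {'A': 0.75, 'B': 0.75} 0.0
```

Accuracy is just one lens; the same pattern applies to false-positive rates or other metrics, depending on which fairness definition the team has agreed to target.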
Appropriate use: MRM limits the reuse of models outside the data and assumptions made during development. Reusing models makes data scientists much more efficient; MRM ensures that this reuse does not introduce ethical problems. For example, does a model developed in Asia apply in the US, where different protected variables are important? Sometimes the questions and checks MRM poses are easy to answer; other times they are not. Ensuring that the questions are asked and answered goes a long way toward more ethical models.
Monitoring: One of the more important process checks in MRM is monitoring models in production, because their behavior will drift. This is true both for static models and for those auto-tuned frequently, although in the former case performance drifts and in the latter case parameters drift. As models drift, bias tends to creep back into their performance as well. Adding a bias check alongside the performance check during model monitoring will enable redevelopment at the appropriate times.
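One common drift check from the financial-services world is the Population Stability Index (PSI), which compares the score distribution at development time against the distribution seen in production. The sketch below is a minimal implementation on made-up score samples; the 0.1 and 0.25 thresholds are conventional rules of thumb, not formal standards.

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two score samples.

    Rule-of-thumb reading: PSI < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 significant drift (conventional, not standardized).
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for v in sample:
            counts[sum(v > e for e in edges)] += 1
        # small floor avoids log(0) when a bin is empty
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                  # development scores
current = [i / 100 for i in range(100)]                   # identical: no drift
shifted = [min(i / 100 + 0.3, 0.99) for i in range(100)]  # drifted scores
print(psi(baseline, current) < 0.1)   # True
print(psi(baseline, shifted) > 0.25)  # True
```

The same comparison can be run per protected subgroup, turning a standard performance-drift check into the bias check the paragraph above calls for.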
Validation: Independent validation and monitoring of a model is a great way to ensure that different stakeholders and viewpoints are considered. This can be done through a separate reporting chain, as is common in financial services companies, or at a minimum through peer review. An outside perspective prevents tunnel vision and provides some initial diversity of understanding. Best practice is to include validators who have different and relevant life experiences.
Applying the MRM framework to its model development practices can help a company better understand and reduce the risk of operating models with troubling ethical outcomes. Adding bias checks and assurances throughout the MRM process is one step that can help data science practitioners manage the bias and ethical considerations in their work.
Originally published in 97 Things About Ethics Everyone In Data Science Should Know, ed. Bill Franks, O'Reilly Media, Aug 2020