Harnessing Explainability to Improve ML Ensemble Resilience

Abraham Chan, Arpan Gujarati, Karthik Pattabiraman and Sathish Gopalakrishnan, To appear in the Supplementary proceedings of the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2024. Disrupt Track. (Acceptance Rate: TBD) [ PDF | Talk ]

Abstract: Safety-critical applications, such as healthcare and autonomous vehicles, utilize machine learning (ML), where mispredictions can have disastrous consequences. Training data can contain faults, especially when collected through crowd-sourcing. Ensembles, in which multiple ML models vote on predictions, have been found to be an effective resilience technique. Ensembles are resilient when their constituent models behave independently during inference by focusing on different features of an input. However, independence is not observed on every input, resulting in mispredictions. One way to improve ensemble resilience is to dynamically weight the predictions of the constituent models during inference, rather than treating each model equally. While previous work on dynamically weighted ensembles has relied on output diversity metrics for efficiency, we focus on the feature space of inputs for accuracy. Hence, we propose using explainable artificial intelligence (XAI) techniques to dynamically adjust the weights of ensemble models based on local feature-space diversity.
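The weighting idea in the abstract could be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual method: the function name `xai_weighted_vote`, the use of cosine distance between per-model feature attributions as the "local feature-space diversity" signal, and the rule of up-weighting models whose attributions diverge from the ensemble mean are all assumptions made here for illustration.

```python
import numpy as np

def xai_weighted_vote(probs, attributions):
    """Hypothetical XAI-based dynamic weighting for one input.

    probs:        (m, c) array of per-model class probabilities.
    attributions: (m, d) array of per-model feature-attribution vectors
                  (e.g., as produced by some XAI technique).

    Assumed rule: models whose attributions diverge from the mean
    attribution get higher weight, rewarding independent feature use.
    """
    # Normalize each model's attribution vector to unit length.
    attributions = attributions / (
        np.linalg.norm(attributions, axis=1, keepdims=True) + 1e-9)
    mean_attr = attributions.mean(axis=0)
    mean_attr = mean_attr / (np.linalg.norm(mean_attr) + 1e-9)
    # Diversity = 1 - cosine similarity to the mean attribution.
    diversity = 1.0 - attributions @ mean_attr
    weights = diversity + 1e-9          # keep weights strictly positive
    weights = weights / weights.sum()   # normalize to sum to 1
    return weights @ probs              # (c,) weighted-average probabilities
```

For example, with three models whose attributions focus on different input features, the model most aligned with the ensemble consensus would receive the smallest vote weight under this (assumed) rule.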
