Understanding the Resilience of Neural Network Ensembles against Faulty Training Data

Abraham Chan, Niranjhana Narayanan, Arpan Gujarati, Karthik Pattabiraman, and Sathish Gopalakrishnan, IEEE International Conference on Software Quality, Reliability and Security (QRS), 2021. Full paper (Acceptance Rate: 25.1%) [ PDF | Talk | Video ] Best Paper Award (1 of 3)

Abstract: Machine learning (ML) is becoming more prevalent in safety-critical systems like autonomous vehicles and medical imaging. Faulty training data, where data is mislabelled, missing, or duplicated, can increase the chance of misclassification, resulting in serious consequences. In this paper, we evaluate the resilience of ML ensembles against faulty training data, in order to understand how to build better ensembles. To support our evaluation, we develop a fault injection framework to systematically mutate training data, and introduce two diversity metrics that capture the distribution and entropy of predicted labels. Our experiments find that ensemble learning is more resilient than any individual model, and that high-accuracy neural networks are not necessarily more resilient to faulty training data. We also find diminishing returns in resilience as the number of models in an ensemble increases. These findings can help ML developers build ensembles that are both more resilient and efficient.
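The abstract describes two mechanisms: fault injection that mutates training data (mislabelled, missing, or duplicated samples) and diversity metrics over the labels predicted by ensemble members. The paper's actual framework is not reproduced here; the following is a minimal, hypothetical Python sketch of those two ideas, where all function names, signatures, and the use of NumPy are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training-data fault injection (hypothetical sketch) ---

def mislabel(X, y, rate, num_classes):
    """Reassign a fraction `rate` of labels to a different random class."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    # Adding a nonzero offset mod num_classes guarantees a changed label.
    y[idx] = (y[idx] + rng.integers(1, num_classes, size=len(idx))) % num_classes
    return X, y

def drop(X, y, rate):
    """Remove a fraction `rate` of samples (missing data)."""
    keep = rng.choice(len(y), size=int((1 - rate) * len(y)), replace=False)
    return X[keep], y[keep]

def duplicate(X, y, rate):
    """Append duplicates of a fraction `rate` of samples."""
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    return np.concatenate([X, X[idx]]), np.concatenate([y, y[idx]])

# --- Diversity metric over ensemble predictions (hypothetical sketch) ---

def prediction_entropy(preds):
    """Mean Shannon entropy of the per-sample label distribution
    across ensemble members.

    preds: int array of shape (num_models, num_samples), where
    preds[m, i] is model m's predicted label for sample i.
    """
    num_models, num_samples = preds.shape
    total = 0.0
    for i in range(num_samples):
        _, counts = np.unique(preds[:, i], return_counts=True)
        p = counts / num_models
        total += -(p * np.log2(p)).sum()
    return total / num_samples
```

Under this sketch, one would train each ensemble member on an independently mutated copy of the training set, then compare accuracy and the entropy metric as the mutation rate and ensemble size vary.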
