TensorFI: A Configurable Fault Injector for TensorFlow Applications. Zitao Chen, Niranjhana Narayanan, Bo Fang, Guanpeng Li, Karthik Pattabiraman, and Nathan DeBardeleben. IEEE International Symposium on Software Reliability Engineering (ISSRE), 2020. (Acceptance rate: 26%) [ PDF | Talk ] (Code)
Abstract: As machine learning (ML) sees increasing adoption in safety-critical domains (e.g., autonomous vehicles), the reliability of ML systems has grown in importance. While prior studies have proposed techniques for efficient error resilience (e.g., selective instruction duplication), a fundamental prerequisite for applying these techniques is a detailed understanding of the application's resilience. In this work, we present TensorFI, a high-level fault injection (FI) framework for TensorFlow-based applications. TensorFI can inject both hardware and software faults into general TensorFlow programs, and it is configurable, flexible, easy to use, and portable. It can be integrated into existing TensorFlow programs to assess their resilience under different fault types (e.g., faults in particular operators). We use TensorFI to evaluate the resilience of 12 ML programs, including DNNs used in the autonomous vehicle domain. The results give us insights into why some of the models are more resilient than others. We also present two case studies to demonstrate the usefulness of the tool. TensorFI is publicly available at https://github.com/DependableSystemsLab/TensorFI.
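One of the hardware fault models used by tools like TensorFI is a single bit-flip in an operator's output value. As an illustration of that fault model only (a plain-Python sketch, not TensorFI's actual implementation or API), flipping one bit of a 32-bit float and probabilistically corrupting a tensor's elements might look like:

```python
import struct
import random

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0-31) of a 32-bit IEEE-754 float.

    This mimics a transient hardware fault in a single data word.
    """
    packed = struct.unpack("<I", struct.pack("<f", value))[0]
    flipped = packed ^ (1 << bit)
    return struct.unpack("<f", struct.pack("<I", flipped))[0]

def inject(tensor, prob=0.01, rng=random):
    """Corrupt each element with probability `prob` by flipping a random bit.

    `tensor` is a flat list of floats here; a real injector would operate
    on the outputs of selected TensorFlow operators instead.
    """
    return [flip_bit(x, rng.randrange(32)) if rng.random() < prob else x
            for x in tensor]

# Flipping the sign bit (bit 31) negates the value.
print(flip_bit(1.0, 31))  # -1.0
```

Which bit is hit determines the outcome's severity: a flip in the exponent bits can change a value by orders of magnitude, while a flip in the low mantissa bits may be masked entirely, which is one reason resilience varies across models and operators.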