Lucas Palazzi, Guanpeng Li, Bo Fang, and Karthik Pattabiraman, IEEE Transactions on Dependable and Secure Computing (TDSC). (Acceptance date: March 2020). [PDF] (Code)
This paper is an extended version of the following conference paper:
Abstract: Fault injection (FI) is a commonly used experimental technique for evaluating the resilience of software techniques that tolerate hardware faults. Software-implemented FI can be performed at different levels of abstraction in the system stack; FI performed at the compiler's intermediate representation (IR) level has the advantage that it is closer to the program being evaluated, and hence it is easier to derive insights from for the design of software fault-tolerance mechanisms. Unfortunately, it is not clear how accurate IR-level FI is vis-à-vis FI performed at the assembly-code level, and prior work has presented contradictory findings. In this paper, we perform a comprehensive evaluation of the accuracy of IR-level FI across a range of benchmark programs and compiler optimization levels. Our results show that IR-level FI is as accurate as assembly-level FI for estimating silent data corruption (SDC) probabilities across different benchmarks and optimization levels. Further, we present a machine-learning-based technique for improving the accuracy of crash-probability measurements made by IR-level FI, which exploits an observed correlation between program crash probabilities and instructions that operate on memory-address values. We find that this technique makes IR-level FI comparable in accuracy to assembly-level FI for measuring crash probabilities.
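To make the FI terminology concrete, the following is a minimal, hypothetical sketch (not the paper's tool) of a software-implemented bit-flip injection campaign. It estimates the SDC probability of a toy program by comparing each faulty run against the fault-free ("golden") output; the toy program and all function names are illustrative assumptions.

```python
import random

def flip_bit(value: int, width: int = 64) -> int:
    """Model a single-bit transient hardware fault in a width-bit value."""
    return value ^ (1 << random.randrange(width))

def compute(x: int, inject: bool = False) -> int:
    """Toy program under test (illustrative only)."""
    a = x * 3
    if inject:
        a = flip_bit(a)   # corrupt one intermediate value
    return a & 0xFF       # only the low byte is output: high-bit faults are masked

def estimate_sdc(x: int, trials: int = 1000) -> float:
    """Estimate SDC probability: the fraction of injections whose output
    silently differs from the fault-free (golden) output."""
    golden = compute(x)
    sdc = sum(compute(x, inject=True) != golden for _ in range(trials))
    return sdc / trials
```

In a real IR-level tool, the injection point would be a dynamic LLVM IR instruction rather than a Python variable, and crashes (e.g., segmentation faults) would be a third outcome class alongside benign runs and SDCs.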