Elaine Yao, Pritam Dash and Karthik Pattabiraman, Proceedings of the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2023. (Acceptance rate: 20%) [ PDF | Talk ] (Code)
Abstract: Swarm robotics, and drone swarms in particular, are increasingly used in safety-critical tasks. While much attention has been devoted to making swarm control algorithms more intelligent, the security implications of their design choices have not been studied. We show how an attacker can exploit vulnerabilities in swarm control algorithms to disrupt drone swarms. Specifically, the attacker can target one swarm member (the target drone) through GPS spoofing and indirectly cause other swarm members (the victim drones) to veer off course and collide with an obstacle. We call these Swarm Propagation Vulnerabilities. In this paper, we introduce SwarmFuzz, a fuzzing framework that captures this attacker capability and efficiently finds such vulnerabilities in swarm control algorithms. SwarmFuzz combines graph theory and gradient-guided optimization to find potential attack parameters. Our evaluation on a popular swarm control algorithm shows that SwarmFuzz achieves an average success rate of 48.8% in finding vulnerabilities, with a 10x higher success rate and a 3x lower runtime than random fuzzing. We also find that, for a given spoofing distance, larger swarms are more vulnerable to this type of attack.
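To give a feel for the gradient-guided part of the search, the sketch below shows one way a fuzzer might tune GPS-spoofing attack parameters (spoofing start time and offset) to maximize how far a victim drone is pushed off course. This is a minimal illustration with assumed names (`victim_deviation`, `gradient_guided_search`) and a toy objective standing in for the swarm simulator; it is not the SwarmFuzz implementation, and it omits the graph-theoretic step that selects which target drone to spoof.

```python
# Illustrative sketch only: a gradient-guided search over GPS-spoofing attack
# parameters (spoofing start time and spoofing offset), in the spirit of the
# fuzzing loop described in the abstract. The objective below is a hypothetical
# stand-in for running the swarm simulator; it is NOT the SwarmFuzz code.

import random


def victim_deviation(start_time: float, spoof_offset: float) -> float:
    """Hypothetical surrogate for running the swarm control algorithm in
    simulation and measuring how far a victim drone is pushed toward an
    obstacle (larger is worse)."""
    # Toy objective: deviation peaks for a mid-mission attack with a
    # moderate spoofing offset.
    return -((start_time - 30.0) ** 2) / 100.0 - (spoof_offset - 5.0) ** 2 + 30.0


def gradient_guided_search(iters: int = 50, lr: float = 0.05, eps: float = 1e-3):
    """Climb the deviation objective using finite-difference gradients."""
    t = random.uniform(0.0, 60.0)   # initial spoofing start time (s)
    d = random.uniform(0.0, 10.0)   # initial spoofing offset (m)
    for _ in range(iters):
        # Finite-difference estimate of the objective's gradient.
        g_t = (victim_deviation(t + eps, d) - victim_deviation(t - eps, d)) / (2 * eps)
        g_d = (victim_deviation(t, d + eps) - victim_deviation(t, d - eps)) / (2 * eps)
        # Gradient ascent: move the attack parameters toward larger deviation.
        t += lr * g_t
        d += lr * g_d
    return t, d, victim_deviation(t, d)


if __name__ == "__main__":
    t, d, score = gradient_guided_search()
    print(f"spoofing start ~ {t:.1f}s, offset ~ {d:.1f}m, deviation score ~ {score:.1f}")
```

In the workflow described in the abstract, the objective would come from simulating the swarm control algorithm under the spoofed GPS input and measuring the victim drone's deviation, rather than from a closed-form surrogate as above.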