SAM: Foreseeing Inference-Time False Data Injection Attacks on ML-enabled Medical Devices

Mohammadreza Hallajiyan, Athish Pranav Dharmalingam, Gargi Mitra, Homa Alemzadeh, Shahrear Iqbal, and Karthik Pattabiraman. To appear in the Workshop on Cybersecurity in HealthCare (HealthSec), 2024, co-located with ACM CCS’24. [ PDF (coming soon) | Talk ]

Abstract: The increasing use of machine learning (ML) in medical systems necessitates robust security measures to mitigate potential threats. Current research often overlooks the risk of adversaries injecting false inputs through peripheral devices at inference time, causing mispredictions of patients’ conditions. These risks are hard to foresee and mitigate during the design phase, since the system is assembled by end users at the time of use. To address this gap, we introduce SAM, a technique that enables security analysts to perform System Theoretic Process Analysis for Security (STPA-Sec) on ML-enabled medical devices during the design phase. SAM models the medical system as a control structure, with the ML engine as the controller and peripheral devices as potential points of false data injection. It interfaces with state-of-the-art vulnerability databases and LLMs to automate the discovery of vulnerabilities and generate a list of possible attack paths. We demonstrate the usefulness of SAM through case studies on two FDA-approved medical devices: a blood glucose management system and a bone mineral density measurement software. SAM allows security analysts to expedite the security assessment of ML-enabled medical devices at the design phase. This proactive approach mitigates potential patient harm and reduces the costs associated with post-deployment security measures.
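To make the control-structure idea in the abstract concrete, here is a minimal illustrative sketch of how such a model might be represented: the ML engine as the controller and each peripheral as a potential false-data-injection point, with attack paths enumerated from known vulnerabilities. All class names, device names, and the CVE identifier are hypothetical; this is not SAM's actual implementation.

```python
# Hypothetical sketch of a control-structure model for an ML-enabled
# medical system (names are illustrative, not taken from the paper).
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    # Vulnerability identifiers, e.g. as retrieved from a CVE database.
    known_cves: list = field(default_factory=list)

@dataclass
class ControlStructure:
    controller: Component                             # the ML engine
    peripherals: list = field(default_factory=list)   # data sources feeding it

    def attack_paths(self):
        """List (peripheral, vulnerability, controller) triples where a
        vulnerable peripheral could inject false data at inference time."""
        return [
            (p.name, cve, self.controller.name)
            for p in self.peripherals
            for cve in p.known_cves
        ]

# Example modeled loosely on a blood glucose management system.
engine = Component("glucose-prediction-model")
meter = Component("bluetooth-glucose-meter", known_cves=["CVE-XXXX-1234"])  # placeholder ID
pump = Component("insulin-pump-link")
system = ControlStructure(controller=engine, peripherals=[meter, pump])

for path in system.attack_paths():
    print(path)
```

In a full STPA-Sec workflow, each enumerated path would then be assessed for the unsafe control actions it could trigger (e.g. an incorrect dosing recommendation).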
