Evaluating the Effect of Common Annotation Faults on Object Detection Techniques

Abraham Chan, Arpan Gujarati, Karthik Pattabiraman and Sathish Gopalakrishnan, To appear in the Proceedings of the IEEE International Symposium on Software Reliability Engineering (ISSRE), 2023. (Acceptance Rate: 29.5%) [ PDF | Talk ] (Code). Artifacts Available and Reviewed.

Abstract: Machine learning (ML) is applied in many safety-critical domains such as autonomous driving and medical diagnosis. Many ML applications in such domains require object detection, which includes both classification and localization, to provide additional context. To ensure high accuracy, state-of-the-art object detection systems require large quantities of correctly annotated images for training. However, creating such datasets is non-trivial, may involve significant human effort, and is hence inevitably prone to annotation faults. We evaluate the effect of such faults on object detection applications. We present ODFI, which can inject five different types of common annotation faults into any COCO-formatted dataset. We then use ODFI to inject these faults into two road traffic datasets and one medical X-ray imaging dataset. Finally, using these faulty datasets, we systematically evaluate and compare the efficacy of existing object detection techniques that are designed to be robust against such faults. To do so, we introduce a new metric that evaluates the robustness of object detection models in the presence of faults. We find that single-stage detectors trained with faulty annotations perform better in crowded scenes, that redundant bounding boxes have the least impact on robustness, and that ensembles have the highest overall robustness among the robust techniques.
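To make the idea of annotation fault injection concrete, the sketch below shows how one common fault type, an incorrect class label, could be injected into a COCO-formatted annotation set. This is a hypothetical illustration under assumed names (`inject_label_swaps`, the `fraction` parameter), not the actual ODFI implementation or its fault model:

```python
import copy
import random

def inject_label_swaps(coco, fraction=0.2, seed=0):
    """Swap the category of a random fraction of annotations.

    `coco` follows the COCO format: a dict with an "annotations" list
    (each entry carrying a "category_id") and a "categories" list
    (each entry carrying an "id"). Hypothetical sketch, not ODFI itself.
    """
    rng = random.Random(seed)
    faulty = copy.deepcopy(coco)  # leave the clean dataset untouched
    cat_ids = [c["id"] for c in faulty["categories"]]
    anns = faulty["annotations"]
    for ann in rng.sample(anns, k=int(len(anns) * fraction)):
        # Replace the label with a different, randomly chosen class.
        wrong = [c for c in cat_ids if c != ann["category_id"]]
        ann["category_id"] = rng.choice(wrong)
    return faulty

# Example: a tiny COCO-style dataset with two classes, all labelled "car".
dataset = {
    "categories": [{"id": 1, "name": "car"}, {"id": 2, "name": "person"}],
    "annotations": [
        {"id": i, "image_id": 1, "category_id": 1, "bbox": [0, 0, 10, 10]}
        for i in range(10)
    ],
}
faulty = inject_label_swaps(dataset, fraction=0.2)
```

The same pattern extends to the other fault types mentioned in the abstract, e.g. adding redundant bounding boxes would append duplicated entries to the "annotations" list instead of rewriting labels.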
