Surrogate Object Detection Explainer (SODEx) with YOLOv4 and LIME

Jonas Herskind Sejr*, Peter Schneider-Kamp, Naeem Ayoub

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › Peer-review

Abstract

Due to their impressive performance, deep neural networks have become a prevalent choice for object detection in images. Given the complexity of the neural network models used, users of these algorithms are typically given no hint as to how the objects were found. It remains unclear, for example, whether an object is detected based on what it looks like or based on the context in which it is located. We have developed an algorithm, Surrogate Object Detection Explainer (SODEx), that can explain any object detection algorithm using any classification explainer. We evaluate SODEx qualitatively and quantitatively by detecting objects in the COCO dataset with YOLOv4 and explaining these detections with LIME. This empirical evaluation not only demonstrates the value of explainable object detection, it also provides valuable insights into how YOLOv4 detects objects.
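The abstract gives no implementation details, but the surrogate idea it describes can be sketched: wrap a detection of interest as a binary classification problem ("is this object still detected in a perturbed image?") and hand that to a LIME-style explainer. The sketch below is illustrative only and under simplifying assumptions: uniform grid cells stand in for LIME's superpixels, a plain weighted least-squares fit stands in for the LIME library, and `sodex_explain`, `iou`, and the toy detector in the usage example are hypothetical names, not the paper's actual code.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def sodex_explain(image, detector, target_box, n_cells=4, n_samples=200, seed=0):
    """LIME-style surrogate explanation of a single detection.

    Grid cells stand in for superpixels: each sample switches a random
    subset of cells off (zeroed out), the detector is re-run, and the
    sample is labelled 1 if some returned box still overlaps the target
    detection (IoU > 0.5). A weighted linear surrogate is then fitted;
    its coefficients score how much each cell supports the detection.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ch, cw = h // n_cells, w // n_cells
    Z = rng.integers(0, 2, size=(n_samples, n_cells * n_cells))
    y = np.empty(n_samples)
    for i, z in enumerate(Z):
        perturbed = image.copy()
        for j, on in enumerate(z):
            if not on:
                r, c = divmod(j, n_cells)
                perturbed[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw] = 0
        boxes = detector(perturbed)
        y[i] = float(any(iou(b, target_box) > 0.5 for b in boxes))
    # Weight samples by closeness to the unperturbed image, as LIME does.
    dist = 1.0 - Z.mean(axis=1)              # fraction of cells removed
    sw = np.sqrt(np.exp(-(dist ** 2) / 0.25))
    X = np.hstack([np.ones((n_samples, 1)), Z])
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef[1:].reshape(n_cells, n_cells)  # per-cell importance map
```

With a real detector, `detector` would run YOLOv4 on the perturbed image and return its boxes; here any callable mapping an image to a list of `(x1, y1, x2, y2)` boxes works, e.g. a toy detector that fires whenever a bright patch is present, for which the returned importance map concentrates on the cells covering that patch.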
Original language: English
Journal: Machine Learning and Knowledge Extraction
Volume: 3
Issue number: 3
Pages (from-to): 662-671
DOIs
Publication status: Published - Aug 2021
