Explainable outlier detection: What, for Whom and Why?

Jonas Herskind Sejr*, Anna Schneider-Kamp

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Outlier algorithms are becoming increasingly complex. As a result, they become much less interpretable to the data scientists applying the algorithms in real-life settings and to the end-users relying on their predictions. We argue that outliers are context-dependent and, therefore, can only be detected via domain knowledge, algorithm insight, and interaction with end-users. As outlier detection is equivalent to unsupervised semantic binary classification, at the core of interpreting an outlier algorithm we find the semantics of the classes, i.e., the algorithm's conceptual outlier definition. We investigate current interpretable and explainable outlier algorithms: what they are, for whom they are intended, and what their value proposition is. We then discuss how interpretation, explanation, and user involvement have the potential to provide the missing link for bringing modern, complex outlier algorithms from computer science labs into real-life applications, as well as the challenges this induces.
Original language: English
Article number: 100172
Journal: Machine Learning With Applications
Volume: 6
Number of pages: 13
ISSN: 2666-8270
DOIs:
Publication status: Published - 15 Dec 2021

Keywords

  • Unsupervised outlier detection
  • Explainable artificial intelligence
