Explainable outlier detection: What, for Whom and Why?

Jonas Herskind Sejr*, Anna Schneider-Kamp

*Corresponding author

Publication: Contribution to journal › Journal article › Research › peer-reviewed


Abstract

Outlier algorithms are becoming increasingly complex. As a result, they become much less interpretable to the data scientists applying them in real-life settings and to the end-users consuming their predictions. We argue that outliers are context-dependent and can therefore only be detected via domain knowledge, algorithm insight, and interaction with end-users. As outlier detection is equivalent to unsupervised semantic binary classification, the semantics of the two classes, i.e., the algorithm's conceptual outlier definition, lie at the core of interpreting an outlier algorithm. We investigate current interpretable and explainable outlier algorithms: what they are, whom they serve, and what their value proposition is. We then discuss how interpretation, explanation, and user involvement have the potential to provide the missing link that brings modern, complex outlier algorithms from computer science labs into real-life applications, as well as the challenges this induces.
Original language: English
Article number: 100172
Journal: Machine Learning With Applications
Volume: 6
Number of pages: 13
ISSN: 2666-8270
DOI:
Status: Published - 15 Dec 2021
