Internal Evaluation of Unsupervised Outlier Detection

Henrique O. Marques*, Ricardo J.G.B. Campello, Jörg Sander, Arthur Zimek

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Although there is a large and growing literature that tackles the unsupervised outlier detection problem, the unsupervised evaluation of outlier detection results has remained virtually untouched. The so-called internal evaluation, based solely on the data and the assessed solutions themselves, is required if one wants to statistically validate (in absolute terms) or just compare (in relative terms) the solutions provided by different algorithms or by different parameterizations of a given algorithm in the absence of labeled data. However, in contrast to unsupervised cluster analysis, where indexes for internal evaluation and validation of clustering solutions have been conceived and shown to be very useful, in the outlier detection domain this problem has been notably overlooked. Here we discuss this problem and provide a solution for the internal evaluation of outlier detection results. Specifically, we describe an index called Internal, Relative Evaluation of Outlier Solutions (IREOS) that can evaluate and compare different candidate outlier detection solutions. Initially, the index is designed to evaluate binary solutions only, referred to as top-n outlier detection results. We then extend IREOS to the general case of non-binary solutions, consisting of outlier detection scorings. We also statistically adjust IREOS for chance and extensively evaluate it in several experiments involving different collections of synthetic and real datasets.
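The abstract does not detail how the adjustment for chance is carried out. Purely as a hedged illustration, adjustments of this kind commonly follow the standard form used for internal indexes such as the adjusted Rand index, normalizing the raw index value by its expected value under a null model (the specific expectation term used by IREOS may differ from this sketch):

    IREOS_adj = (IREOS - E[IREOS]) / (IREOS_max - E[IREOS])

Under this generic scheme, a value near 0 indicates a candidate solution that scores no better than chance, while a value near 1 indicates the best achievable score for the dataset at hand.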

Original language: English
Article number: 47
Journal: ACM Transactions on Knowledge Discovery from Data
Volume: 14
Issue number: 4
ISSN: 1556-4681
DOIs
Publication status: Published - 9 Jul 2020

Keywords

  • Outlier detection
  • unsupervised evaluation
  • validation
