On evaluation of outlier rankings and outlier scores

Erich Schubert, Remigius Wojdanowski, Arthur Zimek, Hans Peter Kriegel

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Outlier detection research currently focuses on developing new methods and on improving their computation time. Evaluation, however, remains rather heuristic, often considering only the precision of the top k results or the area under the ROC curve. These evaluation procedures do not allow for assessing the similarity between methods. Judging the similarity of, or correlation between, two rankings of outlier scores is an important question in itself, but it is also an essential step towards meaningfully building outlier detection ensembles, where this aspect has so far been completely ignored. In this study, our generalized view of evaluation methods allows us both to evaluate the performance of existing methods and to compare different methods w.r.t. their detection performance. Our new evaluation framework takes the class imbalance problem into consideration and offers new insights into the similarity and redundancy of existing outlier detection methods. As a result, the design of effective ensemble methods for outlier detection is considerably enhanced.
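To make the two heuristic evaluation measures named in the abstract concrete, here is a minimal sketch of precision at top k and ROC AUC computed from an outlier-score ranking. The labels and scores are hypothetical toy data, not from the paper; the AUC is computed via the Mann-Whitney rank statistic and assumes no tied scores.

```python
# Hypothetical toy example: binary ground truth (1 = outlier) and
# outlier scores from some detector (higher = more outlying).
labels = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
scores = [0.9, 0.3, 0.2, 0.8, 0.1, 0.4, 0.35, 0.7, 0.15, 0.05]

def precision_at_k(labels, scores, k):
    """Fraction of true outliers among the k highest-scored points."""
    top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return sum(labels[i] for i in top_k) / k

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic (assumes no score ties)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = {idx: r + 1 for r, idx in enumerate(order)}  # ranks start at 1
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = sum(ranks[i] for i in range(len(labels)) if labels[i] == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(precision_at_k(labels, scores, 3))  # all 3 top-ranked points are outliers -> 1.0
print(roc_auc(labels, scores))            # every outlier outranks every inlier -> 1.0
```

Both measures score this toy ranking as perfect, yet neither says anything about how similar two different detectors' rankings are — the gap the paper's framework addresses.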

Original language: English
Title of host publication: Proceedings of the 12th SIAM International Conference on Data Mining
Editors: Joydeep Ghosh, Huan Liu, Ian Davidson, Carlotta Domeniconi, Chandrika Kamath
Publication date: Dec 2012
Pages: 1047-1058
ISBN (Print): 9781611972320
ISBN (Electronic): 978-1-61197-282-5
DOIs
Publication status: Published - Dec 2012
Externally published: Yes
Event: 12th SIAM International Conference on Data Mining - Anaheim, United States
Duration: 26 Apr 2012 - 28 Apr 2012

Conference

Conference: 12th SIAM International Conference on Data Mining
Country/Territory: United States
City: Anaheim
Period: 26/04/2012 - 28/04/2012
Sponsor: American Statistical Association
