Abstract
Though numerous new clustering algorithms are proposed every year, the fundamental question of how to properly evaluate them has not been satisfactorily answered, and common procedures for evaluating a clustering result have several drawbacks. Here, we propose a system that could represent a step forward in addressing these open issues (though not resolving all of them) by bridging the gap between automatic evaluation, based on mathematical models or known class labels, and the actual human researcher. We introduce an interactive evaluation method in which clusters are first rated by the system with respect to their similarity to known results, and "new" results are fed back to the human researcher for inspection. The researcher can then validate and refine these results and add them back into the system to improve the evaluation result.
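The abstract describes an interactive loop: rate candidate clusters against known results, surface dissimilar ("new") ones to the researcher, and fold validated findings back into the knowledge base. The paper's abstract does not specify a similarity measure or implementation; the sketch below is a minimal illustration only, assuming a Jaccard-based rating and a hypothetical novelty threshold of 0.5.

```python
# Minimal sketch (not the authors' implementation) of the feedback loop the
# abstract describes: rate each candidate cluster by its best Jaccard
# similarity to already-known clusters, flag dissimilar ("new") clusters for
# human inspection, and fold validated clusters back into the known results.
# The Jaccard measure and the 0.5 threshold are illustrative assumptions.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two clusters given as sets of object IDs."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def rate_clusters(candidates, known, threshold=0.5):
    """Split candidates into 'redundant' (similar to a known result)
    and 'new' (to be shown to the researcher), with their best scores."""
    redundant, new = [], []
    for cluster in candidates:
        best = max((jaccard(cluster, k) for k in known), default=0.0)
        (redundant if best >= threshold else new).append((cluster, best))
    return redundant, new

# Toy usage: two known clusters, three candidate clusters.
known = [{1, 2, 3, 4}, {5, 6, 7}]
candidates = [{1, 2, 3}, {5, 6, 7, 8}, {9, 10, 11}]

redundant, new = rate_clusters(candidates, known)
for cluster, score in new:
    # In the proposed system the researcher would inspect, validate, and
    # possibly refine these results; here we simply accept them and re-add
    # them so later evaluations can use them as known results.
    known.append(cluster)

print(len(redundant), "redundant,", len(new), "flagged as new")
```

In the actual system the acceptance step would be a human decision rather than an automatic append; the point of the sketch is only the split into "known-like" and "new" clusters and the growing reference set.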
Original language | English |
---|---|
Journal | CEUR Workshop Proceedings |
Volume | 772 |
Pages (from-to) | 55-66 |
ISSN | 1613-0073 |
Publication status | Published - 2011 |
Externally published | Yes |
Event | 2nd Workshop on Discovering, Summarizing and Using Multiple Clusterings, Athens, Greece. Duration: 5 Sep 2011 → 5 Sep 2011 |
Conference
Conference | 2nd Workshop on Discovering, Summarizing and Using Multiple Clusterings |
---|---|
Country/Territory | Greece |
City | Athens |
Period | 05/09/2011 → 05/09/2011 |
Related datasets
- ELKI Multi-View Clustering Data Sets Based on the Amsterdam Library of Object Images (ALOI). Schubert, E. (Creator) & Zimek, A. (Creator), Zenodo, 30 Jun 2010. Dataset.