Evaluation of multiple clustering solutions

Hans Peter Kriegel, Erich Schubert, Arthur Zimek

Publication: Contribution to journal › Conference article › Research › peer review

Abstract

Though numerous new clustering algorithms are proposed every year, the fundamental question of the proper way to evaluate new clustering algorithms has not been satisfactorily answered. Common procedures for evaluating a clustering result have several drawbacks. Here, we propose a system that could represent a step forward in addressing open issues (though not resolving all of them) by bridging the gap between an automatic evaluation using mathematical models or known class labels and the actual human researcher. We introduce an interactive evaluation method where clusters are first rated by the system with respect to their similarity to known results and where "new" results are fed back to the human researcher for inspection. The researcher can then validate and refine these results and add them back into the system to improve the evaluation result.
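The abstract describes the evaluation loop only at a conceptual level. The following is a minimal sketch of such a loop, under assumptions not taken from the paper: clusters represented as sets of object IDs, the Jaccard coefficient as the similarity measure, a fixed novelty threshold, and a hypothetical `validate` callback standing in for the human researcher's inspection step.

```python
# Sketch of an interactive cluster-evaluation loop (assumptions: set-based
# clusters, Jaccard similarity, fixed threshold; not the paper's exact method).

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two clusters given as sets of object IDs."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def rate_clusters(result, known, threshold=0.5):
    """Split a clustering result into clusters matching known results and
    clusters that look new and should be shown to the researcher."""
    matched, novel = [], []
    for cluster in result:
        best = max((jaccard(cluster, k) for k in known), default=0.0)
        (matched if best >= threshold else novel).append((cluster, best))
    return matched, novel

def feedback_loop(results, known, validate):
    """Rate each result; clusters the researcher validates are added back
    into the set of known results, refining subsequent evaluations."""
    for result in results:
        _, novel = rate_clusters(result, known)
        for cluster, _ in novel:
            if validate(cluster):        # hypothetical human inspection step
                known.append(cluster)    # feed confirmed result back in
    return known
```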

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 772
Pages (from-to): 55-66
ISSN: 1613-0073
Status: Published - 2011
Published externally: Yes
Event: 2nd Workshop on Discovering, Summarizing and Using Multiple Clusterings - Athens, Greece
Duration: 5 Sep 2011 → 5 Sep 2011

Conference

Conference: 2nd Workshop on Discovering, Summarizing and Using Multiple Clusterings
Country/Territory: Greece
City: Athens
Period: 05/09/2011 → 05/09/2011
