Abstract
Though numerous new clustering algorithms are proposed every year, the fundamental question of how to properly evaluate them has not been satisfactorily answered. Common procedures for evaluating a clustering result have several drawbacks. Here, we propose a system that could represent a step forward in addressing these open issues (though not resolving all of them) by bridging the gap between an automatic evaluation using mathematical models or known class labels and the actual human researcher. We introduce an interactive evaluation method in which clusters are first rated by the system with respect to their similarity to known results, and "new" results are fed back to the human researcher for inspection. The researcher can then validate and refine these results and add them back into the system to improve the evaluation result.
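The following is a minimal sketch of the feedback loop described in the abstract. The cluster representation (sets of item IDs), the Jaccard similarity measure, and the 0.5 threshold are illustrative assumptions, not the paper's actual rating model.

```python
# Sketch of an interactive cluster-evaluation loop: rate candidate clusters
# against known results, route "new" clusters to a human researcher, and
# add validated clusters back into the knowledge base.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two clusters given as sets of item IDs."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rate_clusters(candidates, known, threshold=0.5):
    """Split candidates into 'matched' (similar to a known result) and
    'new' (to be inspected by the human researcher)."""
    matched, new = [], []
    for cluster in candidates:
        best = max((jaccard(cluster, k) for k in known), default=0.0)
        (matched if best >= threshold else new).append(cluster)
    return matched, new

# Interactive loop: clusters the researcher validates are added back, so
# later evaluation rounds can recognise them automatically.
known_results = [{1, 2, 3}, {4, 5, 6}]
candidates = [{1, 2, 3, 7}, {8, 9, 10}]
matched, needs_review = rate_clusters(candidates, known_results)
for cluster in needs_review:
    # A real system would present the cluster to the researcher here;
    # for the sketch we assume every reviewed cluster is accepted.
    known_results.append(cluster)
```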
Original language | English |
---|---|
Journal | CEUR Workshop Proceedings |
Volume | 772 |
Pages (from-to) | 55-66 |
ISSN | 1613-0073 |
Status | Published - 2011 |
Published externally | Yes |
Event | 2nd Workshop on Discovering, Summarizing and Using Multiple Clusterings - Athens, Greece Duration: 5 Sep 2011 → 5 Sep 2011 |
Conference
Conference | 2nd Workshop on Discovering, Summarizing and Using Multiple Clusterings |
---|---|
Country/Territory | Greece |
City | Athens |
Period | 05/09/2011 → 05/09/2011 |