Abstract
Semi-supervised classification methods are designed to use a very limited amount of labeled data for training and ultimately to assign labels to the vast majority of unlabeled data. Label propagation is one such technique: it assigns labels to those parts of the unlabeled data that are in some sense close to labeled examples, and then uses these predicted labels in turn to predict labels for more remote data. Here we propose not to propagate an immediate label decision to neighbors, but to propagate the label probability distribution instead. This way we keep more information and take the remaining uncertainty of the classifier into account. We employ a Bayesian scheme that is more straightforward than existing methods. As a consequence, we avoid propagating errors caused by decisions taken too early. A crisp decision can be derived from the propagated label distributions at will. We implement and test this strategy with a probabilistic k-nearest neighbor classifier, providing semi-supervised classification results comparable in quality to several state-of-the-art competitors while being more efficient in terms of computational resources. Furthermore, we establish a theoretical connection between the k-nearest neighbor classifier and density-based label propagation.
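The core idea — keeping full label distributions rather than committing to hard labels during propagation — can be sketched as follows. This is a minimal illustration written for this page, not the authors' implementation: the function name, the uniform prior, the simple neighbor-averaging update, and the fixed iteration count are all assumptions; the paper's actual Bayesian scheme and probabilistic k-NN classifier may differ in detail.

```python
import numpy as np

def propagate_distributions(X, y, n_classes, k=3, n_iters=20):
    """Propagate label *distributions* over a dataset via k nearest neighbors.

    X : (n, d) array of points.
    y : length-n integer labels, with -1 marking unlabeled points.
    Returns an (n, n_classes) array of per-point label distributions;
    a crisp decision can be derived at will via argmax over each row.
    """
    n = len(X)
    dist = np.full((n, n_classes), 1.0 / n_classes)   # uniform prior for unlabeled
    labeled = y >= 0
    dist[labeled] = np.eye(n_classes)[y[labeled]]     # one-hot for labeled points

    # Pairwise Euclidean distances, excluding each point from its own neighbors.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    nn = np.argsort(D, axis=1)[:, :k]                 # indices of k nearest neighbors

    for _ in range(n_iters):
        # Each unlabeled point takes the average of its neighbors' distributions,
        # so uncertainty is carried along instead of a premature hard label.
        new = dist[nn].mean(axis=1)
        dist[~labeled] = new[~labeled]
        dist[labeled] = np.eye(n_classes)[y[labeled]]  # clamp labeled points
    return dist
```

On two well-separated clusters with a single labeled point each, the distributions of points near a labeled example sharpen first and the information then spreads to more remote points, mirroring the propagation behavior described in the abstract.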
Original language | English |
---|---|
Article number | 102507 |
Journal | Information Systems |
Volume | 129 |
Number of pages | 12 |
ISSN | 0306-4379 |
DOIs | |
Publication status | Published - Mar 2025 |
Keywords
- Density-based learning
- k nearest neighbor classification
- Label propagation
- Semi-supervised learning
- Transductive learning