TY - CONF
T1 - Online Bin Covering with Frequency Predictions
AU - Berg, Magnus
AU - Kamali, Shahin
PY - 2024/6
Y1 - 2024/6
AB - We study the bin covering problem where a multiset of items from a fixed set S ⊆ (0, 1] must be split into disjoint subsets while maximizing the number of subsets whose contents sum to at least 1. We focus on the online discrete variant, where S is finite, and items arrive sequentially. In the purely online setting, we show that the competitive ratios of the best deterministic (and randomized) algorithms converge to 1/2 for large S, similar to the continuous setting. Therefore, we consider the problem under the prediction setting, where algorithms may access a vector of frequencies predicting the frequency of items of each size in the instance. In this setting, we introduce a family of online algorithms that perform near-optimally when the predictions are correct. Further, we introduce a second family of more robust algorithms that presents a tradeoff between the performance guarantees when the predictions are perfect and when predictions are adversarial. Finally, we consider a stochastic setting where items are drawn independently from any fixed but unknown distribution over S. Using results from the PAC-learnability of probabilities in discrete distributions, we introduce a purely online algorithm whose average-case performance is near-optimal with high probability for all finite sets S and all distributions over S.
KW - Bin Covering
KW - Learning-Augmented Algorithms
KW - Online Algorithms with Predictions
KW - PAC Learning
U2 - 10.4230/LIPIcs.SWAT.2024.10
DO - 10.4230/LIPIcs.SWAT.2024.10
M3 - Article in proceedings
AN - SCOPUS:85195389211
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - 19th Scandinavian Symposium on Algorithm Theory, SWAT 2024
A2 - Bodlaender, Hans L.
PB - Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
T2 - 19th Scandinavian Symposium on Algorithm Theory, SWAT 2024
Y2 - 12 June 2024 through 14 June 2024
ER -