TY - GEN
T1 - On the Resilience of Online Federated Learning to Model Poisoning Attacks Through Partial Sharing
AU - Lari, Ehsan
AU - Chakravarthi Gogineni, Vinay
AU - Arablouei, Reza
AU - Werner, Stefan
PY - 2024
Y1 - 2024
AB - We investigate the robustness of the recently introduced partial-sharing online federated learning (PSO-Fed) algorithm against model-poisoning attacks. To this end, we analyze the performance of the PSO-Fed algorithm in the presence of Byzantine clients, who may clandestinely corrupt their local models with additive noise before sharing them with the server. PSO-Fed can operate on streaming data and reduce the communication load by allowing each client to exchange only parts of its model with the server. Our analysis, considering a linear regression task, reveals that the convergence of PSO-Fed can be ensured in the mean sense, even when confronted with model-poisoning attacks. Our extensive numerical results support this claim and demonstrate that PSO-Fed mitigates Byzantine attacks more effectively than its state-of-the-art competitors. Our simulation results also reveal that, when model-poisoning attacks are present, there exists a non-trivial optimal step size for PSO-Fed that minimizes its steady-state mean-square error.
DO - 10.1109/icassp48485.2024.10447497
M3 - Article in proceedings
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 9201
EP - 9205
BT - ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
PB - IEEE
T2 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Y2 - 14 April 2024 through 19 April 2024
ER -