
Decentralized Online Learning (DOL) extends online learning to distributed networks. However, the limited local data available in decentralized settings degrades the accuracy of decisions or models relative to centralized methods. Motivated by the growing need to obtain high-precision models or decisions from data resources distributed across a network, we apply ensemble methods to build a superior model or decision while transferring only gradients or models. To bring boosting to distributed scenarios, we design a new boosting method: Boosting for Distributed Online Convex Optimization (BD-OCO). BD-OCO achieves the regret upper bound O(((M + N)/(MN)) T), where M is the number of nodes in the distributed network and N is the number of weak learners in each node. The core idea of BD-OCO is to use the local models to train a stronger global one. We evaluate BD-OCO on eight real-world datasets. The numerical results show that BD-OCO achieves excellent accuracy and convergence, and is robust to the size of the distributed network.
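The abstract's core idea, combining weak online learners at each node and sharing only model parameters across the network, can be illustrated with a minimal sketch. All names and details below (uniform averaging of weak learners, all-to-all parameter averaging, squared loss) are illustrative assumptions, not the authors' actual BD-OCO algorithm:

```python
import numpy as np

# Hypothetical sketch: each of M nodes holds N weak online learners; their
# predictions are averaged into a node-local boosted model, and nodes
# periodically average parameters so only models (not data) are transferred.

rng = np.random.default_rng(0)
M, N, D, T = 4, 3, 5, 200       # nodes, weak learners per node, features, rounds
eta = 0.1                        # step size for online gradient descent
w_true = rng.normal(size=D)      # ground-truth linear model for synthetic data

# weights[m, n] is the parameter vector of weak learner n on node m
weights = np.zeros((M, N, D))

losses = []
for t in range(T):
    x = rng.normal(size=D)
    y = w_true @ x
    for m in range(M):
        # boosted prediction: uniform average of the node's weak learners
        pred = weights[m].mean(axis=0) @ x
        losses.append(0.5 * (pred - y) ** 2)
        # each weak learner takes an online gradient step on the squared loss
        for n in range(N):
            grad = (weights[m, n] @ x - y) * x
            weights[m, n] -= eta * grad
    # communication step: all-to-all parameter averaging stands in for
    # whatever gossip scheme the actual network topology allows
    weights[:] = weights.mean(axis=0, keepdims=True)

# per-round loss should shrink as the boosted learners converge
print(np.mean(losses[-20 * M:]) < np.mean(losses[:20 * M]))
```

The communication step is the key design point the abstract emphasizes: nodes exchange only parameters, never raw local data.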

Publication history

Available online: 08 October 2022

Copyright

© The author(s) 2022

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
