

Boosting for Distributed Online Convex Optimization

Authors: Yuhan Hu 1, Yawei Zhao 2,3, and Deke Guo 1 (corresponding author)
1. Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
2. Medical Engineering Laboratory of Chinese PLA General Hospital
3. School of Cyberspace Security, Dongguan University of Technology, Dongguan 523000, China

Abstract

Decentralized Online Learning (DOL) extends online learning to the domain of distributed networks. However, the limited local data available at each node in decentralized settings reduces the accuracy of decisions or models compared with centralized methods. Given the growing need to obtain a high-precision model or decision from distributed data resources in a network, ensemble methods are applied to obtain a superior model or decision while transferring only gradients or models. A new boosting method, namely Boosting for Distributed Online Convex Optimization (BD-OCO), is designed to realize the application of boosting in distributed scenarios. BD-OCO achieves the regret upper bound $\mathcal{O}\left(\sqrt{\frac{M+N}{MN}T}\right)$, where $M$ measures the size of the distributed network and $N$ is the number of Weak Learners (WLs) in each node. The core idea of BD-OCO is to apply local models to train a strong global one. BD-OCO is evaluated on eight different real-world datasets. Numerical results show that BD-OCO achieves excellent accuracy and convergence, and is robust to the size of the distributed network.
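The setting the abstract describes, $M$ nodes each holding $N$ weak learners that are boosted locally and merged into a global model, can be sketched with a toy simulation. The code below is not the authors' BD-OCO algorithm; the residual-fitting update, the step size, the gossip-averaging step, and all variable names are illustrative assumptions, chosen only to show how per-node online gradient boosting and cross-node model averaging fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N, T, d = 4, 3, 200, 5    # nodes, weak learners per node, rounds, feature dim
eta = 0.05                   # weak-learner step size (illustrative choice)

w = np.zeros((M, N, d))      # w[m, n]: linear weak learner n at node m
w_star = rng.normal(size=d)  # hidden target model generating the data stream

losses = []
for t in range(T):
    for m in range(M):
        x = rng.normal(size=d)
        y = w_star @ x
        pred = 0.0
        for n in range(N):
            p_n = w[m, n] @ x
            residual = y - pred              # what earlier learners left unexplained
            # online gradient step on the squared loss against the residual
            w[m, n] -= eta * 2.0 * (p_n - residual) * x
            pred += p_n                      # staged ensemble prediction
        losses.append((pred - y) ** 2)
    # Communication round: nodes average their models (a simple gossip step),
    # so the local ensembles drift toward one shared global model.
    w = np.broadcast_to(w.mean(axis=0), w.shape).copy()

early = float(np.mean(losses[:50]))
late = float(np.mean(losses[-50:]))
```

In this sketch each weak learner fits the residual of the partial ensemble before it, which is the usual online-gradient-boosting recipe for squared loss; the averaging step stands in for whatever consensus protocol the paper actually uses. The average loss over the last 50 predictions drops well below the average over the first 50.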

Keywords: distributed Online Convex Optimization (OCO), online boosting, Online Gradient Boosting (OGB)

References (28)

[1]
W. Chen, Y. J. Wang, and Y. Yuan, Combinatorial multi-armed bandit: General framework and applications, in Proc. of the 30th Int. Conf. on Machine Learning (ICML), Atlanta, GA, USA, 2013, pp. 151–159.
[2]
B. L. Pereira, A. Ueda, G. Penha, R. L. T. Santos, and N. Ziviani, Online learning to rank for sequential music recommendation, in Proc. of the 13th ACM Conf. on Recommender Systems, Copenhagen, Denmark, 2019, pp. 237–245.
[3]
Y. W. Zhao, C. Yu, P. L. Zhao, H. L. Tang, S. Qiu, and J. Liu, Decentralized online learning: Take benefits from others’ data without sharing your own to track global trend, arXiv preprint arXiv: 1901.10593, 2019.
[4]
S. Shahrampour and A. Jadbabaie, Distributed online optimization in dynamic environments using mirror descent, IEEE Trans. Automat. Control, vol. 63, no. 3, pp. 714–725, 2018.
[5]
A. Koppel, S. Paternain, C. Richard, and A. Ribeiro, Decentralized online learning with kernels, IEEE Trans. Signal Process., vol. 66, no. 12, pp. 3240–3255, 2018.
[6]
N. Bastianello and E. Dall’Anese, Distributed and inexact proximal gradient method for online convex optimization, in Proc. 2021 European Control Conf. (ECC), Delft, the Netherlands, 2021, pp. 2432–2437.
[7]
C. Zhang, P. L. Zhao, S. J. Hao, Y. C. Soh, B. S. Lee, C. Y. Miao, and S. C. H. Hoi, Distributed multi-task classification: A decentralized online learning approach, Mach. Learn., vol. 107, no. 4, pp. 727–747, 2018.
[8]
T. G. Dietterich, Ensemble methods in machine learning, in Proc. Int. Workshop on Multiple Classifier Systems, Cagliari, Italy, 2000, pp. 1–15.
[9]
J. Tanha, Y. Abdi, N. Samadi, N. Razzaghi, and M. Asadpour, Boosting methods for multi-class imbalanced data classification: An experimental review, J. Big Data, vol. 7, p. 70, 2020.
[10]
N. C. Oza and S. J. Russell, Online bagging and boosting, in Proc. of the Eighth Int. Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 2001, pp. 229–236.
[11]
S. T. Chen, H. T. Lin, and C. J. Lu, An online boosting algorithm with theoretical justifications, in Proc. of the 29th Int. Conf. on Int. Conf. on Machine Learning, Edinburgh, UK, 2012, pp. 1873–1880.
[12]
A. Beygelzimer, E. Hazan, S. Kale, and H. P. Luo, Online gradient boosting, in Proc. of the 28th Int. Conf. on Neural Information Processing Systems, Montreal, Canada, 2015, pp. 2458–2466.
[13]
E. Hazan and K. Singh, Boosting for online convex optimization, in Proc. of the 38th Int. Conf. on Machine Learning, Vienna, Austria, 2021, pp. 4140–4149.
[14]
Y. Freund and R. E. Schapire, Experiments with a new boosting algorithm, in Proc. of the Thirteenth Int. Conf. on Int. Conf. on Machine Learning, Bari, Italy, 1996, pp. 148–156.
[15]
T. Zhang and B. Yu, Boosting with early stopping: Convergence and consistency, Ann. Statist., vol. 33, no. 4, pp. 1538–1579, 2005.
[16]
M. Raginsky, N. Kiarashi, and R. Willett, Decentralized online convex programming with local information, in Proc. of the 2011 American Control Conf., San Francisco, CA, USA, 2011, pp. 5363–5369.
[17]
A. Nedić, S. Lee, and M. Raginsky, Decentralized online optimization with global objectives and local communication, in Proc. 2015 American Control Conf. (ACC), Chicago, IL, USA, 2015, pp. 4497–4503.
[18]
J. Y. Jiang, W. P. Zhang, J. J. Gu, and W. W. Zhu, Asynchronous decentralized online learning, in Proc. of the 35th Int. Conf. on Neural Information Processing Systems, virtual, 2021, pp. 20185–20196.
[19]
E. Hazan, Introduction to online convex optimization, Foundations and Trends in Optimization, vol. 2, nos. 3&4, pp. 157–325, 2016.
[20]
S. Shalev-Shwartz, Online learning and online convex optimization, Foundat. Trends Mach. Learn., vol. 4, no. 2, pp. 107–194, 2012.
[21]
Y. Freund and R. E. Schapire, A decision-theoretic generalization of on-line learning and an application to boosting, J. Comput. Syst. Sci., vol. 55, no. 1, pp. 119–139, 1997.
[22]
J. H. Friedman, Greedy function approximation: A gradient boosting machine, Ann. Statist., vol. 29, no. 5, pp. 1189–1232, 2001.
[23]
T. Parag, F. Porikli, and A. Elgammal, Boosting adaptive linear weak classifiers for online learning and tracking, in Proc. 2008 IEEE Conf. on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 2008, pp. 1–8.
[24]
A. Beygelzimer, S. Kale, and H. P. Luo, Optimal and adaptive algorithms for online boosting, in Proc. of the 32nd Int. Conf. on Int. Conf. on Machine Learning, Lille, France, 2015, pp. 2323–2331.
[25]
Y. H. Jung, J. Goetz, and A. Tewari, Online multiclass boosting, in Proc. of the 31st Int. Conf. on Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 920–929.
[26]
X. B. An, C. Hu, G. Liu, and H. S. Lin, Distributed online gradient boosting on data stream over multi-agent networks, Signal Process., vol. 189, p. 108253, 2021.
[27]
J. C. Duchi, A. Agarwal, and M. J. Wainwright, Dual averaging for distributed optimization: Convergence analysis and network scaling, IEEE Trans. Automat. Control, vol. 57, no. 3, pp. 592–606, 2012.
[28]
D. Garber and E. Hazan, A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization, arXiv preprint arXiv: 1301.4666, 2013.

Publication history

Received: 06 September 2022
Revised: 18 September 2022
Accepted: 20 September 2022
Published: 06 January 2023
Issue date: August 2023