
Incremental Data Stream Classification with Adaptive Multi-Task Multi-View Learning

Jun Wang1, Maiwang Shi1, Xiao Zhang1(✉), Yan Li1, Yunsheng Yuan1, Chenglei Yang2, Dongxiao Yu1(✉)
1. School of Computer Science and Technology, Shandong University, Qingdao 266237, China
2. School of Software, Shandong University, Jinan 250101, China

Abstract

With the enhancement of data collection capabilities, massive streaming data have been accumulated in numerous application scenarios. In particular, classifying data streams collected from mobile sensors can be formalized as a multi-task multi-view learning problem, where each task comprises multiple views with shared features collected from multiple sensors. Existing incremental learning methods are often single-task and single-view, and therefore cannot learn shared representations across related tasks and views. To address these challenges, an adaptive multi-task multi-view incremental learning framework for data stream classification, called MTMVIS, is proposed. Specifically, an attention mechanism is first used to align the sensor data of the different views. In addition, MTMVIS adopts adaptive Fisher regularization from the multi-task multi-view perspective to overcome catastrophic forgetting in incremental learning. Experiments on two different datasets reveal that the proposed framework outperforms state-of-the-art baselines.

Keywords: mobile sensors, incremental learning, data stream classification, multi-task multi-view learning
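
The abstract does not include implementation details, so the following is only a rough Python sketch of the kind of attention-based view alignment it describes: each sensor view is encoded into a common space and the views are fused with learned attention weights. All names here (ViewAttentionFusion, hidden_dim, the example view dimensions) are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn


class ViewAttentionFusion(nn.Module):
    """Encodes each sensor view into a common space and combines the views
    with learned attention weights, yielding one shared representation.
    Illustrative sketch only, not the MTMVIS architecture."""

    def __init__(self, view_dims, hidden_dim=64):
        super().__init__()
        # One small encoder per view maps heterogeneous sensor features
        # (different dimensions and scales) into the same hidden space.
        self.encoders = nn.ModuleList(nn.Linear(d, hidden_dim) for d in view_dims)
        self.score = nn.Linear(hidden_dim, 1)  # scores each aligned view

    def forward(self, views):
        # views: list of tensors, one per sensor view, each of shape (batch, view_dim)
        aligned = torch.stack(
            [torch.tanh(enc(v)) for enc, v in zip(self.encoders, views)], dim=1
        )                                                     # (batch, n_views, hidden_dim)
        weights = torch.softmax(self.score(aligned), dim=1)   # (batch, n_views, 1)
        return (weights * aligned).sum(dim=1)                 # (batch, hidden_dim)


# Example with three hypothetical sensor views, e.g., accelerometer, gyroscope, magnetometer.
fusion = ViewAttentionFusion(view_dims=[6, 6, 3])
batch = [torch.randn(8, 6), torch.randn(8, 6), torch.randn(8, 3)]
shared = fusion(batch)  # (8, 64) shared features for the per-task classifier heads
```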
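The adaptive Fisher regularization mentioned in the abstract builds on the general idea of a Fisher-weighted quadratic penalty against parameter drift. The sketch below shows that generic form under the assumption of a diagonal Fisher estimate; the names (estimate_fisher, fisher_penalty, lam) and the fixed penalty weight are chosen here for illustration and do not reflect how MTMVIS adapts the regularization.

```python
import torch
import torch.nn.functional as F


def estimate_fisher(model, loader, device="cpu"):
    """Diagonal Fisher information estimate: average squared gradients of the
    loss over data from previously learned tasks."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    n_batches = max(len(loader), 1)
    return {n: f / n_batches for n, f in fisher.items()}


def fisher_penalty(model, fisher, old_params, lam=100.0):
    """Quadratic penalty that discourages changing parameters that were
    important (high Fisher value) for earlier tasks."""
    device = next(model.parameters()).device
    penalty = torch.zeros((), device=device)
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty


# When training on a new chunk of the stream, the total loss would be
# something like: loss = task_loss + fisher_penalty(model, fisher, old_params),
# where old_params holds detached copies of the parameters after the last task.
```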


Publication history

Received: 27 November 2022
Revised: 19 April 2023
Accepted: 25 April 2023
Published: 25 December 2023
Issue date: March 2024

Copyright

© The author(s) 2023.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
