Open Access

Towards Federated Learning Driving Technology for Privacy-Preserving Micro-Expression Recognition

School of Computer Science and Engineering, Macau University of Science and Technology, Macau 999078, China
Oulu School, Nanjing Institute of Technology, Nanjing 211167, China, and also with Key Laboratory of Child Development and Learning Science of Ministry of Education and Research Center for Learning Science, Southeast University, Nanjing 210096, China
Key Laboratory of Child Development and Learning Science of Ministry of Education, and also with School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China

Abstract

As mobile devices and sensor technology advance, their role in communication becomes increasingly indispensable. Micro-expression recognition, an invaluable non-verbal communication cue, has been extensively studied in human-computer interaction, sentiment analysis, and security. However, the sensitivity and privacy implications of micro-expression data pose significant challenges for centralized machine learning methods, raising serious concerns about privacy leakage during data collection and sharing. To address these limitations, we investigate a federated learning scheme tailored specifically to this task. Our approach prioritizes user privacy by employing federated optimization techniques, enabling the aggregation of clients’ knowledge in an encrypted space without compromising data privacy. By integrating established micro-expression recognition methods into our framework, we demonstrate that our approach not only ensures robust data protection but also maintains recognition performance comparable to that of non-privacy-preserving methods. To our knowledge, this is the first application of federated learning to the micro-expression recognition task.
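The aggregation step the abstract alludes to can be illustrated with a generic FedAvg-style sketch (in the spirit of McMahan et al.): each client trains locally and uploads only model parameters, which the server averages weighted by local dataset size. This is a minimal illustration, not the paper's actual implementation, and it omits the encryption layer the authors describe; all names (`fed_avg`, `client_weights`, `client_sizes`) are hypothetical.

```python
# Hypothetical FedAvg-style aggregation sketch; not the paper's implementation.

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model parameters, weighted by local dataset size.

    client_weights: list of dicts mapping parameter name -> list of floats.
    client_sizes:   list of local sample counts, one per client.
    """
    total = sum(client_sizes)
    # Initialize the aggregate with zeros shaped like the first client's model.
    agg = {name: [0.0] * len(vals) for name, vals in client_weights[0].items()}
    for weights, size in zip(client_weights, client_sizes):
        share = size / total  # larger local datasets contribute more
        for name, vals in weights.items():
            for i, v in enumerate(vals):
                agg[name][i] += share * v
    return agg

# Two clients with unequal data volumes: the larger client dominates the average.
clients = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
sizes = [3, 1]
print(fed_avg(clients, sizes))  # {'w': [1.5, 2.5]}
```

Because only parameters (not raw facial videos) leave each client, the sensitive micro-expression data itself is never centralized; the paper's scheme additionally performs this aggregation in an encrypted space.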

Tsinghua Science and Technology
Pages 2169-2183
Cite this article:
Wang M, Zhou L, Huang X, et al. Towards Federated Learning Driving Technology for Privacy-Preserving Micro-Expression Recognition. Tsinghua Science and Technology, 2025, 30(5): 2169-2183. https://doi.org/10.26599/TST.2024.9010098

Received: 08 February 2024
Revised: 11 May 2024
Accepted: 25 May 2024
Published: 29 April 2025
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
