Micro-Expression Recognition (MER) is a challenging task because the subtle changes occur over different action regions of the face. Changes in facial action regions are encoded as Action Units (AUs), and the AUs in a micro-expression can be viewed as actors in a cooperative group activity. In this paper, we propose a novel deep neural network model for objective class-based MER, which simultaneously detects AUs and aggregates AU-level features into a micro-expression-level representation through Graph Convolutional Networks (GCN). Specifically, we introduce two strategies in our AU detection module for more effective AU feature learning: an attention mechanism and a balanced detection loss function. With these two strategies, features for all AUs are learned in a unified model, eliminating the error-prone landmark detection step and the tedious separate training for each AU. Moreover, our model incorporates a tailored objective class-based AU knowledge graph, which guides the GCN in aggregating the AU-level features into a micro-expression-level feature representation. Extensive experiments on two tasks in MEGC 2018 show that our approach outperforms current state-of-the-art methods in MER. Additionally, we report our single-model micro-expression AU detection results.
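To make the aggregation step concrete, the sketch below shows a minimal GCN that propagates AU-level features over an AU relationship graph and pools them into a micro-expression-level representation. All names, layer sizes, and the mean-pooling step are illustrative assumptions for exposition, not the paper's exact architecture; the adjacency matrix stands in for the objective class-based AU knowledge graph.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AUGraphAggregator(nn.Module):
    """Hypothetical sketch: aggregate AU-level features into a
    micro-expression-level representation with a two-layer GCN."""

    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.gc1 = nn.Linear(in_dim, hid_dim)   # first graph-convolution weight
        self.gc2 = nn.Linear(hid_dim, hid_dim)  # second graph-convolution weight
        self.cls = nn.Linear(hid_dim, num_classes)

    @staticmethod
    def normalize(adj: torch.Tensor) -> torch.Tensor:
        # Standard symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}.
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = adj.sum(dim=-1).clamp(min=1e-6).pow(-0.5)
        return deg_inv_sqrt.unsqueeze(-1) * adj * deg_inv_sqrt.unsqueeze(0)

    def forward(self, au_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # au_feats: (batch, num_AUs, in_dim) features from the AU detection module
        # adj:      (num_AUs, num_AUs) adjacency from an AU knowledge graph,
        #           e.g. built from AU co-occurrence statistics (assumption)
        a = self.normalize(adj)
        h = F.relu(a @ self.gc1(au_feats))   # propagate AU features over the graph
        h = F.relu(a @ self.gc2(h))
        me_repr = h.mean(dim=1)              # pool AU nodes into one ME-level feature
        return self.cls(me_repr)             # objective class logits


# Toy usage: 12 AU nodes with 256-d features, 5 objective classes, batch of 4.
model = AUGraphAggregator(in_dim=256, hid_dim=128, num_classes=5)
au_feats = torch.randn(4, 12, 256)
adj = torch.rand(12, 12)
logits = model(au_feats, adj)  # shape: (4, 5)
```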