
Understanding Social Relationships with Person-Pair Relations

Hang Zhao, Haicheng Chen, Leilai Li, and Hai Wan
Guizhou Post and Telecommunications Planning and Design Institute Co., Ltd., Guiyang 550003, China
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen 518049, China

Abstract

Social relationship understanding infers the social relationships among individuals in a given scenario and has been shown to have wide practical value. However, existing methods infer the social relationship of each person pair in isolation, without considering the context-aware information shared by person pairs in the same scenario. Such context-aware information is pervasive in reality: the social relationships of different person pairs in a single scenario are often related to each other. For instance, if most person pairs in a scenario share the relationship "friends", the remaining pairs have a high probability of also being "friends", or of holding a similar coarse-level relationship such as "intimate". This context-aware information should therefore be exploited in social relationship understanding. To this end, this paper proposes a novel end-to-end trainable Person-Pair Relation Network (PPRN), a GRU-based graph inference network that first extracts visual and position information as person-pair features, then transfers these features over a fully connected social graph, and finally collects different kinds of person-pair information with different aggregators. Unlike existing methods, the message-passing mechanism of the graph model allows PPRN to infer the social relationships of all person pairs jointly rather than in isolation. Extensive experiments on the People In Social Context (PISC)-relation and People In Photo Album (PIPA)-relation datasets show the superiority of our method over existing methods.
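The joint inference idea described above can be illustrated with a minimal sketch: each person pair becomes a node in a fully connected graph, every node receives an aggregate (here, the mean) of the other nodes' states, and a GRU cell updates each node's state from that message. All names, dimensions, and the choice of a mean aggregator are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: h' = (1 - z) * h + z * tanh(W x + U (r * h))."""
    def __init__(self, dim, rng, scale=0.1):
        self.Wz, self.Uz = rng.normal(0, scale, (dim, dim)), rng.normal(0, scale, (dim, dim))
        self.Wr, self.Ur = rng.normal(0, scale, (dim, dim)), rng.normal(0, scale, (dim, dim))
        self.Wh, self.Uh = rng.normal(0, scale, (dim, dim)), rng.normal(0, scale, (dim, dim))

    def step(self, h, x):
        z = sigmoid(x @ self.Wz + h @ self.Uz)          # update gate
        r = sigmoid(x @ self.Wr + h @ self.Ur)          # reset gate
        h_tilde = np.tanh(x @ self.Wh + (r * h) @ self.Uh)
        return (1 - z) * h + z * h_tilde

def propagate(pair_feats, cell, steps=3):
    """Joint inference over a fully connected graph of person-pair nodes:
    each node's message is the mean of all other nodes' current states."""
    h = pair_feats.copy()
    n = len(h)
    for _ in range(steps):
        total = h.sum(axis=0)
        messages = (total - h) / (n - 1)                # mean over the other nodes
        h = cell.step(h, messages)
    return h

rng = np.random.default_rng(0)
dim = 8
pair_feats = rng.normal(size=(5, dim))                  # 5 person pairs, toy features
cell = GRUCell(dim, rng)
out = propagate(pair_feats, cell)
print(out.shape)                                        # (5, 8)
```

After propagation, each node's state reflects not only its own pair's features but also those of every other pair in the scene; a classifier head on each final state would then predict all relationships jointly rather than in isolation.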

Keywords: social relationship understanding, person-pair relations, Person-Pair Relation Network (PPRN)

Publication history

Received: 20 October 2021
Accepted: 03 November 2021
Published: 25 January 2022
Issue date: June 2022

Copyright

© The author(s) 2022.

Acknowledgements

This paper was supported by the National Natural Science Foundation of China (Nos. 61976232 and 51978675), the Humanities and Social Science Research Project of the Ministry of Education (No. 18YJCZH006), and the All-China Federation of Returned Overseas Chinese Research Project (No. 17BZQK216).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).