Open Access

Deep Learning-Based Classification of the Polar Emotions of "Moe"-Style Cartoon Pictures

Qinchen Cao, Weilin Zhang, and Yonghua Zhu
School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China.
Shanghai Film Academy, Shanghai University, Shanghai 200072, China.

Abstract

The cartoon animation industry has developed into a huge industrial chain with a large potential market spanning games, digital entertainment, and other industries. However, because cartoon materials are only coarsely classified, cartoon animators can hardly find relevant materials during the creation process. The polar emotions of cartoon materials are an important reference for creators, helping them easily obtain the pictures they need. Some methods for obtaining the emotions of cartoon pictures have been proposed, but most of them focus solely on expression recognition, while general emotion recognition methods perform poorly on cartoon materials. We propose a deep learning-based method to classify the polar emotions of cartoon pictures drawn in the "Moe" style. Based on the expression features of cartoon characters in this drawing style, we recognize the facial expressions of the cartoon characters and extract the scene and facial features of the cartoon images. We then correct the emotions obtained by expression recognition according to the scene features, and finally obtain the polar emotion of the corresponding picture. We constructed a dataset and performed verification tests on it, achieving an experimental accuracy of 81.9%. The experimental results demonstrate that our method is competitive.
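The pipeline the abstract describes — recognize the character's facial expression, extract scene features from the whole picture, then use the scene signal to correct the expression-based emotion — can be summarized in a short fusion sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the ResNet-18 backbones, the binary positive/negative head, the preprocessing, and the 0.7/0.3 fusion weights are all hypothetical stand-ins.

```python
# A minimal sketch of the two-branch pipeline described in the abstract.
# NOTE: not the authors' implementation. The ResNet-18 backbones, the
# binary head, and the 0.7/0.3 fusion weights are illustrative assumptions.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

NUM_POLAR = 2  # polar emotions: negative (0) / positive (1)

def make_branch() -> nn.Module:
    # A generic CNN classifier used as a stand-in for each branch.
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, NUM_POLAR)
    return net.eval()

face_net = make_branch()   # facial-expression branch (input: face crop)
scene_net = make_branch()  # scene-feature branch (input: full picture)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@torch.no_grad()
def polar_emotion(face_crop: Image.Image, full_image: Image.Image) -> str:
    # Expression prediction from the face crop, scene prediction from the
    # whole image; the scene score "corrects" the expression score.
    p_face = face_net(preprocess(face_crop).unsqueeze(0)).softmax(dim=1)
    p_scene = scene_net(preprocess(full_image).unsqueeze(0)).softmax(dim=1)
    fused = 0.7 * p_face + 0.3 * p_scene  # fusion weights are an assumption
    return ("negative", "positive")[int(fused.argmax())]
```

Weighted softmax fusion is only one way to realize the "correction by scene features" step the abstract mentions; a learned gating network or a rule-based override would fit the same description.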

Tsinghua Science and Technology
Pages 275-286
Cite this article:
Cao Q, Zhang W, Zhu Y. Deep Learning-Based Classification of the Polar Emotions of "Moe"-Style Cartoon Pictures. Tsinghua Science and Technology, 2021, 26(3): 275-286. https://doi.org/10.26599/TST.2019.9010035


Received: 21 July 2019
Accepted: 28 July 2019
Published: 12 October 2020
© The author(s) 2021.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
