
Exploiting Effective Facial Patches for Robust Gender Recognition

Jingchun Cheng, Yali Li, Jilong Wang, Le Yu, and Shengjin Wang
Tsinghua University, Beijing 100084, China.
China Mobile Information Security Center, Beijing 100084, China.

Abstract

Gender classification is an important task in automated face analysis. Most existing approaches for gender classification use only raw/aligned face images after face detection as input. These methods exhibit fair classification ability under constrained conditions, in which face images are acquired under similar illumination and with similar poses. Their performance may deteriorate when face images show drastic variations in pose and occlusion, as routinely encountered in real-world data. This degradation may be attributed to the sensitivity of the learned features to image translations. This work proposes to alleviate this sensitivity by introducing a majority voting procedure over multiple face patches. Specifically, we adopt a deep learning method based on multiple large patches. Several Convolutional Neural Networks (CNNs) are trained on individual, predefined patches that reflect various image resolutions and partial croppings. The decisions of the individual CNNs are aggregated through majority voting to obtain an accurate final gender classification. Extensive experiments are conducted on four gender classification databases: Labeled Faces in the Wild (LFW), CelebA, Color FERET, and the All-Age Faces database, a novel database collected by our group. Each individual patch is evaluated, and complementary patches are selected for voting. We show that the classification accuracy of our method is comparable with that of state-of-the-art systems, which validates the effectiveness of the proposed method.

Keywords: Convolutional Neural Network (CNN), gender classification, majority voting
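
The fusion step described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' released implementation; the patch coordinates, function names, and stand-in classifiers are assumptions for illustration only. It shows how binary decisions from several patch-specific classifiers (each of which would be a trained CNN in the paper) could be combined by majority voting.

```python
# Illustrative sketch (not the authors' code): majority voting over the
# binary gender predictions of several patch-specific classifiers.
# Patch definitions and names below are hypothetical examples.
import numpy as np

# Hypothetical patch set: (top, left, height, width) crops of an aligned face.
# An odd number of patches is used so that the vote can never be tied.
PATCHES = {
    "full_face":  (0,  0, 128, 128),
    "upper_half": (0,  0,  64, 128),
    "lower_half": (64, 0,  64, 128),
    "left_half":  (0,  0, 128,  64),
    "right_half": (0, 64, 128,  64),
}

def crop(image, box):
    """Extract one predefined patch from an aligned face image (H x W x C)."""
    top, left, h, w = box
    return image[top:top + h, left:left + w]

def majority_vote(patch_predictions):
    """Fuse per-patch binary decisions (e.g., 0 = female, 1 = male) by majority vote."""
    votes = np.asarray(patch_predictions)
    return int(votes.sum() * 2 > len(votes))

def classify_gender(image, patch_classifiers):
    """Run each patch-specific classifier on its crop, then vote on the result.

    `patch_classifiers` maps a patch name to a callable that takes the cropped
    patch and returns a binary label; in the paper each such callable would be
    a CNN trained on that particular patch.
    """
    decisions = [patch_classifiers[name](crop(image, box))
                 for name, box in PATCHES.items()]
    return majority_vote(decisions)

if __name__ == "__main__":
    # Stand-in classifiers (random guesses) just to exercise the voting logic.
    rng = np.random.default_rng(0)
    dummy = {name: (lambda patch, r=rng: int(r.integers(0, 2))) for name in PATCHES}
    face = rng.random((128, 128, 3))
    print("Predicted label:", classify_gender(face, dummy))
```

Because the patches overlap but emphasize different face regions and resolutions, their errors tend to be partially independent, which is what makes a simple vote effective; selecting an odd number of complementary patches also avoids tie-breaking.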


Publication history

Received: 02 January 2018
Accepted: 01 March 2018
Published: 24 January 2019
Issue date: June 2019

Copyright

© The author(s) 2019

Acknowledgements

This work was supported by the National High-Tech Research and Development (863) Program of China (No. 2012AA011004), the National Science and Technology Support Program (No. 2013BAK02B04), and the National Key Research and Development Plan (No. 2016YFB0801301).
