
Fabric Recognition Using Zero-Shot Learning

Feng Wang, Huaping Liu*, Fuchun Sun, Haihong Pan
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China.
College of Mechanical Engineering, Guangxi University, Nanning 530003, China.

Abstract

In this work, we use a deep learning method to tackle the Zero-Shot Learning (ZSL) problem in tactile material recognition by incorporating high-level semantic information into the training model. Our main technical contribution is an end-to-end deep learning framework for solving the tactile ZSL problem. In this framework, a Convolutional Neural Network (CNN) extracts spatial features and a Long Short-Term Memory (LSTM) network extracts temporal features from dynamic tactile sequences, and we develop a loss function suited to the ZSL setting. Experimental evaluations on publicly available datasets demonstrate the effectiveness of the proposed method.
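
As a rough illustration of the pipeline described above (a minimal sketch, not the authors' released implementation), the code below runs a CNN over each tactile frame, summarizes the frame sequence with an LSTM, projects the final hidden state into a semantic attribute space, and trains with a compatibility loss over the seen classes. It assumes PyTorch, 32x32 single-channel tactile frames, and hypothetical names such as TactileZSLNet and zsl_compatibility_loss.

    # Minimal sketch, assuming PyTorch and 32x32 single-channel tactile frames.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TactileZSLNet(nn.Module):
        def __init__(self, attr_dim=64, hidden_dim=128):
            super().__init__()
            # CNN: spatial features of each tactile frame
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # LSTM: temporal features over the frame sequence
            self.lstm = nn.LSTM(32 * 8 * 8, hidden_dim, batch_first=True)
            # Projection of the tactile embedding into the semantic (attribute) space
            self.proj = nn.Linear(hidden_dim, attr_dim)

        def forward(self, x):
            # x: (batch, T, 1, 32, 32) dynamic tactile sequence
            b, t = x.shape[:2]
            f = self.cnn(x.view(b * t, *x.shape[2:]))   # per-frame spatial features
            f = f.view(b, t, -1)                        # restore the sequence axis
            _, (h, _) = self.lstm(f)                    # last hidden state summarizes time
            return self.proj(h[-1])                     # embedding in attribute space

    def zsl_compatibility_loss(embeddings, labels, class_attrs):
        # Cross-entropy over compatibility scores between the predicted embedding
        # and the attribute vector of every seen class.
        scores = embeddings @ class_attrs.t()           # (batch, num_seen_classes)
        return F.cross_entropy(scores, labels)

At test time, an unseen material could then be assigned to the unseen class whose attribute vector scores highest, e.g., pred = (model(x) @ unseen_attrs.t()).argmax(1).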

Keywords: deep learning, Zero-Shot Learning (ZSL), fabric recognition, tactile recognition


Publication history

Received: 09 February 2018
Revised: 18 April 2018
Accepted: 27 April 2018
Published: 05 December 2019
Issue date: December 2019

Copyright

© The author(s) 2019

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Nos. 61673238, 61703284, and 61327809), in part by the Beijing Municipal Science and Technology Commission (No. D171100005017002), and in part by the Nanning Science Research and Technology Development Plan.
