Skill Learning for Human-Robot Interaction Using Wearable Device

Bin Fang*, Xiang Wei, Fuchun Sun, Haiming Huang, Yuanlong Yu, Huaping Liu
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China.
College of Information Engineering, Shenzhen University, Shenzhen 518060, China.
Fuzhou University, Fuzhou 350108, China.

Abstract

With the accelerating aging of the global population and rising labor costs, more service robots are needed to help people perform complex tasks, which makes human-robot interaction a particularly important research topic. To transfer human behavior skills to a robot effectively, this study realizes skill learning through our proposed wearable device. The robotic teleoperation system uses the wearable device for interactive demonstration by directly controlling the speeds of the robot's motors. We present a rotation-invariant dynamical-movement-primitive method for learning interaction skills, carry out robotic teleoperation demonstrations, and design imitation-learning experiments. The experimental human-robot interaction results confirm the effectiveness of the proposed method.

Keywords: interaction, skill learning, teleoperation, dynamical movement primitive
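
The method described in the abstract builds on the dynamical-movement-primitive (DMP) framework of Ijspeert, Nakanishi, and Schaal. As a rough illustration of that family of methods only, and not of the paper's rotation-invariant variant (whose details are not given on this page), the following minimal Python sketch fits a single-degree-of-freedom discrete DMP to one demonstrated trajectory and replays it toward a new goal. The class name, gains, basis-function settings, and the synthetic demonstration are assumptions chosen for readability.

```python
import numpy as np

# Minimal single-DoF discrete dynamical movement primitive (DMP) in the
# standard Ijspeert/Schaal formulation. Illustrative sketch of the DMP family
# the paper builds on, NOT the authors' rotation-invariant variant; all
# parameter names and values here are assumptions.

class DiscreteDMP:
    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        # Gaussian basis functions spaced along the canonical phase x in (0, 1].
        self.centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.widths = 1.0 / np.gradient(self.centers) ** 2
        self.weights = np.zeros(n_basis)

    def _psi(self, x):
        # Normalized basis activations at phase value x.
        psi = np.exp(-self.widths * (x - self.centers) ** 2)
        return psi / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        """Fit forcing-term weights to one demonstrated trajectory (imitation)."""
        self.y0, self.g = y_demo[0], y_demo[-1]
        self.tau = len(y_demo) * dt
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.linspace(0.0, 1.0, len(y_demo)))  # phase
        # Invert the transformation system to obtain the target forcing term.
        f_target = (self.tau ** 2) * ydd - self.alpha_z * (
            self.beta_z * (self.g - y_demo) - self.tau * yd)
        features = np.stack([self._psi(xi) for xi in x]) * (x * (self.g - self.y0))[:, None]
        self.weights, *_ = np.linalg.lstsq(features, f_target, rcond=None)

    def rollout(self, dt, goal=None):
        """Reproduce the learned movement, optionally toward a new goal."""
        g = self.g if goal is None else goal
        y, z, x, traj = self.y0, 0.0, 1.0, []
        for _ in range(int(round(self.tau / dt))):
            f = self._psi(x) @ self.weights * x * (g - self.y0)
            z += dt * (self.alpha_z * (self.beta_z * (g - y) - z) + f) / self.tau
            y += dt * z / self.tau
            x += dt * (-self.alpha_x * x) / self.tau
            traj.append(y)
        return np.array(traj)

# Example: learn a smooth reach from a synthetic demonstration, then
# generalize the same movement shape to a new goal position.
t = np.linspace(0.0, 1.0, 200)
demo = 10 * t ** 3 - 15 * t ** 4 + 6 * t ** 5          # smooth 0 -> 1 reach
dmp = DiscreteDMP()
dmp.fit(demo, dt=1.0 / 200)
reproduced = dmp.rollout(dt=1.0 / 200)                  # tracks the demo
generalized = dmp.rollout(dt=1.0 / 200, goal=0.5)       # same shape, new goal
```

Because the forcing term is scaled by the goal-to-start distance, the weights learned from one demonstration generalize the motion to new start and goal positions, which is what makes DMP-style representations attractive for imitation learning from teleoperated demonstrations such as those described in the abstract.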


Publication history

Received: 09 February 2018
Revised: 13 April 2018
Accepted: 27 April 2018
Published: 05 December 2019
Issue date: December 2019

Copyright

© The Author(s) 2019

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61503212, 61473089, U1613212, and 61327809), the Beijing Science and Technology Program (No. Z171100000817007), the German Research Foundation (DFG) in project Cross Modal Learning (No. NSFC 61621136008/DFG TRR-169), and the Suzhou Special Program (No. 2016SZ0219).
