Open Access Issue
Skill Learning for Human-Robot Interaction Using Wearable Device
Tsinghua Science and Technology 2019, 24 (6): 654-662
Published: 05 December 2019
Downloads: 94

With the accelerated aging of the global population and escalating labor costs, more service robots are needed to help people perform complex tasks; human-robot interaction is therefore a particularly important research topic. To effectively transfer human behavior skills to a robot, in this study we implemented skill-learning functions via our proposed wearable device. The robotic teleoperation system uses interactive demonstration via the wearable device, directly controlling the speed of the motors. We present a rotation-invariant dynamical-movement-primitive method for learning interaction skills. We also conducted robotic teleoperation demonstrations and designed imitation-learning experiments. The experimental human-robot interaction results confirm the effectiveness of the proposed method.
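The skill-learning component builds on dynamical movement primitives (DMPs). The sketch below reconstructs the standard one-dimensional DMP transformation system, not the paper's rotation-invariant variant; all parameter values are assumptions chosen for stability (critically damped, with beta = alpha / 4).

```python
# Minimal 1-D discrete Dynamical Movement Primitive (DMP) sketch.
# A spring-damper system pulls the state toward the goal g, while a
# phase-dependent forcing term (learned from demonstration in practice)
# shapes the trajectory along the way.

def dmp_rollout(y0, g, tau=1.0, alpha=25.0, dt=0.001, steps=2000, forcing=None):
    """Integrate the DMP transformation system with Euler steps."""
    beta = alpha / 4.0          # critical damping
    alpha_x = 3.0               # canonical-system decay rate (assumed)
    y, z, x = y0, 0.0, 1.0      # position, scaled velocity, phase
    traj = [y]
    for _ in range(steps):
        f = forcing(x) if forcing else 0.0      # zero forcing -> plain attractor
        z += dt / tau * (alpha * (beta * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)          # phase decays from 1 toward 0
        traj.append(y)
    return traj

traj = dmp_rollout(y0=0.0, g=1.0)
print(round(traj[-1], 3))  # converges toward the goal g = 1.0
```

With the forcing term set to zero the system is a pure point attractor; a learned `forcing` function is what encodes the demonstrated skill while goal convergence is preserved.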

Open Access Issue
Asynchronous Brain-Computer Interface Shared Control of Robotic Grasping
Tsinghua Science and Technology 2019, 24 (3): 360-370
Published: 24 January 2019
Downloads: 27

The control of a high-Degree-of-Freedom (DoF) robot to grasp a target in three-dimensional space using a Brain-Computer Interface (BCI) remains a very difficult problem. A synchronous BCI requires the user to perform the brain-activity task continuously according to a predefined paradigm; such a process is boring and fatiguing. Furthermore, the strategy of switching between robotic auto-control and BCI control is not very reliable, because the accuracy of Motor Imagery (MI) pattern recognition rarely reaches 100%. In this paper, an asynchronous BCI shared-control method is proposed for the high-DoF robotic grasping task. The proposed method combines BCI control and automatic robotic control, simultaneously taking the robotic vision feedback into account and revising unreasonable control commands. The user can easily control the system mentally and is only required to intervene, sending brain commands to the automatic control system at the appropriate time according to his or her experience. Two experiments are designed to validate our method: one illustrates the accuracy of MI pattern recognition in our asynchronous BCI system; the other is an online practical experiment in which the robot grasps a target while avoiding an obstacle under the asynchronous BCI shared-control method, improving the safety and robustness of the system.
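The core of such a shared-control scheme is an arbiter that lets the autonomous controller run by default and accepts a user's MI command only when it arrives with high classifier confidence. The sketch below is a hedged illustration of that idea; the confidence threshold and command names are assumptions, not the paper's tuned values or interface.

```python
# Hedged sketch of an asynchronous BCI shared-control arbiter: the robot
# follows its autonomous planner unless a sufficiently confident Motor
# Imagery (MI) command arrives, in which case the user's intent wins.

AUTO, LEFT, RIGHT = "auto", "left", "right"   # illustrative command set
CONF_THRESHOLD = 0.8                          # assumed minimum MI confidence

def arbitrate(auto_cmd, mi_cmd, mi_conf):
    """Return the command the robot should execute this control cycle."""
    if mi_cmd is not None and mi_conf >= CONF_THRESHOLD:
        return mi_cmd          # user intervenes at the appropriate time
    return auto_cmd            # otherwise trust the autonomous controller

print(arbitrate(AUTO, None, 0.0))    # no brain command -> autonomous
print(arbitrate(AUTO, LEFT, 0.95))   # confident MI command -> user's choice
print(arbitrate(AUTO, RIGHT, 0.4))   # low confidence ignored -> autonomous
```

Because the arbiter only demands attention when the user chooses to intervene, it avoids the fatigue of a synchronous paradigm while tolerating imperfect MI recognition.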

Open Access Research Article Issue
Brain-inspired multimodal learning based on neural networks
Brain Science Advances 2018, 4 (1): 61-72
Published: 25 November 2018
Downloads: 27

Modern computational models have leveraged advances in human brain research. This study addresses the problem of multimodal learning with the help of brain-inspired models. Specifically, a unified multimodal learning architecture is proposed based on deep neural networks, inspired by the biology of the visual cortex of the human brain. The unified framework is validated on two practical multimodal learning tasks: image captioning, involving visual and natural-language signals, and visual-haptic fusion, involving haptic and visual signals. Extensive experiments are conducted under the framework, and competitive results are achieved.
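A common way to fuse two modalities, as in the visual-haptic task, is to encode each one separately and concatenate the embeddings before a shared head. The pure-Python sketch below illustrates that late-fusion pattern; the layer sizes and weight values are arbitrary assumptions, standing in for the deep networks used in the paper.

```python
# Illustrative late-fusion sketch: separate per-modality encoders whose
# embeddings are concatenated into one fused representation.

def linear(x, W, b):
    """Dense layer with W given as rows of per-output weights."""
    return [sum(xi * wij for xi, wij in zip(x, row)) + bi
            for row, bi in zip(W, b)]

def relu(v):
    return [max(0.0, u) for u in v]

def fuse(visual, haptic, Wv, bv, Wh, bh):
    """Encode each modality, then concatenate the embeddings (late fusion)."""
    zv = relu(linear(visual, Wv, bv))   # visual embedding
    zh = relu(linear(haptic, Wh, bh))   # haptic embedding
    return zv + zh                      # concatenation feeds a shared head

# Tiny arbitrary weights: 3-D visual and 2-D haptic inputs, 2-D embeddings.
Wv, bv = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]
Wh, bh = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
fused = fuse([1.0, 2.0, 3.0], [0.5, -0.5], Wv, bv, Wh, bh)
print(len(fused))  # 4-D fused representation
```

In a real system each encoder would be a trained convolutional or recurrent network, but the fusion step itself is exactly this concatenation of modality-specific embeddings.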

Open Access Issue
Attitude Control of Rigid Body with Inertia Uncertainty and Saturation Input
Tsinghua Science and Technology 2017, 22 (1): 83-91
Published: 26 January 2017
Downloads: 25

In this paper, the attitude control problem of a rigid body is addressed, considering inertia uncertainty, bounded time-varying disturbances, angular-velocity-free measurement, and unknown non-symmetric input saturation. Using a mathematical transformation, the effects of the bounded time-varying disturbances, uncertain inertia, and saturated input are combined into a total disturbance. A novel finite-time observer is designed to estimate the unknown angular velocity and the total disturbance. For attitude control, an observer-based sliding-mode control protocol is proposed to force the system state to converge to the desired sliding-mode surface; finite-time stability is guaranteed via Lyapunov analysis. Finally, a numerical simulation is presented to illustrate the effective performance of the proposed sliding-mode control protocol.
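The pipeline described in the abstract can be summarized by a generic observer-based sliding-mode structure. The equations below use standard conventions (quaternion attitude, inertia matrix $J$), not necessarily the paper's exact notation or gains.

```latex
% Generic observer-based sliding-mode attitude control structure
% (standard conventions; a sketch, not the paper's exact design).
\begin{align}
  J\dot{\omega} &= -\omega^{\times} J\omega + \mathrm{sat}(u) + d(t)
      && \text{rigid-body dynamics with saturated input and lumped disturbance} \\
  \hat{\omega},\, \hat{d} &\leftarrow \text{finite-time observer}
      && \text{estimates of angular velocity and total disturbance} \\
  s &= \hat{\omega} + k\, q_v, \qquad k > 0
      && \text{sliding surface on estimated state and quaternion vector part} \\
  u &= -\hat{d} - k_1 s - k_2\, \mathrm{sgn}(s)
      && \text{sliding-mode law driving } s \to 0 \text{ in finite time}
\end{align}
```

The key design choice is that the control law consumes only observer outputs, which is what makes the scheme work without angular-velocity measurements.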
