Volume 2, Issue 4




Brain-Controlled Multi-Robot at Servo-Control Level Based on Nonlinear Model Predictive Control

Zhenge Yang1, Luzheng Bi1 (corresponding author), Weiming Chi1, Haonan Shi1, Cuntai Guan2
1 School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
2 School of Computer Science and Engineering, Nanyang Technological University, Singapore 639673, Singapore

Abstract

Using a brain-computer interface (BCI) rather than the limbs to control multiple robots (i.e., brain-controlled multi-robots) can assist people with disabilities in daily life better than a brain-controlled single robot can. For example, a person with disabilities can move with a brain-controlled wheelchair (the leader robot) while follower robots simultaneously transport objects. In this paper, we explore, for the first time, how to control the direction, speed, and formation of a brain-controlled multi-robot system (consisting of leader and follower robots) and propose a novel multi-robot predictive control framework (MRPCF) that can track users' control intents and ensure the safety of the robots. The MRPCF consists of a leader controller, a follower controller, and a formation planner. We also build a complete brain-controlled multi-robot physical system, again for the first time, and test the proposed system through human-in-the-loop physical experiments. The experimental results indicate that the proposed system can track users' direction, speed, and formation control intents while guaranteeing the safety of the robots. This work can promote the study of brain-controlled robots and multi-robot systems and provide novel insights into human-machine collaboration and integration.

Keywords: brain-computer interface, model predictive control, multi-robot system, human-machine collaboration
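The abstract outlines a leader-follower multi-robot system steered by nonlinear model predictive control. As a minimal illustrative sketch only (not the authors' MRPCF), the fragment below assumes unicycle-model robots and replaces a real NMPC solver with a brute-force search over constant control inputs: the follower's reference pose is a fixed offset expressed in the leader's body frame, and the predictive controller picks the velocity pair that minimizes the predicted tracking error over a short horizon. All function names, the candidate velocity grid, and parameter values are hypothetical.

```python
import math
from itertools import product

def unicycle_step(pose, v, w, dt):
    """Advance a unicycle-model pose (x, y, theta) by one time step
    under linear velocity v and angular velocity w."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def follower_reference(leader_pose, d, phi):
    """Desired follower pose: a point at distance d and bearing phi in the
    leader's body frame, sharing the leader's heading."""
    x, y, th = leader_pose
    return (x + d * math.cos(th + phi),
            y + d * math.sin(th + phi),
            th)

def predictive_action(pose, ref, dt=0.1, horizon=10):
    """Choose the (v, w) pair whose constant application over the horizon
    brings the predicted pose closest to the reference position.  A real
    NMPC solver optimizes a full input sequence under constraints; this
    constant-input grid search is a deliberately simplified stand-in."""
    candidates = product((0.0, 0.2, 0.4, 0.6), (-0.5, -0.25, 0.0, 0.25, 0.5))
    def cost(vw):
        p = pose
        for _ in range(horizon):
            p = unicycle_step(p, vw[0], vw[1], dt)
        return (p[0] - ref[0]) ** 2 + (p[1] - ref[1]) ** 2
    return min(candidates, key=cost)
```

For example, a follower holding station one meter directly behind the leader would track `follower_reference(leader_pose, 1.0, math.pi)`, recomputed each cycle as the leader moves.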



Publication history

Received: 07 September 2022
Revised: 19 September 2022
Accepted: 24 September 2022
Published: 30 December 2022
Issue date: December 2022

Copyright

© The author(s) 2022

Acknowledgements


This work was supported by the National Natural Science Foundation of China (No. 51975052).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
