
Visual localization for asteroid touchdown operation based on local image features

Yoshiyuki Anzai1, Takehisa Yairi1, Naoya Takeishi2, Yuichi Tsuda3, Naoko Ogawa3
1 The University of Tokyo, Tokyo 113-8654, Japan
2 RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan
3 Japan Aerospace Exploration Agency, Sagamihara 252-5210, Japan

Abstract

In an asteroid sample-return mission, accurate estimation of the spacecraft's position relative to the asteroid is essential for landing at the target point. During the Hayabusa and Hayabusa2 missions, the main part of the visual position estimation procedure was performed by human operators on Earth, based on a sequence of asteroid images acquired and downlinked by the spacecraft. Although this approach is still adopted in critical space missions, there is an increasing demand for automated visual position estimation to reduce the time and cost of human intervention. In this paper, we propose a method for estimating the position of the spacecraft relative to the asteroid during the descent phase of a touchdown operation from an image sequence, using state-of-the-art techniques of image processing, feature extraction, and structure from motion. We apply the method to real Ryugu images taken by Hayabusa2 at altitudes from 20 km down to 500 m. It is demonstrated that the method is practically useful at altitudes between 5 km and 1 km. This result indicates that our method could improve the efficiency of ground operations for global mapping and navigation during the touchdown sequence, whereas full automation and autonomous on-board estimation are beyond the scope of this study. Furthermore, we discuss the challenges of developing a completely automatic position estimation framework.

Keywords: touchdown, Hayabusa2, asteroid, visual navigation, structure from motion
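
The abstract describes a pipeline built on local image features and structure from motion. As a rough illustration only, and not the authors' implementation, the following minimal Python/OpenCV sketch estimates the relative camera pose between two consecutive asteroid images by matching SIFT features, fitting the essential matrix with RANSAC, and decomposing it into a rotation and a unit-scale translation direction. The camera intrinsics K and the image file names are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical pinhole intrinsics (focal length and principal point in pixels);
# a real application would use the calibrated camera model of the imager.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])

def relative_pose(img1_path, img2_path):
    """Estimate relative rotation R and unit-scale translation direction t
    between two images via SIFT matching and the essential matrix."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep unambiguous matches (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Robustly fit the essential matrix with RANSAC and decompose it.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Example usage with hypothetical file names:
# R, t = relative_pose("ryugu_t0.png", "ryugu_t1.png")
```

Note that monocular two-view geometry recovers translation only up to an unknown scale; in practice the absolute distance to the asteroid would have to come from an additional source, such as an altimeter measurement.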


Publication history

Received: 23 January 2020
Accepted: 23 March 2020
Published: 18 June 2020
Issue date: June 2020

Copyright

© Tsinghua University Press 2020

Acknowledgements

This work was partially supported by JSPS KAKENHI Grant No. 18H01628.
