Research Article | Publishing Language: Chinese | Open Access

Research on the applicability of image keypoints in underground mine environments

Jiawen WANG1, Chenfei LIAO1, Zhongqi ZHAO1, Zhipeng HUANG1, Peilong XIE1, Xinyi LIU1, Yanhu HOU1, Kehu YANG1,2 (corresponding author)
1. School of Artificial Intelligence, China University of Mining and Technology-Beijing, Beijing 100083, China
2. Key Laboratory of Mine Major Disaster Risk Monitoring and Early Warning Technology, National Mine Safety Administration, Beijing 100083, China

Abstract

Keypoint algorithms are a fundamental component of machine vision and play a crucial role in enhancing the visual perception capabilities of new mining equipment, with applications across a wide range of mining tasks. However, the unique characteristics of the underground mine environment, such as lighting variations, dust interference, lack of environmental texture, and repetitive texture structures, present significant challenges for keypoint algorithms. To evaluate the applicability of keypoints in underground mine environments, this paper constructed two datasets: a mine coal-wall image test dataset containing 20 sets of challenging coal-wall or tunnel-wall image sequences, and a tunnel inspection image dataset recording 589 frames captured by a wheeled robot during an inspection run. In comparative experiments, we evaluated a range of keypoint algorithms, including SIFT, ORB, SURF, AKAZE, L2-Net, HardNet, GeoDesc, SuperPoint, R2D2, and DISK. The experimental results show that deep learning-based keypoint algorithms exhibit superior overall performance, with R2D2 demonstrating a clear advantage over the other algorithms. Additionally, we evaluated the efficiency of the deep learning-based keypoint algorithms on low-power edge computing platforms, further validating their feasibility in industrial applications.
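
As an illustration of the kind of pairwise matching evaluation outlined above (a minimal sketch, not the authors' code), the Python snippet below detects SIFT keypoints in two mine images with OpenCV, keeps matches that pass Lowe's ratio test, and counts RANSAC inliers as a rough robustness score; the file names, feature count, and thresholds are placeholder assumptions.

import cv2
import numpy as np

# Load two frames of the same coal wall or tunnel wall (placeholder paths).
img1 = cv2.imread("coal_wall_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("coal_wall_b.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors with SIFT.
sift = cv2.SIFT_create(nfeatures=2000)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on 2-nearest-neighbour brute-force matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m[0] for m in knn if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

# Estimate a homography with RANSAC; the inlier count is a crude measure of
# how well the keypoints survive lighting, dust, and texture changes.
if len(good) >= 4:
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    inliers = int(mask.sum()) if mask is not None else 0
    print(f"{len(kp1)} keypoints, {len(good)} ratio-test matches, {inliers} RANSAC inliers")
else:
    print("Too few matches to estimate a homography")

A learned detector-descriptor such as SuperPoint or R2D2 would replace the SIFT step with a network forward pass, while the ratio test and RANSAC scoring downstream remain unchanged.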

CLC number: TD67; TP391.41    Document code: A    Article ID: 2096-2193(2025)03-0531-11

Journal of Mining Science and Technology
Pages 531-541
Cite this article:
WANG J, LIAO C, ZHAO Z, et al. Research on the applicability of image keypoints in underground mine environments. Journal of Mining Science and Technology, 2025, 10(3): 531-541. https://doi.org/10.19606/j.cnki.jmst.2025008

Received: 22 September 2024
Revised: 20 October 2024
Published: 30 June 2025
© The Author(s) 2025

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
