Open Access

Maximizing Depth of Graph-Structured Convolutional Neural Networks with Efficient Pathway Usage for Remote Sensing

Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China, and with State Key Laboratory of Satellite Ocean Environment Dynamics, Second Institute of Oceanography, Ministry of Natural Resources, Hangzhou 310012, China, and also with Daya Bay Observation and Research Station of Marine Risks and Hazards, Ministry of Natural Resources, Hangzhou 310012, China
Chongqing School, University of Chinese Academy of Sciences, Beijing 101408, China
Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China, and also with State Key Laboratory of Satellite Ocean Environment Dynamics, Second Institute of Oceanography, Ministry of Natural Resources, Hangzhou 310012, China

Abstract

Randomly Wired Neural Networks (RWNNs), which construct Convolutional Neural Networks (CNNs) from random graphs, have recently demonstrated efficient layer connectivity, but they can limit network depth, which in turn affects approximation ability, generalization, and robustness. In this work, we increase the depth of graph-structured CNNs while maintaining efficient pathway usage: a depth-first search builds the feature-extraction backbone, and the edges it does not traverse are reused as parameter-efficient skip connections. The proposed Efficiently Pathed Deep Network (EPDN) reaches the maximum depth a graph-based architecture allows without redundant node use, ensuring feature propagation with reduced connectivity. This deep structure, coupled with efficient pathway usage, enables nuanced feature extraction. EPDN is particularly well suited to remote sensing images, whose processing depends on resolving intricate spatial details: its deep backbone and efficient skip connections preserve low-level details and thereby enhance feature extraction. In addition, the remote-sensing-adapted EPDN variant can be viewed as a special case of a multistep method for solving an Ordinary Differential Equation (ODE), leveraging historical data for improved prediction. EPDN outperforms existing CNNs in generalization and robustness on image classification benchmarks and remote sensing tasks. The source code is publicly available at https://github.com/AnonymousGithubLink/EPDN.
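The construction described above can be made concrete with a small sketch. The following Python snippet is a minimal illustration under stated assumptions, not the authors' released implementation: function names such as random_dag and dfs_partition are hypothetical, and the random-DAG generator is a simple stand-in for whatever graph generator EPDN actually uses. It draws a random directed acyclic graph, runs a depth-first search to obtain a deep backbone ordering, keeps the traversed (tree) edges as the backbone, and treats the edges the search never follows as candidate skip connections.

```python
# Illustrative sketch only: partition a random graph's edges into a deep
# DFS backbone and leftover skip connections, as the abstract describes.
import random


def random_dag(num_nodes: int, edge_prob: float, seed: int = 0):
    """Erdos-Renyi-style random DAG: edges point from lower to higher index,
    and every node beyond the first gets at least one incoming edge so the
    whole graph is reachable from node 0."""
    rng = random.Random(seed)
    edges = set()
    for v in range(1, num_nodes):
        edges.add((rng.randrange(v), v))          # guarantee reachability
        for u in range(v):
            if rng.random() < edge_prob:
                edges.add((u, v))
    return edges


def dfs_partition(num_nodes: int, edges: set, source: int = 0):
    """Depth-first traversal from `source`. Edges followed by the traversal
    (tree edges) form the deep backbone; edges the traversal never follows
    become candidate skip connections."""
    adj = {u: [] for u in range(num_nodes)}
    for u, v in edges:
        adj[u].append(v)
    for u in adj:
        adj[u].sort()

    order, traversed, visited = [], set(), set()

    def visit(u):
        visited.add(u)
        order.append(u)
        for v in adj[u]:
            if v not in visited:
                traversed.add((u, v))
                visit(v)

    visit(source)
    skip_edges = edges - traversed
    return order, traversed, skip_edges


if __name__ == "__main__":
    edges = random_dag(num_nodes=8, edge_prob=0.4, seed=0)
    order, backbone, skips = dfs_partition(8, edges)
    print("DFS (depth) order :", order)
    print("backbone edges    :", sorted(backbone))
    print("skip connections  :", sorted(skips))
```

In a full network, each backbone edge would carry a convolutional block and each skip connection would merge an earlier node's features into a later node; aggregating several earlier states in this way is, as the abstract notes, what makes the remote-sensing-adapted variant resemble a multistep ODE method that reuses historical data.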

Tsinghua Science and Technology
Pages 1940-1953
Cite this article:
Wang D, Chen L, Gong F, et al. Maximizing Depth of Graph-Structured Convolutional Neural Networks with Efficient Pathway Usage for Remote Sensing. Tsinghua Science and Technology, 2025, 30(5): 1940-1953. https://doi.org/10.26599/TST.2024.9010102


Received: 27 March 2024
Revised: 09 May 2024
Accepted: 04 June 2024
Published: 29 April 2025
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
