Open Access

AMTrans: Auto-Correlation Multi-Head Attention Transformer for Infrared Spectral Deconvolution

College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Nanjing 210003, China
Jiangsu Province Key Lab on Image Processing and Image Communication, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
National Engineering Research Center of Communication and Network Technology, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
School of Computer Science and Technology, Hainan University, Haikou 570228, China

Abstract

Infrared spectroscopy has found widespread application in many fields owing to advances in technology and industry convergence. Deconvolution is a crucial preprocessing step for improving the quality and reliability of infrared spectroscopy signals. Inspired by the transformer model, we propose an Auto-correlation Multi-head attention Transformer (AMTrans) for infrared spectrum sequence deconvolution. The auto-correlation attention model improves on the scaled dot-product attention of the transformer: it uses the attention mechanism for feature extraction and implements the attention computation with the auto-correlation function. This design exploits the inherently sequential nature of spectral data and effectively recovers spectra by capturing auto-correlation patterns in the sequence. The proposed model is trained with supervised learning and demonstrates promising results in infrared spectroscopic restoration. Comparative experiments with other deconvolution techniques show that the method achieves excellent deconvolution performance and effectively recovers the texture details of the infrared spectrum.
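The abstract's central idea, replacing scaled dot-product attention with an attention score computed from the auto-correlation function, can be sketched roughly as follows. This is a minimal, hypothetical single-head sketch: the function name `autocorrelation_attention`, the `top_k` lag selection, and the FFT-based lag scoring are our assumptions (in the style of Autoformer-like auto-correlation attention), not the paper's actual AMTrans layer.

```python
import numpy as np

def autocorrelation_attention(q, k, v, top_k=4):
    """Hypothetical single-head auto-correlation attention sketch.

    q, k, v: arrays of shape (L, d) for a length-L spectral sequence.
    Instead of pairwise dot-product scores, each time lag is scored by
    the q-k auto-correlation (computed efficiently via FFT using the
    Wiener-Khinchin relation); the top-k lags are kept, and the output
    aggregates time-rolled copies of v weighted by softmax of the scores.
    """
    L = q.shape[0]
    # Zero-pad to 2L to avoid circular wrap-around in the correlation.
    fq = np.fft.rfft(q, n=2 * L, axis=0)
    fk = np.fft.rfft(k, n=2 * L, axis=0)
    # Cross-correlation per lag, averaged over the feature dimension -> (L,)
    corr = np.fft.irfft(fq * np.conj(fk), n=2 * L, axis=0)[:L].mean(axis=-1)
    # Keep the top-k most correlated lags and softmax-normalize their scores.
    lags = np.argsort(corr)[-top_k:]
    w = np.exp(corr[lags] - corr[lags].max())
    w /= w.sum()
    # Aggregate lag-shifted values, weighted by the normalized scores.
    return sum(wi * np.roll(v, -int(lag), axis=0) for wi, lag in zip(w, lags))
```

Selecting whole lags rather than pairwise positions is what lets such a layer exploit repeating structure along the spectral sequence at sub-quadratic cost.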


Tsinghua Science and Technology
Pages 1329-1341
Cite this article:
Gao L, Cui L, Chen S, et al. AMTrans: Auto-Correlation Multi-Head Attention Transformer for Infrared Spectral Deconvolution. Tsinghua Science and Technology, 2025, 30(3): 1329-1341. https://doi.org/10.26599/TST.2024.9010131


Received: 09 February 2024
Revised: 05 June 2024
Accepted: 18 July 2024
Published: 30 December 2024
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
