Transformer and GAN Based Super-Resolution Reconstruction Network for Medical Images

Weizhi Du1 and Shihao Tian2
1 Arts & Sciences College, Washington University in St. Louis, St. Louis, MO 63130, USA
2 Department of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14850, USA

Abstract

Super-resolution reconstruction is increasingly in demand in medical imaging, where high-quality images must be obtained at minimal radiation dose, as in low-field magnetic resonance imaging (MRI). However, super-resolution reconstruction of medical images remains difficult because of anatomical complexity and the fine texture required for diagnostic purposes. In this paper, we propose a deep-learning strategy for reconstructing medical images from low-resolution inputs using a Transformer combined with a generative adversarial network (T-GAN). By inserting a Transformer into the generative adversarial network used for image reconstruction, the integrated model extracts more precise texture information and, through global image matching, focuses on the most important regions. Furthermore, we train the proposed T-GAN with a multi-task loss defined as a weighted combination of content loss, adversarial loss, and adversarial feature loss. Measured by established metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), our proposed T-GAN achieves the best performance and recovers more texture detail in super-resolution reconstruction of knee and abdominal MRI scans.
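
The abstract describes inserting a Transformer into the generator of a GAN so that global self-attention can match texture across the whole image. The paper's exact architecture is not reproduced here; the following PyTorch sketch only illustrates that general idea, and the layer sizes, the 2x upscaling factor, and the single-channel MRI input are assumptions rather than the authors' configuration.

import torch
import torch.nn as nn

class TransformerSRGenerator(nn.Module):
    # Speculative sketch: convolutional feature extraction, a Transformer
    # encoder applying global self-attention over all spatial positions,
    # then a PixelShuffle upsampling tail, standing in for the T-GAN generator.
    def __init__(self, channels=64, heads=4, layers=2):  # assumed sizes
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)  # 1-channel MRI slice
        block = nn.TransformerEncoderLayer(d_model=channels, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(block, num_layers=layers)
        self.tail = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),  # 2x super-resolution, an assumed factor
            nn.Conv2d(channels, 1, 3, padding=1),
        )
    def forward(self, x):
        f = self.head(x)                       # B, C, H, W
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # B, H*W, C token sequence
        tokens = self.transformer(tokens)      # global attention across the image
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.tail(f)

Full self-attention over every pixel is memory-hungry for large slices; a practical implementation would attend over patches or windows, but this sketch keeps the global-matching idea the abstract emphasizes.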

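The abstract also states that training uses a multi-task loss: a weighted sum of content loss, adversarial loss, and adversarial feature loss. The weights and the exact form of each term are not given, so the sketch below is one plausible reading rather than the published formulation: L1 pixel loss for the content term, a standard generator loss for the adversarial term, and discriminator feature matching for the adversarial feature term, with placeholder weights.

import torch
import torch.nn as nn

class MultiTaskSRLoss(nn.Module):
    # Hypothetical weighted multi-task generator loss; the three weights
    # below are placeholders, not values from the paper.
    def __init__(self, w_content=1.0, w_adv=1e-3, w_feat=1e-2):
        super().__init__()
        self.w_content, self.w_adv, self.w_feat = w_content, w_adv, w_feat
        self.l1 = nn.L1Loss()
        self.bce = nn.BCEWithLogitsLoss()
    def forward(self, sr, hr, d_logits_sr, d_feats_sr, d_feats_hr):
        content = self.l1(sr, hr)  # pixel-wise content loss
        adv = self.bce(d_logits_sr, torch.ones_like(d_logits_sr))  # fool the discriminator
        feat = sum(self.l1(s, h) for s, h in zip(d_feats_sr, d_feats_hr))
        return self.w_content * content + self.w_adv * adv + self.w_feat * feat
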
Keywords: Transformer, generative adversarial network (GAN), super-resolution, image reconstruction
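
PSNR and SSIM, the two evaluation metrics named above, have standard definitions; one common way to compute them is via scikit-image (the paper does not state which implementation it used, so this is an assumption):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(sr, hr):
    # sr, hr: 2-D grayscale arrays of the same shape and intensity range.
    rng = float(hr.max() - hr.min())
    psnr = peak_signal_noise_ratio(hr, sr, data_range=rng)  # in dB; higher is better
    ssim = structural_similarity(hr, sr, data_range=rng)    # in [-1, 1]; higher is better
    return psnr, ssim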


Publication history

Received: 17 October 2022
Revised: 21 December 2022
Accepted: 29 December 2022
Published: 21 August 2023
Issue date: February 2024

Copyright

© The author(s) 2024.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
