
Saliency-based image correction for colorblind patients

Jinjiang Li1,2, Xiaomei Feng1,2, Hui Fan1,2
1 School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
2 Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Yantai 264005, China

Abstract

Improper functioning or absence of human cone cells leads to vision defects, making it impossible for affected persons to distinguish certain colors. Colorblind persons have color perception, but their ability to capture color information differs from that of normal people: colorblind and normal people perceive the same image differently. It is therefore necessary to devise solutions that help persons with color blindness understand images and distinguish different colors. Most research on this subject aims to adjust insensitive colors so that colorblind persons can better capture color information, but ignores the attention colorblind persons pay to the salient areas of images. The areas of an image seen as salient by normal people generally differ from those seen by the colorblind. To provide the same saliency for colorblind persons and normal people, we propose a saliency-based image correction algorithm for color blindness. Colors in the corrected image are harmonious and realistic, and the method is practical. Our experimental results show that this method effectively improves images, enabling the colorblind to see the same salient areas as normal people.

Keywords: saliency, color vision, colorblindness, color correction
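
The full text is not reproduced on this page, but the core idea stated in the abstract, namely that the saliency map perceived by a colorblind observer can differ from that of a normal observer and that colors should be adjusted until the two agree, can be illustrated with a small sketch. The following Python snippet is only a conceptual illustration, not the authors' algorithm: it combines a Viénot-style linear protanopia simulation with a crude global-contrast saliency proxy (both are our own assumptions) to quantify how much the salient regions of an image diverge once color blindness is simulated.

```python
# Conceptual sketch (not the paper's method): simulate a protanope's view of an
# image, compute a simple saliency proxy for both views, and measure how much
# the salient regions diverge. Matrices and the saliency proxy are common
# approximations chosen for illustration only; a real pipeline would also
# remove sRGB gamma before the linear transforms.
import numpy as np

# Approximate RGB -> LMS matrix (Hunt-Pointer-Estevez style) and its inverse.
RGB2LMS = np.array([[17.8824, 43.5161, 4.1194],
                    [ 3.4557, 27.1554, 3.8671],
                    [ 0.0300,  0.1843, 1.4671]])
LMS2RGB = np.linalg.inv(RGB2LMS)

# Protanopia simulation in LMS space (linear approximation).
PROTAN = np.array([[0.0, 2.02344, -2.52581],
                   [0.0, 1.0,      0.0    ],
                   [0.0, 0.0,      1.0    ]])

def simulate_protanopia(img):
    """img: HxWx3 float array in [0, 1]; returns the simulated dichromat view."""
    lms = img @ RGB2LMS.T
    lms_sim = lms @ PROTAN.T
    return np.clip(lms_sim @ LMS2RGB.T, 0.0, 1.0)

def saliency_proxy(img):
    """Crude global-contrast saliency: per-pixel distance from the mean color."""
    diff = img - img.reshape(-1, 3).mean(axis=0)
    sal = np.sqrt((diff ** 2).sum(axis=2))
    return sal / (sal.max() + 1e-8)

def saliency_mismatch(img):
    """Mean absolute difference between the saliency of the original image and
    the saliency of its simulated colorblind view."""
    return np.abs(saliency_proxy(img)
                  - saliency_proxy(simulate_protanopia(img))).mean()

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)   # placeholder image
    print("saliency mismatch:", saliency_mismatch(img))
```

A correction method in the spirit of the abstract would then search for a recoloring that drives this mismatch toward zero while keeping the adjusted colors natural.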


Publication history

Received: 26 February 2020
Revised: 26 February 2020
Accepted: 31 March 2020
Published: 10 June 2020
Issue date: June 2020

Copyright

© The Author(s) 2020

Acknowledgements

The authors acknowledge the National Natural Science Foundation of China (Grant Nos. 61772319, 61976125, 61873177, and 61773244) and the Shandong Natural Science Foundation of China (Grant No. ZR2017MF049). We thank the editors and anonymous reviewers for their comments.

Rights and permissions

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
