
Picture Quality Assessment-Based Rate Control for Variable Bandwidth Networks

Ling Tian, Jiaxin Li, Yimin Zhou (corresponding author), and Hongyu Wang
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.

Abstract

The growing popularity of Internet applications and services has made high subjective video quality crucial to the user experience. The increasing demand for higher video resolution and greater transmission bandwidth makes it challenging to balance video quality against coding cost. In this paper, we propose a Perceptive Variable Bit-Rate Control (PVBRC) framework for the state-of-the-art video coding standard, High-Efficiency Video Coding (HEVC)/H.265. PVBRC allocates a bit-rate to each picture based on a Comprehensive Picture Quality Assessment (CPQA) model and perceptive target bit-rate allocation. The CPQA model computes the objective and perceptive quality of both the source and reconstructed pictures with reference to the human visual system, and the coding bit-rate is then dynamically allocated from the CPQA result according to differences in picture content. In PVBRC, the quantization parameter for the current picture is updated by an effective fuzzy logic controller to satisfy the transmission requirements of the Internet of Things. Experimental results show that the proposed PVBRC achieves an average bit saving of 11.49% compared with constant bit-rate control at the same objective and subjective video quality.

Keywords: variable bit rate, picture quality assessment, rate control, networking bandwidth
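
To make the control loop described in the abstract concrete, the following minimal sketch shows a per-picture cycle in which a perceptual quality score drives bit allocation and a fuzzy-style rule set maps the bit-budget mismatch to a quantization parameter (QP) step. All function names, coefficients, thresholds, and the sample inputs are illustrative assumptions; they do not reproduce the authors' CPQA model or their fuzzy logic controller.

    # Illustrative sketch of a perceptive variable bit-rate control loop.
    # Names, coefficients, and thresholds are assumptions for illustration,
    # not the PVBRC implementation described in the paper.

    def allocate_target_bits(base_bits: float, quality: float, complexity: float) -> float:
        """Give perceptually demanding pictures more bits, easy ones fewer.
        quality and complexity are assumed to be normalized to [0, 1]."""
        return base_bits * (1.0 + 0.5 * complexity - 0.5 * quality)

    def fuzzy_qp_update(qp: int, mismatch: float) -> int:
        """Toy fuzzy-style rule set: map the relative mismatch between spent
        and target bits to a small QP correction."""
        if mismatch > 0.15:        # far over budget -> coarser quantization
            step = 2
        elif mismatch > 0.05:
            step = 1
        elif mismatch < -0.15:     # well under budget -> finer quantization
            step = -2
        elif mismatch < -0.05:
            step = -1
        else:
            step = 0
        return min(51, max(0, qp + step))  # clamp to the HEVC QP range

    if __name__ == "__main__":
        qp, budget = 32, 100_000.0  # assumed initial QP and per-picture bit budget
        # (quality, complexity, bits actually spent) triples standing in for
        # CPQA scores and encoder feedback.
        pictures = [(0.9, 0.2, 80_000), (0.5, 0.8, 140_000), (0.7, 0.5, 105_000)]
        for quality, complexity, spent in pictures:
            target = allocate_target_bits(budget, quality, complexity)
            qp = fuzzy_qp_update(qp, (spent - target) / target)
            print(f"target={target:.0f} bits, spent={spent}, next QP={qp}")

In the paper's actual framework, the quality score comes from the CPQA model applied to the source and reconstructed pictures, and the QP correction comes from a fuzzy logic controller rather than the fixed thresholds used in this sketch.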


Publication history

Received: 29 July 2017
Revised: 07 September 2017
Accepted: 20 September 2017
Published: 16 August 2018
Issue date: August 2018

Copyright

© The authors 2018

Acknowledgements

This work was supported by the Foundation of the Science and Technology Department of Sichuan Province (Nos. 2017JY0007 and 2017HH0075).
