Regular Paper

Huge Page Friendly Virtualized Memory Management

School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China
Peng Cheng Laboratory, Shenzhen 518052, China
Shenzhen Key Laboratory for Cloud Computing Technology & Applications, School of Electronic and Computer Engineering, Peking University Shenzhen, Shenzhen 518000, China
Department of Computer Science, Michigan Technological University, Michigan 49246, U.S.A.

Abstract

With the rapid increase in memory consumption by applications running in cloud data centers, more efficient memory management is needed in virtualized environments. Exploiting huge pages becomes more critical to a virtual machine's performance when it runs programs with large working set sizes. Such programs are also more sensitive to memory allocation, which requires the virtual machine's memory to be adjusted quickly to accommodate memory phase changes. It would be far more efficient to adjust virtual machines' memory at the granularity of huge pages. However, existing virtual machine memory reallocation techniques, such as ballooning, do not support huge pages. In addition, driving effective memory reallocation requires predicting the actual memory demand of a virtual machine, and traditional memory demand estimation methods designed for regular pages cannot simply be ported to a system adopting huge pages. How to adjust the memory of virtual machines in a timely and effective manner as memory demand changes periodically is another challenge. This paper proposes a dynamic huge-page-based memory balancing system (HPMBS) for efficient memory management in a virtualized environment. We first rebuild the ballooning mechanism so that it can dispatch memory at the granularity of huge pages. We then design and implement a huge page working set size estimation mechanism that accurately estimates a virtual machine's memory demand in huge page environments. Combining these two mechanisms, we finally use an algorithm based on dynamic programming to achieve dynamic memory balancing. Experiments show that our system saves memory and improves overall system performance with low overhead.
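To make the dynamic-programming step concrete, the sketch below shows one common way such a balancing decision can be framed: distribute a fixed budget of huge pages among virtual machines so that the total estimated miss cost is minimized. This is a minimal illustration under assumed inputs, not the paper's implementation; the number of VMs, the huge-page budget, and the per-VM miss-cost tables (which HPMBS would derive from its huge page working set size estimator) are all made-up placeholders.

```c
/*
 * Minimal sketch of a budget-constrained allocation solved by dynamic
 * programming: give each VM some number of huge pages so that the sum of
 * estimated miss costs is minimal. All numbers below are illustrative.
 */
#include <stdio.h>
#include <limits.h>

#define NUM_VMS 3   /* assumed number of virtual machines            */
#define BUDGET  8   /* assumed total huge pages available to balance */

/* cost[i][k]: assumed estimated misses of VM i when given k huge pages.
 * In HPMBS these values would come from the huge page WSS estimator.  */
static const int cost[NUM_VMS][BUDGET + 1] = {
    { 90, 70, 52, 40, 31, 25, 22, 21, 21 },
    { 80, 55, 38, 28, 22, 19, 18, 18, 18 },
    { 60, 48, 39, 33, 29, 27, 26, 26, 26 },
};

int main(void)
{
    /* dp[i][b]: minimal total misses using VMs 0..i-1 with b pages spent. */
    int dp[NUM_VMS + 1][BUDGET + 1];
    int choice[NUM_VMS + 1][BUDGET + 1];   /* pages given to VM i-1 */

    for (int b = 0; b <= BUDGET; b++)
        dp[0][b] = 0;

    for (int i = 1; i <= NUM_VMS; i++) {
        for (int b = 0; b <= BUDGET; b++) {
            dp[i][b] = INT_MAX;
            for (int k = 0; k <= b; k++) {
                int v = dp[i - 1][b - k] + cost[i - 1][k];
                if (v < dp[i][b]) {
                    dp[i][b] = v;
                    choice[i][b] = k;
                }
            }
        }
    }

    /* Walk back through the choices to recover each VM's allocation. */
    int alloc[NUM_VMS];
    int b = BUDGET;
    for (int i = NUM_VMS; i >= 1; i--) {
        alloc[i - 1] = choice[i][b];
        b -= choice[i][b];
    }

    printf("minimal total estimated misses: %d\n", dp[NUM_VMS][BUDGET]);
    for (int i = 0; i < NUM_VMS; i++)
        printf("VM %d gets %d huge pages\n", i, alloc[i]);
    return 0;
}
```

The knapsack-style formulation is a natural fit because huge pages are allocated in whole units and each VM's benefit from extra memory is captured by a discrete cost table; the actual HPMBS algorithm may differ in its objective and inputs.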

Electronic Supplementary Material

jcst-35-2-433-Highlights.pdf (707.2 KB)


Journal of Computer Science and Technology
Pages 433-452
Cite this article:
Sha S, Hu J-Y, Luo Y-W, et al. Huge Page Friendly Virtualized Memory Management. Journal of Computer Science and Technology, 2020, 35(2): 433-452. https://doi.org/10.1007/s11390-020-9693-0


Received: 07 May 2019
Revised: 14 October 2019
Published: 27 March 2020
©Institute of Computing Technology, Chinese Academy of Sciences 2020