4 ways to optimize memory management in ESX with Eco4Cloud – part 2

This is the second article (here is the first one) of a series regarding memory management in VMware ESX/ESXi, covering the strategy advised by Eco4Cloud and facilitated through the use of Eco4Cloud’s workload consolidation, smart ballooning and troubleshooter solutions. The topic of this article is Transparent Page Sharing.

Transparent Page Sharing

VMware defines Transparent Page Sharing (TPS) as a method by which redundant copies of pages are eliminated.

TPS is an ESX/ESXi-level process that scans every page of guest physical memory, searching for sharing opportunities. Candidate pages are first matched by hash value; a full bit-by-bit comparison is then performed between pages having the same TPS hash value. If two pages match, the guest-physical to host-physical mapping of the candidate page is redirected to the shared host-physical page, and the redundant host memory copy is reclaimed, thus reducing memory consumption and enabling a higher level of memory over-commitment.
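To make the hash-then-compare mechanism concrete, here is a minimal Python sketch of the logic described above. It is an illustration of the general technique, not VMware’s actual implementation: the SHA-1 pre-filter, the 4KB page size constant, and the function names are assumptions for the example.

```python
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096  # 4KB small pages, the granularity TPS shares at

def share_pages(pages: list[bytes]) -> dict[int, int]:
    """Map each page index to the index of its single shared copy.

    Illustrative only: real TPS works on host-physical pages and
    marks shared pages copy-on-write.
    """
    by_hash = defaultdict(list)  # hash digest -> indices of candidate pages
    mapping = {}                 # page index -> index of the shared copy
    for i, page in enumerate(pages):
        digest = hashlib.sha1(page).digest()  # cheap pre-filter
        for j in by_hash[digest]:
            if pages[j] == page:              # full bit-by-bit comparison
                mapping[i] = j                # redirect to the shared copy
                break
        else:
            by_hash[digest].append(i)
            mapping[i] = i                    # this page becomes the shared copy
    return mapping

# Three pages, two of them identical: the duplicate copy is reclaimed.
pages = [bytes(PAGE_SIZE), b"A" * PAGE_SIZE, bytes(PAGE_SIZE)]
print(share_pages(pages))  # {0: 0, 1: 1, 2: 0}
```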

Out of the box, TPS works on “regular” small pages (i.e., 4KB contiguous memory regions). With newer guest OSes and/or hosts whose hardware MMU makes use of large pages (i.e., 2MB contiguous memory regions), sharing is postponed until memory pressure occurs, at which point ESX/ESXi breaks each large page into 512 small pages to ease memory swapping (and activate TPS, of course).
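A quick sanity check on the split: a 2MB large page contains exactly 512 small 4KB pages.

```python
LARGE_PAGE = 2 * 1024 ** 2   # 2MB large page
SMALL_PAGE = 4 * 1024        # 4KB small page
print(LARGE_PAGE // SMALL_PAGE)  # 512 small pages per large page
```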

Put so simply, one could challenge this approach: a 4KB page contains 32,768 bits, so the probability that two random memory pages fully match is clearly infinitesimal, namely 1/2^32768. For random data, that objection is right.
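To put a number on “infinitesimal”, a short order-of-magnitude calculation:

```python
import math

BITS_PER_PAGE = 4096 * 8                # 32,768 bits in a 4KB page
digits = BITS_PER_PAGE * math.log10(2)  # decimal digits of 2^32768
print(f"P(match) = 1/2^{BITS_PER_PAGE} ≈ 10^-{digits:.0f}")
# P(match) = 1/2^32768 ≈ 10^-9864
```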

Actually, it is an established fact that when different VMs run the same OS and/or applications, and hold the same data, they WILL have an amount of memory pages that fully match, by design.

In addition, there are several situations (e.g., OS boot) where the guest OS zeroes out many memory pages; that is, it deletes the data of each memory page by over-writing “0” on every byte of the page. Obviously, every zeroed memory page matches every other zeroed memory page and gets shared by TPS.
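A two-line check confirms the intuition: any two zeroed 4KB pages are byte-for-byte identical, so TPS can collapse them all into a single shared copy.

```python
page_a = bytes(4096)     # a freshly zeroed 4KB page
page_b = bytes(4096)     # another zeroed page, e.g. from a different VM
print(page_a == page_b)  # True: every zeroed page is a sharing candidate
```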

TPS can also improve performance, in terms of memory latency, for large VMs on systems composed of Non-Uniform Memory Access (NUMA) nodes. More insights on TPS and NUMA nodes are available here and here.

Eco4Cloud strategy for TPS maximization

Eco4Cloud’s workload consolidation optimizes the energy efficiency of physical hosts by maximizing the number of VMs running on each host, while improving performance. To do so, it computes assignment scores of VMs to hosts: the higher the score of a VM-host pair, the more likely Eco4Cloud is to issue a vMotion of that VM to that host.

Eco4Cloud’s workload consolidation is fully aware of TPS’s memory reclamation capabilities. In fact, the homogeneity of guest OSes on each physical host plays an important role in Eco4Cloud’s assignment scores, increasing the odds of memory reclamation through TPS and leading to higher consolidation levels.
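As a purely hypothetical illustration (Eco4Cloud’s actual scoring formula is proprietary and not published here), a TPS-aware assignment score might reward placing a VM on a host that already runs the same guest OS. All names and weights below are assumptions made up for the sketch.

```python
def assignment_score(vm_os: str, host_guest_oses: list[str],
                     base_score: float, homogeneity_bonus: float = 0.2) -> float:
    """Hypothetical TPS-aware score: names and weights are illustrative
    assumptions, not Eco4Cloud's real formula."""
    same_os = sum(1 for os_name in host_guest_oses if os_name == vm_os)
    # Hosts already running the same guest OS promise more page sharing.
    return base_score + homogeneity_bonus * same_os

print(assignment_score("windows2019",
                       ["windows2019", "linux", "windows2019"], 1.0))
# 1.4: two co-resident VMs run the same OS, raising the odds of TPS reclamation
```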
