Google engineer James Houghton has sent out two rounds of patches bringing the concept of HugeTLB High-Granularity Mapping (HGM) to the Linux kernel. The latest revision, now out of RFC status and up for review, is a 46-patch series.
Linux memory management is based on a paging mechanism, with page sizes of 4K, 2M, or 1G on x86_64. When an application's memory demand is large, managing it entirely with small 4K pages causes more TLB misses and page-fault interrupts, which significantly hurts performance; huge pages (2M or 1G) reduce that pressure. HugeTLB is effectively the manager of these huge pages: it maintains the entries that map them, and the allocation and release of huge pages are handled by this module.
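For readers unfamiliar with HugeTLB, the following is a minimal sketch of how a userspace program typically obtains a HugeTLB-backed mapping. It is an illustration, not code from the patch series; it assumes huge pages have already been reserved (for example via /proc/sys/vm/nr_hugepages) and uses the system's default huge page size (usually 2M on x86_64):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (2UL * 1024 * 1024)   /* one default-sized (2M) huge page */

int main(void)
{
    /* Ask for an anonymous mapping backed by HugeTLB pages. This fails
     * with ENOMEM if no huge pages have been reserved beforehand. */
    void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");
        return EXIT_FAILURE;
    }

    memset(p, 0, LEN);   /* touch the memory so the huge page is faulted in */
    printf("huge-page-backed mapping at %p\n", p);

    munmap(p, LEN);
    return EXIT_SUCCESS;
}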
HugeTLB HGM allows HugeTLB pages to be mapped at high granularity, similar to how Transparent Huge Pages (THP) can be PTE-mapped. By introducing HugeTLB HGM into Linux, Google greatly improves VM live migration and memory-failure handling. James explained some of the main benefits of the patch series in the email:
Live migration: during userfaultfd-based post-copy live migration, HGM lets missing data be supplied at 4K granularity, so a faulting vCPU can be unpaused about 100x faster; this helps guest stability, and the guest can still fully use 1G pages, significantly improving steady-state guest performance (a rough sketch of this userfaultfd step follows the points below).
Collapsing: after a huge page has been fully copied over the network, we want to collapse the mapping back to what it usually looks like (e.g. one PUD for a 1G page). Rather than having the kernel do this automatically, userspace tells it which ranges to collapse via MADV_COLLAPSE (sketched in the second example below).
Memory failure: when a hardware memory error is found within a HugeTLB page, it would be ideal to unmap only the PAGE_SIZE portion that contains the error. High-granularity mapping makes this possible, but the current patch series does not tackle it.
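To make the live-migration point concrete, here is a rough sketch of the userfaultfd step that HGM speeds up. It is an illustrative fragment, not code from the patch series: it assumes a userfaultfd descriptor uffd already registered in minor-fault mode (UFFDIO_REGISTER_MODE_MINOR) on the HugeTLB-backed guest memory, and that the 4K page a vCPU faulted on (fault_addr) has just been fetched over the network and written into the backing hugetlbfs file. With HGM, the UFFDIO_CONTINUE can cover just that 4K instead of a whole 1G page, which is what lets the vCPU resume so much sooner:

#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

#define SMALL_PAGE 4096UL

/* Resolve a minor fault by mapping only the 4K that was just populated. */
static int resolve_fault_4k(int uffd, uint64_t fault_addr)
{
    struct uffdio_continue cont = {
        .range = {
            .start = fault_addr & ~(SMALL_PAGE - 1),  /* align to 4K */
            .len   = SMALL_PAGE,                      /* just this piece, not 1G */
        },
        .mode = 0,
    };

    if (ioctl(uffd, UFFDIO_CONTINUE, &cont) == -1) {
        perror("ioctl(UFFDIO_CONTINUE)");
        return -1;
    }
    return 0;
}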
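Likewise, the collapse step could look roughly like the following in a migration tool. MADV_COLLAPSE is the real madvise flag the series builds on, but the helper itself is only a sketch and assumes addr and len cover a huge-page-aligned range whose contents have been fully populated:

#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25   /* not yet present in older libc headers */
#endif

/* Once every small piece of the range has been copied in, ask the kernel
 * to collapse the high-granularity mapping back into huge mappings. */
static int collapse_range(void *addr, size_t len)
{
    if (madvise(addr, len, MADV_COLLAPSE) == -1) {
        perror("madvise(MADV_COLLAPSE)");
        return -1;
    }
    return 0;
}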
Initially, HugeTLB high-granularity mapping support is limited to x86_64, but support for AArch64 and other CPU architectures is planned. More details are available in the kernel patch series.