
KVM Memory Management Overview


This was written in February 2010, during the era of qemu-kvm 0.12.




The qemu/kvm process runs mostly like a normal Linux program.  It
allocates its memory with normal malloc() or mmap() calls.  If a guest
is going to have 1GB of physical memory, qemu/kvm will effectively do a
malloc(1<<30), allocating 1GB of host virtual space.  However,
just like a normal program doing a malloc(), no physical memory is
actually allocated at the time of the malloc().  It is not allocated
until the first time it is touched.
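
To see that behaviour outside of KVM entirely, here is a minimal,
self-contained sketch (a plain anonymous mmap() plus /proc/self/statm;
this is not QEMU's allocation code): the 1GB reservation costs
essentially nothing, and resident memory only grows once pages are
touched.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define GUEST_RAM (1UL << 30)   /* stand-in for the guest's 1GB of "physical" memory */

    /* Resident page count of this process, read from /proc/self/statm. */
    static long resident_pages(void)
    {
        long size = 0, resident = 0;
        FILE *f = fopen("/proc/self/statm", "r");

        if (f) {
            if (fscanf(f, "%ld %ld", &size, &resident) != 2)
                resident = -1;
            fclose(f);
        }
        return resident;
    }

    int main(void)
    {
        /* Reserve 1GB of host virtual address space; no physical pages yet. */
        char *ram = mmap(NULL, GUEST_RAM, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ram == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("after mmap:   %ld resident pages\n", resident_pages());

        /* Touch the first 16MB; only now does the host fault in real memory. */
        memset(ram, 0, 16UL << 20);
        printf("after memset: %ld resident pages\n", resident_pages());
        return 0;
    }

On a typical system the resident page count barely moves after the
mmap() itself, then jumps by roughly 4096 pages (16MB of 4KB pages)
after the memset().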


Once the guest is running, it sees that malloc()'d memory area as
being its physical memory.  If the guest's kernel accesses what it
sees as physical address 0x0, it actually touches the first page of
that malloc() done by the qemu/kvm process.
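
That association between guest physical addresses and the process's
virtual memory is set up through the KVM API.  Below is a minimal
sketch (not qemu-kvm's actual code; error handling is trimmed and the
variable names are illustrative) of the KVM_SET_USER_MEMORY_REGION
ioctl that ties the mmap()'d host virtual range to guest physical
address 0:

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);
        int vm  = ioctl(kvm, KVM_CREATE_VM, 0);
        size_t size = 1UL << 30;

        void *ram = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* Guest physical address 0 is backed by this host virtual range,
         * so a guest access to GPA 0x0 lands in the first page of 'ram'. */
        struct kvm_userspace_memory_region region = {
            .slot            = 0,
            .guest_phys_addr = 0,
            .memory_size     = size,
            .userspace_addr  = (uintptr_t)ram,
        };

        if (kvm < 0 || vm < 0 || ram == MAP_FAILED ||
            ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
            perror("kvm memory setup");
            return 1;
        }
        printf("guest physical 0x0 -> host virtual %p\n", ram);
        return 0;
    }

A real VMM typically registers several such memory slots (RAM below
and above 4GB, BIOS, and so on), but the principle is the same.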




It used to be that every time a KVM guest changed its page tables,
the host had to be involved.  The host would validate that the entries
the guest put in its page tables were valid and that they did not access
any memory which was not allowed.  It did this with two mechanisms.


One was that the actual set of page tables used by the
virtualization hardware is separate from the page tables that the guest
*thinks* are being used.  The guest first makes a change in its page
tables.  Later, the host notices this change, verifies it, and then
makes a real page table which is accessed by the hardware.  The guest
software is not allowed to directly manipulate the page tables accessed
by the hardware.  This concept is called shadow page tables and it is a
very common technique in virtualization.
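
As a toy illustration of that separation, here is a userspace
simulation (made-up names, nothing like the real shadow-MMU code in
arch/x86/kvm/mmu.c): the guest is free to scribble on its own table,
but only entries the host has verified ever reach the table the
"hardware" actually walks.

    #include <stdint.h>
    #include <stdio.h>

    #define NPAGES          8   /* toy guest: 8 virtual pages */
    #define GUEST_RAM_PAGES 4   /* ...backed by 4 pages of guest "physical" RAM */

    static uint64_t guest_pt[NPAGES];   /* the table the guest *thinks* the MMU uses */
    static uint64_t shadow_pt[NPAGES];  /* the table the (simulated) hardware really uses */

    /* Where in host physical memory each guest frame happens to live. */
    static const uint64_t gfn_to_hfn[GUEST_RAM_PAGES] = { 1000, 1001, 1002, 1003 };

    /* Host-side step: notice the guest's changes, verify them, and only
     * then translate and copy them into the real (shadow) table. */
    static void host_sync_shadow(void)
    {
        for (int i = 0; i < NPAGES; i++) {
            uint64_t gfn = guest_pt[i];
            shadow_pt[i] = (gfn < GUEST_RAM_PAGES) ? gfn_to_hfn[gfn] : 0;
        }
    }

    int main(void)
    {
        guest_pt[1] = 3;    /* guest maps its virtual page 1 -> its physical page 3 */
        guest_pt[2] = 99;   /* bogus entry pointing outside the guest's RAM */

        host_sync_shadow();

        printf("vpage 1 -> host frame %llu (guest frame 3, validated)\n",
               (unsigned long long)shadow_pt[1]);
        printf("vpage 2 -> host frame %llu (bogus entry rejected)\n",
               (unsigned long long)shadow_pt[2]);
        return 0;
    }

The expensive part in real life is exactly that host_sync_shadow()
step: the host has to notice every guest page table change, validate
it, and rebuild the affected shadow entries.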


The second part was that the VMX/AMD-V extensions allowed the
host to trap whenever the guest tried to set the register pointing to
the base page table (CR3).


This technique works fine.  But, it has some serious performance
implications.  A single access to a guest page can take up to 25 memory
accesses to complete, which gets very costly.  See AMD's nested paging
paper, http://developer.amd.com/assets/NPT-WP-1%201-final-TM.pdf,
for more information.  The basic problem is that every access to memory
must go through both the page tables of the guest and then the page
tables of the host.  The two-dimensional part comes in because the page
tables of the guest must *themselves* go through the page tables of the
host: with four-level paging on both sides, each of the four guest
page-table loads, plus the final data access, is a guest physical
address that needs its own walk through the host tables, which is
roughly where the worst-case figure of (4+1)*(4+1) = 25 memory
references comes from.


It can also be very costly for the host to verify and maintain
the shadow page tables. 




Both AMD and Intel sought solutions to these problems and came up
with similar answers: Intel's EPT (Extended Page Tables) and AMD's NPT
(Nested Page Tables).  These specify a set of structures recognized by
the hardware which can quickly translate guest physical addresses to
host physical addresses *without* going through the host page tables.
This shortcut removes the costly two-dimensional page table walks.


The problem with this is that the host page tables are what we
use to enforce things like process separation.  If a page is unmapped
from the host (when it is swapped out, for instance), then we *must*
coordinate that change with these new hardware EPT/NPT structures.




The solution in software is something Linux calls mmu_notifiers. 
Since the qemu/kvm memory is normal Linux memory (from the host Linux
kernel's perspective), the kernel may try to swap it, replace it, or
even free it just like normal memory.


But, before the pages are actually given back to the host kernel for
other use, the kvm/qemu guest is notified of the host's intentions.
The kvm/qemu guest can then remove the page from the shadow page tables
or the NPT/EPT structures.  After the kvm/qemu guest has done this, the
host kernel is then free to do what it wishes with the page.
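
In the kernel this takes the form of a struct mmu_notifier_ops
registered against the qemu/kvm process's mm_struct.  The sketch below
is conceptual only: KVM's real callbacks live in virt/kvm/kvm_main.c,
the my_* names are made up, and the exact set of hooks and their
signatures has varied across kernel versions (the ones shown match the
2010-era API this article describes).

    #include <linux/mm.h>
    #include <linux/mmu_notifier.h>
    #include <linux/sched.h>

    static void my_invalidate_page(struct mmu_notifier *mn,
                                   struct mm_struct *mm,
                                   unsigned long address)
    {
        /* The host is about to take back the page at 'address': drop the
         * matching spte / NPT / EPT entry so the guest simply faults (and
         * re-enters the fault-in path below) the next time it touches it. */
    }

    static void my_change_pte(struct mmu_notifier *mn, struct mm_struct *mm,
                              unsigned long address, pte_t pte)
    {
        /* The host replaced the backing page (COW, KSM, ...):
         * point the guest-side mapping at the new page. */
    }

    static const struct mmu_notifier_ops my_notifier_ops = {
        .invalidate_page = my_invalidate_page,
        .change_pte      = my_change_pte,
    };

    static struct mmu_notifier my_notifier = { .ops = &my_notifier_ops };

    /* Called from the qemu/kvm process once the guest's memory is set up,
     * so 'current->mm' is the mm that owns the guest's RAM. */
    static int my_register_notifier(void)
    {
        return mmu_notifier_register(&my_notifier, current->mm);
    }

The important property is that each callback runs *before* the host
finally reuses the page, which gives the ordering described above.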




A day in the life of a KVM guest physical page:







Fault-in path



  1. QEMU calls malloc() and allocates virtual space for the page,
    but no backing physical page

  2. The guest process touches what it thinks is a physical
    address, but this traps into the host since the memory is unallocated

  3. The host kernel sees a page fault, calls do_page_fault() in
    the area that was malloc()'d, and if all goes well, allocates some
    memory to back it. 

  4. The host kernel creates a pte_t to connect the malloc()'d
    virtual address to a host physical address, makes rmap entries, puts it
    on the LRU, etc...

  5. mmu_notifier change_pte()?? is called, which allows KVM to
    create an NPT/EPT entry for the new page. (and an spte entry??)

  6. Host returns from page fault, guest execution resumes







Swap-out path


Now, let's say the host is under memory pressure.  The page from
above has gone through the Linux LRU and has found itself on the
inactive list.  The kernel decides that it wants the page back:



  1. The host kernel uses rmap structures to find out in which VMA
    (vm_area_struct) the page is mapped.

  2. The host kernel looks up the mm_struct associated with that
    VMA, and walks down the Linux page tables to find the host hardware page
    table entry (pte_t) for the page.

  3. The host kernel swaps out the page and clears out the pte_t
    (let's assume that this page was only used in a single place). But,
    before freeing the page:

  4. The host kernel calls the mmu_notifier invalidate_page(). This
    looks up the page's entry in the NPT/EPT structures and removes it.

  5. Now, any subsequent access to the page will trap into the host
    (step 2 in the fault-in path above).


 









Memory Overcommit


Given all of the above, it should be apparent that, just like normal
processes on Linux, the host memory allocated to the host processes
representing KVM guests may be overcommitted.  One distro's discussion
of this appears here.
