[PATCH] bug fixes and performance improvements to shadow
Because of a bug in the reference counting against the target guest page,
the search of the hash list for L1 shadow pages that need to be
write-protected against that page (in shadow_promote(), called from
alloc_shadow_page()) always scanned _all_ the entries in the list. The
hash list can be more than 500 entries long for L1 shadow pages, and for
each page we had to check all the PTEs it contains.
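As a minimal sketch of the cost described above (the struct layout and names here are hypothetical stand-ins, not the actual Xen code): without a usable reference count, the chain walk cannot stop early, so every entry in the bucket is visited even when the matching shadow pages were found long ago.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for an entry in the shadow-page hash chain. */
struct shadow_page {
    unsigned long maps_gmfn;        /* guest frame this shadow maps */
    struct shadow_page *next;       /* hash-chain link */
};

/* The pre-patch behavior: walk the whole chain unconditionally.
 * Returns the number of entries visited; each visit stands in for
 * scanning all the PTEs of that shadow page. */
static int count_scanned_full(struct shadow_page *head, unsigned long gmfn)
{
    int scanned = 0;
    for (struct shadow_page *sp = head; sp != NULL; sp = sp->next) {
        scanned++;
        (void)gmfn; /* real code would write-protect matching PTEs here */
    }
    return scanned;
}
```

With a 500-entry chain, all 500 entries are scanned on every promotion, which is the overhead the patch removes.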
The attached patch does the following:
- Correct the reference count (for the target guest page) so that
  the loop can exit once all the L1 shadow pages to modify are found.
  Even with this, we can still end up searching the entire list if the
  target entries are at the end of it.
- Try to avoid the search in the hash list altogether by keeping a
  back pointer (as a hint) to the shadow page pfn. In most cases there
  is a single translation for the guest page in the shadow.
- Cleanups; remove the nested function fix_entry.
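The two optimizations above can be sketched as follows (again with hypothetical names and a simplified data model, not the actual Xen structures): the back-pointer hint handles the common single-translation case without touching the hash chain, and when the chain must be walked, the corrected reference count lets the loop stop as soon as the last mapping is fixed.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the shadow hash chain and guest page info. */
struct shadow_page {
    unsigned long maps_gmfn;        /* guest frame this L1 shadow maps */
    struct shadow_page *next;       /* hash-chain link */
};

struct guest_page {
    int writable_refs;              /* corrected count of shadow mappings */
    struct shadow_page *shadow_hint; /* back pointer to the usual single shadow */
};

/* Returns the number of chain entries visited. */
static int fix_mappings(struct guest_page *gp, struct shadow_page *bucket,
                        unsigned long gmfn)
{
    int remaining = gp->writable_refs;
    int visited = 0;

    /* Fast path: most guest pages have exactly one shadow translation,
     * so the back-pointer hint avoids the hash walk entirely. */
    if (remaining == 1 && gp->shadow_hint &&
        gp->shadow_hint->maps_gmfn == gmfn)
        return 1;

    /* Slow path: walk the bucket, but exit early once all 'remaining'
     * mappings have been found -- the early exit that the corrected
     * reference count makes possible. */
    for (struct shadow_page *sp = bucket; sp && remaining > 0; sp = sp->next) {
        visited++;
        if (sp->maps_gmfn == gmfn)
            remaining--;   /* real code would write-protect the PTEs here */
    }
    return visited;
}
```

In this model a two-mapping page found midway through a five-entry chain skips the tail entirely, and a single-mapping page with a valid hint skips the walk.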
With these changes, kernel-build performance improved by approximately
20% on 32-bit and 40% on 64-bit unmodified Linux guests. Log-dirty mode
was also tested on plain 32-bit guests.