commit     44900ba65e16ab3c6608e105654f38f54d030caa
author     Paolo Bonzini <pbonzini@redhat.com>    2017-12-13 12:58:02 +0100
committer  Radim Krčmář <rkrcmar@redhat.com>      2018-01-16 16:49:56 +0100
tree       765cc767aa237de4fa791d0cfce3663580024bef
parent     c5d167b27e00026711ad19a33a23d5d3d562148a
KVM: VMX: optimize shadow VMCS copying
Because all fields can be read/written with a single vmread/vmwrite on
64-bit kernels, the switch statements in copy_vmcs12_to_shadow and
copy_shadow_to_vmcs12 are unnecessary.
This patch therefore copies the two halves of 64-bit fields separately
on 32-bit kernels, keeping all the complicated #ifdef-ery in
init_vmcs_shadow_fields. The disadvantage is that 64-bit fields have to
be listed separately in shadow_read_only/read_write_fields, but those
are few and we can validate the arrays when building the VMREAD and
VMWRITE bitmaps. This saves a few hundred clock cycles per nested
vmexit.
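
To make the idea concrete, here is a minimal, self-contained sketch
(hypothetical names and stubs, not the actual arch/x86/kvm/vmx.c code):
the 32-bit-only high halves are added to the field table at init time,
so the copy loop itself needs neither #ifdefs nor a per-field type
switch.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for VMREAD/VMWRITE against the shadow VMCS. */
static uint64_t fake_vmread(unsigned long field)
{
	return (uint64_t)field * 2;
}

static void fake_vmwrite(unsigned long field, uint64_t val)
{
	printf("field %#lx = %#llx\n", field, (unsigned long long)val);
}

#define GUEST_RIP		0x681e	/* natural-width field */
#define TSC_OFFSET		0x2010	/* 64-bit field */
#define TSC_OFFSET_HIGH		0x2011	/* upper half, 32-bit kernels only */

static unsigned long shadow_fields[8];
static int nr_shadow_fields;

static void init_shadow_fields_sketch(void)
{
	shadow_fields[nr_shadow_fields++] = GUEST_RIP;
	shadow_fields[nr_shadow_fields++] = TSC_OFFSET;
#ifndef __LP64__
	/* 32-bit kernel: the high half becomes a separate table entry,
	 * so the copy loop below stays free of #ifdefs and switches. */
	shadow_fields[nr_shadow_fields++] = TSC_OFFSET_HIGH;
#endif
}

static void copy_shadow_sketch(void)
{
	int i;

	/* One vmread and one vmwrite per table entry, no per-field switch. */
	for (i = 0; i < nr_shadow_fields; i++)
		fake_vmwrite(shadow_fields[i], fake_vmread(shadow_fields[i]));
}

int main(void)
{
	init_shadow_fields_sketch();
	copy_shadow_sketch();
	return 0;
}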
However, there is still a "switch" in vmcs_read_any and vmcs_write_any.
So, while at it, this patch reorders the fields by type, hoping that
the branch predictor appreciates it.
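
For reference, a hedged sketch of the kind of switch that remains (the
width bits in the encoding come from the SDM; the function and the
stub_vmread helper are hypothetical, not the kernel's accessors):

#include <stdint.h>

/* Stand-in for a raw VMREAD. */
static uint64_t stub_vmread(unsigned long field)
{
	return (uint64_t)field;
}

/*
 * Bits 14:13 of a VMCS field encoding give its width (0 = 16-bit,
 * 1 = 64-bit, 2 = 32-bit, 3 = natural-width). If the shadow field
 * tables are sorted by width, consecutive calls hit the same case and
 * the branch predictor stays warm.
 */
static uint64_t vmcs_read_any_sketch(unsigned long field)
{
	switch ((field >> 13) & 0x3) {
	case 0:				/* 16-bit */
		return (uint16_t)stub_vmread(field);
	case 2:				/* 32-bit */
		return (uint32_t)stub_vmread(field);
	case 1:				/* 64-bit */
	case 3:				/* natural-width */
	default:
		return stub_vmread(field);
	}
}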
Cc: Jim Mattson <jmattson@google.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>