Xen Security Advisory CVE-2018-3620,CVE-2018-3646 / XSA-273
L1 Terminal Fault speculative side channel
ISSUE DESCRIPTION
In x86 nomenclature, a Terminal Fault is a pagetable walk which aborts
due to the page being not present (e.g. paged out to disk), or because
of reserved bits being set.
Architecturally, such a memory access will result in a page fault
exception, but some processors will speculatively compute the physical
address and issue an L1D lookup. If data resides in the L1D cache, it
may be forwarded to dependent instructions, and may be leaked via a side
channel.
This speculation is problematic because several privilege boundaries are
not considered:
* SGX protections are not applied
* EPT guest to host translations are not applied
* SMM protections are not applied
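To illustrate why a merely not-present PTE is exploitable, here is a minimal sketch (not Xen code; standard x86-64 PTE layout, hypothetical helper name) showing that clearing the Present bit leaves the physical-address field intact, which is exactly what the speculative L1D lookup consumes:

```python
# Sketch: x86-64 PTE layout relevant to a "terminal fault".
# The physical-address field (bits 12-51) stays meaningful even when
# Present (bit 0) is clear, and speculation uses it before the
# architectural page fault is delivered.
PRESENT = 1 << 0
ADDR_MASK = ((1 << 52) - 1) & ~0xFFF  # bits 12-51

def pte_fields(pte):
    """Return (present, physical_address) for a 64-bit PTE."""
    return bool(pte & PRESENT), pte & ADDR_MASK

pte = 0x1234000 | PRESENT | (1 << 1)  # present, writable mapping
swapped = pte & ~PRESENT              # paged out: only Present cleared

present, addr = pte_fields(swapped)
print(present, hex(addr))  # False 0x1234000 -- address bits survive
```

A kernel with the OS-level mitigation applied scrambles or inverts the address bits when it clears Present, so the stale physical address is never left behind in a not-present PTE.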
This issue is split into multiple CVEs depending on circumstance. The
CVEs which apply to Xen are:
* CVE-2018-3620 - Operating Systems and SMM
* CVE-2018-3646 - Hypervisors
For more details, see:
IMPACT
An attacker can potentially read arbitrary host RAM. This includes data
belonging to Xen, data belonging to other guests, and data belonging to
different security contexts within the same guest.
An attacker could be a guest kernel (which can manipulate the pagetables
directly), or could be guest userspace either directly (e.g. with
mprotect() or a similar system call) or indirectly (by gaming the guest
kernel's paging subsystem).
VULNERABLE SYSTEMS
Systems running all versions of Xen are affected.
Only x86 processors are vulnerable.  ARM processors are not known to be
vulnerable.
Only Intel Core based processors (from at least Merom onwards) are
potentially affected. Other processor designs (Intel Atom/Knights
range), and other manufacturers (AMD) are not known to be affected.
x86 PV guests fall into the CVE-2018-3620 (OS and SMM) category. x86
HVM and PVH guests fall into the CVE-2018-3646 (Hypervisors) category.
MITIGATION
This issue can be mitigated with a combination of software and firmware
(microcode) updates.
Switching guests to being HVM with shadow paging enabled (hap=0 in
xl.cfg) is believed to mitigate the vulnerability on systems which don't
have terabytes of RAM. However the performance impact of shadow paging
in combination with in-guest Meltdown mitigations (KPTI, KVAS, etc.)
will most likely make this option prohibitive to use.
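Forcing shadow paging for a particular guest is a per-domain setting; a minimal sketch of the relevant xl.cfg lines (only `hap` is the option named above, the rest is illustrative):

```
# xl.cfg fragment: run this HVM guest under shadow paging
type = "hvm"   # guest type (older Xen versions use builder="hvm")
hap = 0        # disable Hardware Assisted Paging, forcing shadow mode
```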
New microcode, and possibly a new firmware image is required to prevent
SMM data from being leaked with this vulnerability.  Consult your
hardware vendor.
Software updates to Xen (details below) are required to prevent guests
from being able to leak data belonging to Xen or to other guests in the
system.
Guest kernel software updates are required to prevent guest userspace
from being able to leak data belonging to the kernel or other processes
within the same guest. Consult your OS vendors.
1) For PV guests (which fall into the CVE-2018-3620 - OS/SMM case),
leakage of data from Xen or other guests can be prevented entirely
with software changes in Xen.
If the PV guest tries to write an L1TF-vulnerable PTE (for current
kernels, very likely when paging data out to disk), shadow paging is
activated and forced upon the guest. Alternatively, if shadow paging
is compiled out, the guest is crashed instead.
Shadowing comes with a workload-dependent performance hit to the
guest. Once the guest kernel software updates have been applied, a
well behaved guest will not write vulnerable PTEs, and will therefore
avoid the performance penalty (or crash) entirely.
This behaviour is active by default for guests on affected hardware
(controlled by `pv-l1tf=`), but is disabled by default for dom0.
Dom0's exemption is because of instabilities when being shadowed,
which are under investigation, but dom0 kernel updates should still
be taken to mitigate the userspace aspect.
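As a sketch, the shadow mitigation can be tuned on the hypervisor command line via the `pv-l1tf=` option named above; the sub-option spelling below should be checked against the xen-command-line documentation for your version:

```
# Xen command line (e.g. appended in grub.cfg): keep the default
# shadow mitigation for PV domUs and, once dom0 stability under
# shadowing is confirmed, opt dom0 in as well
pv-l1tf=domu=true,dom0=true
```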
2) For HVM and PVH guests running with Hardware Assisted Paging (which fall
into the CVE-2018-3646 - Hypervisors case), leakage of data from Xen or
other guests can only be prevented entirely by disabling
SMT/Hyper-threading (if available and active in the BIOS), and by using the
L1D_FLUSH feature (available in the new microcode) on every VMEntry.
On affected hardware, L1D_FLUSH is enabled by default (controlled by
`spec-ctrl=[no-]l1d-flush`), subject to microcode availability.
However, SMT/Hyper-threading is not disabled by default, because Xen does
not have enough information to choose an appropriate default. Safety can
be arranged in a number of ways by the toolstack, including with finer
granularity than simply on or off.
Therefore, users are expected to perform a risk assessment of their
deployment, and explicitly choose a default (`smt=<bool>`).  See the RISK
ASSESSMENT section below. Xen will issue a warning at boot on vulnerable
hardware when no explicit smt choice has been set.
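Both controls discussed above can be made explicit on the hypervisor command line; a sketch (an explicit `smt=` choice also silences the boot warning):

```
# Xen command line: keep the L1D flush on VMEntry (the default where
# microcode allows) and explicitly disable SMT/Hyper-Threading
spec-ctrl=l1d-flush smt=false
```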
There are ongoing experimentation and development efforts to find lower
overhead mitigations for the HVM case.
RESOLUTION
We are not supplying separate patches because the changes have many
complicated prerequisites. To get the fixes, it is necessary to
update to the tip of the applicable Xen staging-XX branch.
The relevant git commit object ids are as follows:
aa67b97ed34279c43a43d9ca46727b5746caa92e staging # xen-unstable
In each case the tip commit is "xl.conf: Add global affinity masks".
RISK ASSESSMENT OF SMT/HYPER-THREADING
1) If hyper-threading is unavailable, or already disabled in the BIOS, no
further action is necessary.
2) If you are using exclusively PV or HVM Shadow guests, hyper-threading has
no impact on security, and is safe to remain enabled.
3) If an HVM guest kernel is trusted (i.e. under host admin control), and has
been updated to include the OS vendor mitigations, then it is probably safe
to be scheduled with hyper-threading active.
4) If an HVM guest kernel is untrusted (i.e. not under host admin control), it
is probably not safe to be scheduled with hyper-threading active.
FINER GRAINED SMT/HYPER-THREADING CONTROL WITH TOOLSTACK SETTINGS
New options (vm.cpumask, vm.hvm.cpumask and vm.pv.cpumask) have been
added in the xl/libxl toolstack to provide global control over CPU
hard affinity settings. The global masks are applied when a guest is
created or when a vcpu is pinned.
Sketch of how to use the new options:
1. Livepatch the hypervisor.
2. Identify all sibling threads and partition them with the new
options in xl.conf.
3. For each DomU, run `xl vcpu-pin $DOM all all`, which should
cause the global masks to be applied to all vcpus of a DomU.
4. Verify the required affinity has taken effect by running
   `xl vcpu-list`.
The default behaviour of xl is to always apply global masks unless
`--ignore-global-affinity-masks` is specified. Please refer to
xl.conf(5) for details.
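Putting the steps above together, a sketch assuming a hypothetical 8-thread host whose sibling pairs are (0,4), (1,5), (2,6), (3,7); the mask values are illustrative and must be derived from your own topology:

```
# /etc/xen/xl.conf fragment: confine HVM guests (unsafe with SMT)
# to one thread per core, and PV guests to the sibling threads
vm.hvm.cpumask="0-3"
vm.pv.cpumask="4-7"
```

Afterwards, run `xl vcpu-pin <domain> all all` for each running guest so the masks are applied, and confirm the result with `xl vcpu-list`.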
NOTE CONCERNING CVE-2018-3615
CVE-2018-3615 covers the interaction of L1TF and Intel SGX. Xen has
no support for enclaves in any currently released version, so no Xen
systems are affected.
NOTE REGARDING LACK OF EMBARGO
Despite an attempt to organise predisclosure, the discoverers ultimately
did not authorise a predisclosure.