doc: update release_2.1 with new docs
Signed-off-by: David B. Kinder <[email protected]>
dbkinder committed Aug 8, 2020
1 parent c3800ae commit d8ee2f3
Showing 28 changed files with 544 additions and 413 deletions.
17 changes: 17 additions & 0 deletions doc/asa.rst
@@ -3,6 +3,23 @@
Security Advisory
#################

Addressed in ACRN v2.1
************************

We recommend that all developers upgrade to this v2.1 release (or later), which
addresses the following security issue that was discovered in previous releases:

------

- Missing access control restrictions in the Hypervisor component
A malicious entity with root access in the Service VM
  userspace could abuse the PCIe assign/de-assign hypercalls via crafted
  ioctls and payloads. This attack can result in a corrupt state and Denial
  of Service (DoS) for PCIe devices previously assigned to the Service VM
at runtime.

**Affected Release:** v2.0 and v1.6.1.

Addressed in ACRN v1.6.1
************************

1 change: 1 addition & 0 deletions doc/develop.rst
@@ -79,6 +79,7 @@ Enable ACRN Features
tutorials/setup_openstack_libvirt
tutorials/acrn_on_qemu
tutorials/using_grub
tutorials/pre-launched-rt

Debug
*****
8 changes: 7 additions & 1 deletion doc/developer-guides/hld/hld-devicemodel.rst
@@ -56,6 +56,7 @@ options:
[-l lpc] [-m mem] [-p vcpu:hostcpu] [-r ramdisk_image_path]
[-s pci] [-U uuid] [--vsbl vsbl_file_name] [--ovmf ovmf_file_path]
[--part_info part_info_name] [--enable_trusty] [--intr_monitor param_setting]
[--acpidev_pt HID] [--mmiodev_pt MMIO_regions]
[--vtpm2 sock_path] [--virtio_poll interval] [--mac_seed seed_string]
[--ptdev_no_reset] [--debugexit]
[--lapic_pt] <vm>
@@ -86,6 +87,8 @@ options:
--intr_monitor: enable interrupt storm monitor
its params: threshold/s,probe-period(s),delay_time(ms),delay_duration(ms),
--virtio_poll: enable virtio poll mode with poll interval with ns
--acpidev_pt: acpi device ID args: HID in ACPI Table
--mmiodev_pt: MMIO resources args: physical MMIO regions
--vtpm2: Virtual TPM2 args: sock_path=$PATH_OF_SWTPM_SOCKET
--lapic_pt: enable local apic passthrough
--rtvm: indicate that the guest is rtvm
@@ -104,6 +107,7 @@ Here's an example showing how to run a VM with:
- GPU device on PCI 00:02.0
- Virtio-block device on PCI 00:03.0
- Virtio-net device on PCI 00:04.0
- TPM2 MSFT0101

.. code-block:: bash
@@ -113,6 +117,7 @@ Here's an example showing how to run a VM with:
-s 5,virtio-console,@pty:pty_port \
-s 3,virtio-blk,b,/data/clearlinux/clearlinux.img \
-s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \
--acpidev_pt MSFT0101 \
--intr_monitor 10000,10,1,100 \
-B "root=/dev/vda2 rw rootwait maxcpus=3 nohpet console=hvc0 \
console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
@@ -1193,4 +1198,5 @@ Passthrough in Device Model
****************************

You may refer to :ref:`hv-device-passthrough` for passthrough realization
in device model.
in device model and :ref:`mmio-device-passthrough` for MMIO passthrough realization
in the Device Model and the ACRN Hypervisor.
1 change: 1 addition & 0 deletions doc/developer-guides/hld/hld-hypervisor.rst
@@ -18,6 +18,7 @@ Hypervisor high-level design
Virtual Interrupt <hv-virt-interrupt>
VT-d <hv-vt-d>
Device Passthrough <hv-dev-passthrough>
mmio-dev-passthrough
hv-partitionmode
Power Management <hv-pm>
Console, Shell, and vUART <hv-console>
2 changes: 1 addition & 1 deletion doc/developer-guides/hld/hv-console.rst
@@ -70,7 +70,7 @@ Specifically:
the hypervisor shell. Inputs to the physical UART will be
redirected to the vUART starting from the next timer event.

- The vUART is deactivated after a :kbd:`Ctrl + Space` hotkey is received
- The vUART is deactivated after a :kbd:`Ctrl` + :kbd:`Space` hotkey is received
from the physical UART. Inputs to the physical UART will be
handled by the hypervisor shell starting from the next timer
event.
126 changes: 41 additions & 85 deletions doc/developer-guides/hld/hv-rdt.rst
@@ -38,58 +38,36 @@ IA32_PQR_ASSOC MSR to CLOS 0. (Note that CLOS, or Class of Service, is a
resource allocator.) The user can check the cache capabilities such as cache
mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
CLOS ID, to select a cache mask to take effect. ACRN uses
VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes to
enforce the settings.
CLOS ID, to select a cache mask to take effect. These configurations can be
done in the scenario XML file under the ``FEATURES`` section, as shown in the example below.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
to enforce the settings.

.. code-block:: none
:emphasize-lines: 3,7,11,15
struct platform_clos_info platform_l2_clos_array[MAX_PLATFORM_CLOS_NUM] = {
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 0,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 1,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 2,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 3,
},
};
:emphasize-lines: 2,4
<RDT desc="Intel RDT (Resource Director Technology).">
<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
<CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>
Once the cache mask is set for each individual CPU, the respective CLOS ID
needs to be set in the scenario XML file under the ``VM`` section. To use the
CDP feature, set ``CDP_ENABLED`` to ``y``.

.. code-block:: none
:emphasize-lines: 6
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
{
.type = SOS_VM,
.name = SOS_VM_CONFIG_NAME,
.guest_flags = 0UL,
.clos = 0,
.memory = {
.start_hpa = 0x0UL,
.size = CONFIG_SOS_RAM_SIZE,
},
.os_config = {
.name = SOS_VM_CONFIG_OS_NAME,
},
},
};
:emphasize-lines: 2
<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
<vcpu_clos>0</vcpu_clos>
.. note::
ACRN takes the lowest common CLOS max value between the supported
resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if max CLOS
supported by L3 is 16 and L2 is 8, ACRN programs MAX_PLATFORM_CLOS_NUM to
8. ACRN recommends consistent capabilities across all RDT
resources by using the common subset CLOS. This is done in order to
minimize misconfiguration errors.
resources as maximum supported CLOS ID. For example, if max CLOS
supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
to 8. ACRN recommends having consistent capabilities across all RDT
resources by using a common subset CLOS. This is done in order to minimize
misconfiguration errors.
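
As a quick sanity check before enabling RDT in the scenario XML, you can look
for the RDT-related CPU feature flags from a Linux shell running natively on
the target platform. This is only a hedged sketch; the authoritative method is
the CPUID-based detection described in :ref:`rdt_detection_capabilities`, and
these flags may not be visible from inside a VM.

.. code-block:: bash

   # Look for RDT allocation features: rdt_a (allocation), cat_l3/cat_l2
   # (Cache Allocation Technology), cdp_l3 (Code and Data Prioritization),
   # and mba (Memory Bandwidth Allocation).
   grep -o 'rdt_a\|cat_l3\|cat_l2\|cdp_l3\|mba' /proc/cpuinfo | sort -u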


Objective of MBA
@@ -128,53 +106,31 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
users can check the MBA capabilities such as mba delay values and
max supported CLOS as described in :ref:`rdt_detection_capabilities` and
then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root
modes to enforce the settings.
These configurations can be done in the scenario XML file under the ``FEATURES``
section, as shown in the example below. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
for non-root and root modes to enforce the settings.

.. code-block:: none
:emphasize-lines: 3,7,11,15
struct platform_clos_info platform_mba_clos_array[MAX_PLATFORM_CLOS_NUM] = {
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 0,
},
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 1,
},
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 2,
},
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 3,
},
};
:emphasize-lines: 2,5
<RDT desc="Intel RDT (Resource Director Technology).">
<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
<CLOS_MASK desc="Cache Capacity Bitmask"></CLOS_MASK>
<MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>
Once the MBA delay value is set for each individual CPU, the respective CLOS ID
needs to be set in the scenario XML file under the ``VM`` section.

.. code-block:: none
:emphasize-lines: 6
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
{
.type = SOS_VM,
.name = SOS_VM_CONFIG_NAME,
.guest_flags = 0UL,
.clos = 0,
.memory = {
.start_hpa = 0x0UL,
.size = CONFIG_SOS_RAM_SIZE,
},
.os_config = {
.name = SOS_VM_CONFIG_OS_NAME,
},
},
};
:emphasize-lines: 2
<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
<vcpu_clos>0</vcpu_clos>
.. note::
ACRN takes the lowest common CLOS max value between the supported
resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if max CLOS
resources as maximum supported CLOS ID. For example, if max CLOS
supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
to 8. ACRN recommends having consistent capabilities across all RDT
resources by using a common subset CLOS. This is done in order to minimize
10 changes: 5 additions & 5 deletions doc/developer-guides/hld/ivshmem-hld.rst
@@ -186,15 +186,15 @@ Inter-VM Communication Security hardening (BKMs)
************************************************

As previously highlighted, ACRN 2.0 provides the capability to create shared
memory regions between Post-Launch user VMs known as Inter-VM Communication”.
memory regions between Post-Launch user VMs known as "Inter-VM Communication".
This mechanism is based on ivshmem v1.0 exposing virtual PCI devices for the
shared regions (in Service VM's memory for this release). This feature adopts a
community-approved design for shared memory between VMs, following the same
specification used for KVM/QEMU (`Link <https://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/specs/ivshmem-spec.txt;hb=HEAD>`_).

Following the ACRN threat model, the policy definition for allocation and
assignment of these regions is controlled by the Service VM, which is part of
ACRNs Trusted Computing Base (TCB). However, to secure inter-VM communication
ACRN's Trusted Computing Base (TCB). However, to secure inter-VM communication
between any userspace applications that harness this channel, applications will
face more requirements for the confidentiality, integrity, and authenticity of
shared or transferred data. It is the application development team's
@@ -218,17 +218,17 @@ architecture and threat model for your application.
- Add restrictions based on behavior or subject and object rules around information flow and accesses.
- In the Service VM, consider the ``/dev/shm`` device node a critical interface with special access requirements. Those requirements can be fulfilled using any of the existing open source MAC technologies or even ACLs, depending on OS compatibility (Ubuntu, Windows, etc.) and integration complexity.
- In the User VM, the shared memory region can be accessed using ``mmap()`` of the UIO device node (see the sketch after this list). Other complementary info can be found under:

- ``/sys/class/uio/uioX/device/resource2`` --> shared memory base address
- ``/sys/class/uio/uioX/device/config`` --> shared memory Size.

- For Linux-based User VMs, we recommend using the standard ``UIO`` and ``UIO_PCI_GENERIC`` drivers through the device node (for example, ``/dev/uioX``).
- Reference: `AppArmor <https://wiki.ubuntuusers.de/AppArmor/>`_, `SELinux <https://selinuxproject.org/page/Main_Page>`_, `UIO driver-API <https://www.kernel.org/doc/html/v4.12/driver-api/uio-howto.html>`_
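
A minimal sketch of locating the shared-memory BAR from a Linux-based User VM
shell is shown below; the ``uio0`` index is an assumption and depends on probe
order, so adjust it to match your system.

.. code-block:: bash

   # List the UIO devices the kernel has registered.
   ls /sys/class/uio/

   # The "resource" file of the underlying PCI device lists each BAR as
   # "start end flags"; BAR2 (the third line) is the shared-memory region.
   cat /sys/class/uio/uio0/device/resource

   # The size of the resource2 file equals the size of the shared-memory BAR.
   ls -l /sys/class/uio/uio0/device/resource2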


3. **Crypto Support and Secure Applied Crypto**

- According to the applications threat model and the defined assets that need to be shared securely, define the requirements for crypto algorithms.Those algorithms should enable operations such as authenticated encryption and decryption, secure key exchange, true random number generation, and seed extraction. In addition, consider the landscape of your attack surface and define the need for security engine (for example CSME services.
- According to the application's threat model and the defined assets that need to be shared securely, define the requirements for crypto algorithms. Those algorithms should enable operations such as authenticated encryption and decryption, secure key exchange, true random number generation, and seed extraction. In addition, consider the landscape of your attack surface and define the need for a security engine (for example, CSME services).
- Don't implement your own crypto functions. Use available compliant crypto libraries as applicable, such as `Intel IPP <https://github.com/intel/ipp-crypto>`_ or `TinyCrypt <https://01.org/tinycrypt>`_.
- Utilize the platform/kernel infrastructure and services (e.g., :ref:`hld-security`, `Kernel Crypto backend/APIs <https://www.kernel.org/doc/html/v5.4/crypto/index.html>`_, `keyring subsystem <https://www.man7.org/linux/man-pages/man7/keyrings.7.html>`_, etc.).
- Implement necessary flows for key lifecycle management, including wrapping, revocation, and migration, depending on the crypto key type used and whether there are requirements for key persistence across system and power management events.
40 changes: 40 additions & 0 deletions doc/developer-guides/hld/mmio-dev-passthrough.rst
@@ -0,0 +1,40 @@
.. _mmio-device-passthrough:

MMIO Device Passthrough
########################

The ACRN Hypervisor supports both PCI and MMIO device passthrough.
However, there are some constraints on, and hypervisor assumptions about,
MMIO devices: there can be no DMA access to the MMIO device, and the MMIO
device may not use IRQs.

Here is how ACRN supports MMIO device passthrough:

* For a pre-launched VM, the VM configuration tells the ACRN hypervisor
the addresses of the physical MMIO device's regions and where they are
mapped to in the pre-launched VM. The hypervisor then removes these
MMIO regions from the Service VM and fills the vACPI table for this MMIO
device based on the device's physical ACPI table.

* For a post-launched VM, the same actions are done as for a
  pre-launched VM; in addition, the acrn-dm command line specifies which
  MMIO device to pass through to the post-launched VM.

If the MMIO device has ACPI Tables, use ``--acpidev_pt HID`` and
if not, use ``--mmiodev_pt MMIO_regions``.
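
For example, a hypothetical launch line that passes through an ACPI-enumerated
TPM2 device with HID ``MSFT0101`` (as in the Device Model example earlier in
this commit) might look like the sketch below; the memory size, image path, and
VM name are placeholders, not values taken from this commit.

.. code-block:: bash

   # Sketch only: slot assignments, image path, and VM name are illustrative.
   acrn-dm -A -m 2048M \
      -s 0:0,hostbridge \
      -s 3,virtio-blk,/home/acrn/uos.img \
      --acpidev_pt MSFT0101 \
      --ovmf /usr/share/acrn/bios/OVMF.fd \
      post_launched_vm1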

.. note::
Currently, the vTPM and PT TPM in the ACRN-DM have the same HID so we
can't support them both at the same time. The VM will fail to boot if
both are used.

The following capabilities remain to be implemented:

* Save the MMIO regions in a field of the VM structure so that the
  resources can be released when the post-launched VM shuts down abnormally.
* Allocate the guest MMIO regions for the MMIO device from a guest-reserved
  MMIO region instead of hard-coding them. With this, more passthrough MMIO
  devices could be added.
* De-assign the MMIO device from the Service VM before passing it through
  to the post-launched VM, rather than only removing the MMIO regions from
  the Service VM.