diff --git a/riscv-unix.md b/riscv-unix.md
index 2e22464..c80acbb 100644
--- a/riscv-unix.md
+++ b/riscv-unix.md
@@ -35,6 +35,9 @@ previous value stored to that location. (That is, the fetched instruction is
not an unpredictable value, nor is it a hybrid of the bytes of the old and new
values.)
+LR/SC forward progress is guaranteed on main-memory regions that are cacheable
+and coherent.
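+
+A non-normative sketch of the kind of code that depends on this guarantee:
+a C11 compare-and-swap retry loop, which RISC-V toolchains typically lower
+to an LR/SC sequence when the A extension is present. The function name and
+memory orderings are illustrative only, not mandated by this specification.
+
+```c
+#include <stdatomic.h>
+
+/* Fetch-and-increment built from a compare-and-swap retry loop.  On RV64
+ * with the A extension, compilers commonly expand the weak compare-exchange
+ * below into an lr.w/sc.w sequence, so termination of this loop relies on
+ * the LR/SC forward-progress guarantee for cacheable, coherent main memory
+ * described above. */
+static unsigned take_ticket(atomic_uint *counter)
+{
+    unsigned old = atomic_load_explicit(counter, memory_order_relaxed);
+
+    while (!atomic_compare_exchange_weak_explicit(
+               counter, &old, old + 1u,
+               memory_order_acq_rel, memory_order_relaxed)) {
+        /* On failure, 'old' is reloaded with the observed value; retry. */
+    }
+
+    return old;
+}
+```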
+
Unless otherwise specified by a given I/O device,
I/O regions are at least point-to-point strongly ordered.
All devices attached to a given PCIe root complex are on the same ordered
@@ -43,3 +46,58 @@ be on the same ordering channel.
On RV64I-based Unix-class systems the negative virtual addresses are
reserved for the kernel.
+
+## Misaligned Physical-Memory Access Atomicity
+
+Consider a data memory access of size *w* to physical address *p0*,
+where *w* does not evenly divide *p0*. Let *p1* denote
+the physical address of the last byte of the access, and let *P* denote the
+address pair *(p0, p1)*. There are two cases (a non-normative sketch follows
+the list):
+
+1. *P* lies within a single physical-memory region. One of the following
+ holds:
+
+ 1. Loads and stores to *P* execute atomically with respect to other
+ accesses to *P*. AMOs to *P* either execute atomically
+ with respect to other accesses to *P* or raise access
+ exceptions. LRs and SCs to *P* either execute atomically
+ with respect to other accesses to *P* or raise access exceptions.
+
+ 2. Loads and stores to *P* execute without guarantee of atomicity. AMOs,
+ LRs, and SCs to *P* raise access exceptions.
+
+2. *P* spans two physical-memory regions. AMOs, LRs, and SCs all raise access
+ exceptions. Additionally, one of the following holds:
+
+ 1. Loads and stores to *P* raise access exceptions.
+
+ 2. Loads and stores to *P* succeed without guarantee of atomicity.
+
+ 3. Loads and stores to *P* proceed partially, then raise access exceptions.
+ No register writebacks occur.
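+
+The case split above reduces to simple address arithmetic on *p0* and
+*p1 = p0 + w - 1*. The following non-normative sketch makes that concrete;
+the region-table layout and helper names are assumptions made for
+illustration, not part of this specification.
+
+```c
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdint.h>
+
+/* Hypothetical physical-memory region descriptor covering
+ * [base, base + size); the layout is illustrative only. */
+struct pm_region {
+    uint64_t base;
+    uint64_t size;
+};
+
+/* Return the region containing physical address pa, or NULL if none does. */
+static const struct pm_region *
+region_of(const struct pm_region *regions, size_t n, uint64_t pa)
+{
+    for (size_t i = 0; i < n; i++)
+        if (pa >= regions[i].base && pa - regions[i].base < regions[i].size)
+            return &regions[i];
+    return NULL;
+}
+
+/* Classify a misaligned access of width w bytes at physical address p0:
+ * true  -> case 1: the pair (p0, p1) lies within a single region;
+ * false -> case 2: the pair spans two regions (unmapped space is also
+ *          treated as "not a single region" in this sketch). */
+static bool
+misaligned_access_in_single_region(const struct pm_region *regions, size_t n,
+                                   uint64_t p0, unsigned w)
+{
+    uint64_t p1 = p0 + w - 1;                 /* address of the last byte */
+    const struct pm_region *r0 = region_of(regions, n, p0);
+
+    return r0 != NULL && r0 == region_of(regions, n, p1);
+}
+```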
+
+## Misaligned Virtual-Memory Access Atomicity
+
+Consider a data memory access of size *w* to virtual address *v0*,
+where *w* does not evenly divide *v0*. Let *v1* denote
+the virtual address of the last byte of the access. Let *p0* and
+*p1* be the physical addresses corresponding to *v0* and
+*v1*, if translations exist. One of the following must hold:
+
+1. *v0* is an impermissible virtual address; the access raises
+ a page-fault exception with trap value *v0*.
+
+2. *v0* is a permissible virtual address; *v1* lies
+ in a different, impermissible page.
+ The access raises a page-fault exception with a trap value equal
+ to the base virtual address of the page containing *v1*.
+   Alternatively, if the same access to the physical-address pair
+   *(p0, p0 + w - 1)* would have caused an access exception,
+   the implementation may raise that exception instead. (This design
+   simplifies the emulation of misaligned accesses in more-privileged
+   software; see the sketch at the end of this section.)
+
+3. *v0* and *v1* are both permissible virtual addresses.
+   The access proceeds according to the misaligned physical-memory access
+   rules above, noting that *p0* and *p1* may lie in different
+   physical-memory regions even though *v0* and *v1* are virtually
+   contiguous.
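+
+The following non-normative sketch shows one way more-privileged software
+might emulate a misaligned load after trapping on it, as alluded to above:
+the *w*-byte access is decomposed into naturally aligned single-byte loads
+and reassembled in little-endian order. The trap plumbing (decoding the
+faulting instruction, reading the trap value, writing the destination
+register, advancing the PC) is omitted, and the function name is
+illustrative, not part of this specification.
+
+```c
+#include <stdint.h>
+
+/* Emulate a misaligned load of 'width' bytes at address 'vaddr' by issuing
+ * one byte load per byte and reassembling the result little-endian.  Each
+ * constituent access is a single byte, so it is always naturally aligned
+ * and can never span two physical-memory regions. */
+static uint64_t emulate_misaligned_load(uintptr_t vaddr, unsigned width)
+{
+    const volatile uint8_t *p = (const volatile uint8_t *)vaddr;
+    uint64_t value = 0;
+
+    for (unsigned i = 0; i < width; i++)
+        value |= (uint64_t)p[i] << (8 * i);
+
+    return value;
+}
+```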