-
Configuration
-
FLI=log2(pool_size) - log2(GRANULARITY) = 16
-
SLI=4
-
GRANULARITY = 4 * 4 = 16 = 2^4 bytes (size_of::<usize>() * 4, assuming a 32-bit usize)
-
pool_size = 1 MiB = 1024^2 = 2^20 bytes (consistent with FLI = 20 - 4 = 16 above)
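-
The same configuration spelled out as constants (a sketch for reference; FLLEN/SLLEN are the table-length names used by the code below):
const GRANULARITY: usize = 16;   // size_of::<usize>() * 4 on a 32-bit target
const GRANULARITY_LOG2: u32 = 4; // log2(GRANULARITY)
const SLI: u32 = 4;              // log2 of the second-level table length
const SLLEN: usize = 1 << SLI;   // = 16 second-level lists per first-level class
const FLLEN: usize = 16;         // = FLI = log2(pool_size) - GRANULARITY_LOG2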
-
fn map_ceil(size: usize) -> Option<(usize, usize)> {
    debug_assert!(size >= GRANULARITY);
    debug_assert!(size % GRANULARITY == 0);
    let mut fl = usize::BITS - GRANULARITY_LOG2 - 1 - size.leading_zeros();
    // The shift amount can be negative, and rotation lets us handle both
    // cases without branching.
    let mut sl = size.rotate_right((fl + GRANULARITY_LOG2).wrapping_sub(Self::SLI));
    // The most significant one of `size` should be now at `sl[SLI]`
    debug_assert!(((sl >> Self::SLI) & 1) == 1);
    // Underflowed digits appear in `sl[SLI + 1..usize::BITS]`. They should
    // be rounded up
    sl = (sl & (SLLEN - 1)) + (sl >= (1 << (Self::SLI + 1))) as usize;
    // if sl[SLI] { fl += 1; sl = 0; }
    fl += (sl >> Self::SLI) as u32;
    // `fl` must be in a valid range
    if fl as usize >= FLLEN {
        return None;
    }
    Some((fl as usize, sl & (SLLEN - 1)))
}
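-
A worked example under this configuration (GRANULARITY_LOG2 = 4, SLI = 4, SLLEN = 16, 32-bit usize), computed by hand to show the rounding-up step:
// map_ceil(528): 528 = 0b10_0001_0000, a multiple of GRANULARITY
// fl = 32 - 4 - 1 - 22 (leading zeros) = 5
// rotate_right by (5 + 4) - 4 = 5 puts the most significant one at bit SLI:
//   sl = 0b10000, plus the wrapped-around low digits sitting in the high bits
// sl & (SLLEN - 1) = 0, but sl >= 1 << (SLI + 1), so round up: sl = 1
// fl += (1 >> 4) = 0, giving (5, 1): the 544-byte class, the first list
// whose every block is guaranteed to be >= 528 bytes
// hence: map_ceil(528) == Some((5, 1))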
-
main struct:
tlsf::Tlsf
-
fl_bitmap, sl_bitmap, freelist
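-
A sketch of the struct's shape inferred from those fields (names follow this note; the actual definition may differ in details):
use core::{marker::PhantomData, ptr::NonNull};

struct FreeBlockHdr; // stub; see the block header section below

struct Tlsf<'pool, FLBitmap, SLBitmap, const FLLEN: usize, const SLLEN: usize> {
    fl_bitmap: FLBitmap,          // bit fl set iff sl_bitmap[fl] != 0
    sl_bitmap: [SLBitmap; FLLEN], // bit sl of sl_bitmap[fl] set iff freelist[fl][sl] is non-empty
    freelist: [[Option<NonNull<FreeBlockHdr>>; SLLEN]; FLLEN],
    _phantom: PhantomData<&'pool ()>,
}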
-
-
allocation granularity:
size_of::<usize>() * 4
-
free blocks are aligned to ↑
-
-
block header:
tlsf::BlockHdr
-
From the quote below (↓): the last block in a pool always has both `SIZE_LAST_IN_POOL` and `SIZE_USED` set
-
The end of each memory pool is capped by a sentinel block (a permanently occupied block) instead of a normal block with a last-block-in-pool flag. This simplifies the code a bit and improves its worst-case performance and code size.
-
BlockHdr::next_phys_block
: assumes that a next block exists
-
BlockHdr
serves as the common part of both header types
-
FreeBlockHdr
: doubly linked freelist node
-
UsedBlockHdr
: just a BlockHdr
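-
Sketch of the three header types described above (field names follow rlsf; illustrative only):
use core::ptr::NonNull;

struct BlockHdr {
    // The low bits double as the SIZE_USED / SIZE_LAST_IN_POOL flags, which
    // is why the sentinel at the end of a pool carries both.
    size: usize,
    prev_phys_block: Option<NonNull<BlockHdr>>, // None for the first block in a pool
}

struct FreeBlockHdr {
    common: BlockHdr,
    next_free: Option<NonNull<FreeBlockHdr>>, // doubly linked freelist
    prev_free: Option<NonNull<FreeBlockHdr>>,
}

struct UsedBlockHdr {
    common: BlockHdr, // a used block carries only the common part
}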
-
-
Tlsf::new
-
parameters are initialized via const generics ← for RefinedRust this will likely have to be rewritten as dynamic initialization
-
the const parameters are validated at compile time, so it should be enough to verify a suitable dynamic initializer and then wrap it in a trait that uses const generics (sketch below)
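-
A hypothetical sketch of that plan (DynTlsf and its checks are made up for illustration): verify a runtime-checked initializer once, then restore the compile-time API on top of it:
struct DynTlsf {
    fllen: usize,
    sllen: usize,
}

impl DynTlsf {
    // Dynamic initialization with runtime validation; this is the part
    // that would actually be verified with RefinedRust.
    fn try_new(fllen: usize, sllen: usize) -> Option<Self> {
        if fllen == 0 || sllen == 0 || sllen > usize::BITS as usize {
            return None;
        }
        Some(Self { fllen, sllen })
    }
}

// Thin const-generics wrapper restoring the compile-time interface.
struct Tlsf<const FLLEN: usize, const SLLEN: usize>(DynTlsf);

impl<const FLLEN: usize, const SLLEN: usize> Tlsf<FLLEN, SLLEN> {
    fn new() -> Self {
        // Cannot fail for parameters that would also pass the
        // const-generics compile-time validation.
        Self(DynTlsf::try_new(FLLEN, SLLEN).expect("invalid parameters"))
    }
}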
-
-
-
Tlsf::map_floor(size: usize)
-
finds the appropriate freelist for `size`
-
-
computing the first-level index:
let fl = usize::BITS - GRANULARITY_LOG2 - 1 - size.leading_zeros();
-
computing the second-level index:
let sl = size.rotate_right((fl + GRANULARITY_LOG2).wrapping_sub(Self::SLI));
The shift amount can be negative, and rotation lets us handle both cases without branching. Underflowed digits can be simply masked out in `map_floor`.
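-
Assembling the two computations above, map_floor plausibly looks like this (a sketch mirroring map_ceil, minus the rounding-up step):
fn map_floor(size: usize) -> Option<(usize, usize)> {
    debug_assert!(size >= GRANULARITY);
    debug_assert!(size % GRANULARITY == 0);
    let fl = usize::BITS - GRANULARITY_LOG2 - 1 - size.leading_zeros();
    let sl = size.rotate_right((fl + GRANULARITY_LOG2).wrapping_sub(Self::SLI));
    // The most significant one of `size` should now be at `sl[SLI]`
    debug_assert!(((sl >> Self::SLI) & 1) == 1);
    if fl as usize >= FLLEN {
        return None;
    }
    // Underflowed digits are simply masked out (floor, not ceil)
    Some((fl as usize, sl & (SLLEN - 1)))
}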
-
Tlsf::map_ceil
: finds a freelist in which every item is at least the requested size
-
Tlsf::map_ceil_and_unmap
: finds a freelist in which every item is at least the requested size, and returns the size of that list's smallest element
-
Tlsf::link_free_block
: computes (fl, sl) with map_floor and inserts the new block into that list
-
Tlsf::unlink_free_block
: removes a block, given a NonNull pointer to it and its size
-
Tlsf::insert_free_block_ptr
: adds a memory pool given a pointer to a slice
-
Tlsf::insert_free_block_ptr_aligned
-
Tlsf::append_free_block_ptr
:
In the current implementation, this method can coalesce memory pools only if the maximum pool size is outside the range of `usize`, i.e., `log2(GRANULARITY) + FLLEN >= usize::BITS`. This is because it does not track each pool's size and cannot check whether the resulting pool will have a valid size.
-
Tlsf::insert_free_block
: safe wrapper; `&mut` gives exclusive access, and the `'pool` lifetime makes the pool outlive `self`
-
Tlsf::pool_size_to_contain_allocation
: the smallest GRANULARITY-aligned memory pool size for which an allocation of the given layout succeeds
-
Tlsf::allocate
: when `layout.align` is smaller than GRANULARITY, it is ultimately validated by comparing the pointer after alignment
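-
A minimal usage sketch tying these methods together (follows rlsf's public API; exact signatures approximate):
use core::{alloc::Layout, mem::MaybeUninit};
use rlsf::Tlsf;

fn demo() {
    // The pool must outlive the allocator ('pool outlives self)
    let mut pool = [MaybeUninit::<u8>::uninit(); 65536];
    let mut tlsf: Tlsf<'_, u16, u16, 12, 16> = Tlsf::new();
    tlsf.insert_free_block(&mut pool);

    let layout = Layout::from_size_align(1024, 64).unwrap();
    let ptr = tlsf.allocate(layout).expect("allocation failed");
    unsafe { tlsf.deallocate(ptr, layout.align()) };
}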
-
After choosing a free block, we need to adjust the payload's location to meet the alignment requirement. Every block is aligned to `GRANULARITY` bytes. `size_of::<UsedBlockHdr>` is `GRANULARITY / 2` bytes, so the address immediately following `UsedBlockHdr` is only aligned to `GRANULARITY / 2` bytes. Consequently, we need to insert a padding containing at most `max(align - GRANULARITY / 2, 0)` bytes.
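-
Concretely, under the 32-bit configuration above: GRANULARITY = 16 and size_of::<UsedBlockHdr>() = 8, so a request with align = 64 needs at most max(64 - 8, 0) = 56 bytes of padding.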
// Invariant: No two adjacent free blocks
debug_assert!((next_phys_block.as_ref().size & SIZE_USED) != 0);
next_phys_block.as_mut().prev_phys_block = Some(new_free_block.cast());
-
Tlsf::deallocate_block
: takes the used block header and returns the block to a freelist
-
coalesces it with the preceding and following free blocks
-
the low 2 bits of `size` are not used by the size itself, so the flags can be inherited simply through addition
// It's coalescable. Add its size to `size`. This will transfer
// any `SIZE_LAST_IN_POOL` flag `next_phys_block` may have at
// the same time.
size += next_phys_block_size;
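-
A small demonstration of why plain addition works (bit positions assumed for illustration: SIZE_USED = 0b01, SIZE_LAST_IN_POOL = 0b10; sizes are multiples of GRANULARITY, so the flag bits never carry into the size digits):
let size: usize = 0b0110_0000;                        // this block: 96 bytes, flags already cleared
let next_phys_block_size: usize = 0b0100_0000 | 0b10; // neighbor: 64 bytes + SIZE_LAST_IN_POOL
let merged = size + next_phys_block_size;
assert_eq!(merged, 0b1010_0000 | 0b10);               // 160 bytes, flag inherited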