cralloc: add BatchAllocator #12
Conversation
```go
t := &b.buf[b.used]
b.used++
if b.used < batchSize {
	// Batch has more objects available, put it back into the pool.
```
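Pieced together from the quoted lines, the surrounding allocator might look roughly like the following sketch. The `batchSize` constant, the `batch` struct layout, and the `Get`/`Put` details are my assumptions for illustration, not necessarily the PR's exact code.

```go
package main

import (
	"fmt"
	"sync"
)

// batchSize is an assumed value; the real constant is defined elsewhere in the PR.
const batchSize = 16

// batch holds a fixed-size array of T plus a cursor tracking how many
// slots have been handed out so far.
type batch[T any] struct {
	buf  [batchSize]T
	used int
}

// BatchAllocator hands out *T values carved out of pooled batches,
// amortizing one heap allocation across batchSize objects.
type BatchAllocator[T any] struct {
	pool sync.Pool
}

// Alloc returns a pointer to a zeroed-or-reused slot in the current batch.
func (a *BatchAllocator[T]) Alloc() *T {
	b, ok := a.pool.Get().(*batch[T])
	if !ok {
		// Pool was empty; allocate a fresh batch.
		b = &batch[T]{}
	}
	t := &b.buf[b.used]
	b.used++
	if b.used < batchSize {
		// Batch has more objects available, put it back into the pool.
		a.pool.Put(b)
	}
	return t
}

func main() {
	var a BatchAllocator[int]
	x := a.Alloc()
	*x = 42
	fmt.Println(*x) // prints 42
}
```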
This is interesting, I've never seen that pattern. I know that pools are simply "cleared" during GC cycles. But this doesn't mean that this code is illegal, right? The object won't be GC'ed out from under us.
From what I know, a GC cycle removes all objects from the pool that have not been used since the last GC cycle. (Each GC cycle moves the objects to a "victim" pool, from which they can move back.)
Correct, `b` is not in the pool at all while we're holding it.
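A tiny illustration of my own (not code from the PR) of the point above: `Get` removes the object from the pool, so while a goroutine holds it, no other `Get` can hand out the same object.

```go
package main

import (
	"fmt"
	"sync"
)

// roundTrip puts one object into a pool, takes it back out, and reports
// (a) whether Get returned the very same object and (b) whether the pool
// is empty afterwards.
func roundTrip() (sameObject, emptyAfter bool) {
	var p sync.Pool
	b := new([4]int)
	p.Put(b)

	got := p.Get()              // b leaves the pool here; only we reference it now
	sameObject = got == any(b)  // same object we put in
	emptyAfter = p.Get() == nil // no New function set, so an empty pool yields nil
	return
}

func main() {
	same, empty := roundTrip()
	fmt.Println(same, empty)
}
```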
```go
// ...
// x := someTypeBatchAlloc.Alloc()
type BatchAllocator[T any] struct {
	pool sync.Pool
```
Interesting to me to see a `sync.Pool` here. Could you add a comment about this choice? I think the benefits are avoiding a mutex, and good locality for which socket the memory is accessed from? Anything else?
We don't control how many batches are going to be instantiated at any given point (usually on the order of GOMAXPROCS, but there's no strict limit), which could be a concern, though likely a theoretical one unless someone decides to use this with a very large `T`.
Adding a comment. Even though it's not its intended functionality, `sync.Pool` can be used to approximate per-CPU structures (some discussion in golang/go#8281 and golang/go#18802).
Any unused objects from the pool will go away in two GC cycles, so we shouldn't hold on to them under memory pressure.
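The two-cycle cleanup can be observed directly. This is my own illustration of the runtime behavior described above, not code from the PR, and it relies on current (Go 1.13+) `sync.Pool` internals: a pooled object survives one GC in the pool's "victim" cache and is dropped on the second.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// gcBehavior reports whether a pooled object (a) is still retrievable
// after one GC cycle, via the victim cache, and (b) is gone after two.
func gcBehavior() (afterOneGC, afterTwoGCs bool) {
	p := sync.Pool{New: func() any { return new(int) }}
	x := new(int)

	p.Put(x)
	runtime.GC() // first cycle: x moves to the victim cache
	afterOneGC = p.Get() == any(x)

	p.Put(x)
	runtime.GC()
	runtime.GC() // second cycle: the victim cache is dropped
	afterTwoGCs = p.Get() == any(x) // New supplies a fresh *int instead
	return
}

func main() {
	one, two := gcBehavior()
	fmt.Println(one, two) // true false with the current runtime
}
```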
Force-pushed from 3b0e720 to f45fe30.
Add a batch allocator which reduces the number of individual object allocations.

```
name                        time/op
BatchAllocator/Baseline-10  1.99µs ± 3%
BatchAllocator/Batched-10   1.78µs ± 0%

name                        alloc/op
BatchAllocator/Baseline-10  2.40kB ± 0%
BatchAllocator/Batched-10   2.60kB ± 0%

name                        allocs/op
BatchAllocator/Baseline-10  100  ± 0%
BatchAllocator/Batched-10   12.0 ± 0%
```
Force-pushed from f45fe30 to 66e953f.
TFTR!
Add a batch allocator which reduces the number of individual object
allocations.