MPMC 2-stack queue #112
Conversation
Out of curiosity I tried to test whether one could improve the performance of this queue on Opteron by writing to the cache line (non-atomically) before reading from it, to avoid the cache line being put into the shared/owned mode (assuming that is the problem). I made the following changes to the code:

modified src_lockfree/two_stack_queue.ml
@@ -12,7 +12,17 @@
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE. *)
-module Atomic = Transparent_atomic
+module Atomic = struct
+ include Transparent_atomic
+
+ let[@inline] get_for_set (x : _ t) =
+ Array.unsafe_set (Obj.magic x : int array) 6 0; (* NOTE: A padded atomic! *)
+ get x
+
+ let[@inline] fenceless_get_for_set (x : _ t) =
+ Array.unsafe_set (Obj.magic x : int array) 6 0; (* NOTE: A padded atomic! *)
+ fenceless_get x
+end
type 'a t = { head : 'a head Atomic.t; tail : 'a tail Atomic.t }
@@ -67,7 +77,7 @@ let rec push t value backoff = function
if move != Obj.magic () then begin
let (Snoc move_r) = move in
begin
- match Atomic.get t.head with
+ match Atomic.get_for_set t.head with
| H (Head head_r as head) when head_r.counter < move_r.counter ->
let after = rev move in
if
@@ -88,9 +98,10 @@ and push_with t value backoff counter prefix =
if new_tail != prefix then push t value backoff new_tail
else if not (Atomic.compare_and_set t.tail prefix (T after)) then
let backoff = Backoff.once backoff in
- push t value backoff (Atomic.fenceless_get t.tail)
+ push t value backoff (Atomic.fenceless_get_for_set t.tail)
-let push t value = push t value Backoff.default (Atomic.fenceless_get t.tail)
+let push t value =
+ push t value Backoff.default (Atomic.fenceless_get_for_set t.tail)
exception Empty
@@ -103,7 +114,7 @@ let rec pop_as : type a r. a t -> _ -> (a, r) poly -> a head -> r =
match poly with Value -> cons_r.value | Option -> Some cons_r.value
else
let backoff = Backoff.once backoff in
- pop_as t backoff poly (Atomic.fenceless_get t.head)
+ pop_as t backoff poly (Atomic.fenceless_get_for_set t.head)
| H (Head head_r as head) -> begin
match Atomic.fenceless_get t.tail with
| T (Snoc snoc_r as move) ->
@@ -112,14 +123,14 @@ let rec pop_as : type a r. a t -> _ -> (a, r) poly -> a head -> r =
match poly with
| Value -> snoc_r.value
| Option -> Some snoc_r.value
- else pop_as t backoff poly (Atomic.fenceless_get t.head)
+ else pop_as t backoff poly (Atomic.fenceless_get_for_set t.head)
else
let tail = Tail { counter = snoc_r.counter; move } in
let new_head = Atomic.get t.head in
if new_head != H head then pop_as t backoff poly new_head
else if Atomic.compare_and_set t.tail (T move) (T tail) then
pop_moving_as t backoff poly head move tail
- else pop_as t backoff poly (Atomic.fenceless_get t.head)
+ else pop_as t backoff poly (Atomic.fenceless_get_for_set t.head)
| T (Tail tail_r as tail) ->
let move = tail_r.move in
if move == Obj.magic () then pop_emptyish_as t backoff poly head
@@ -148,7 +159,7 @@ and pop_moving_as :
end
else
let backoff = Backoff.once backoff in
- pop_as t backoff poly (Atomic.fenceless_get t.head)
+ pop_as t backoff poly (Atomic.fenceless_get_for_set t.head)
else pop_emptyish_as t backoff poly head
and pop_emptyish_as : type a r. a t -> _ -> (a, r) poly -> (a, _) tdt -> r =
@@ -158,8 +169,10 @@ and pop_emptyish_as : type a r. a t -> _ -> (a, r) poly -> (a, _) tdt -> r =
match poly with Value -> raise_notrace Empty | Option -> None
else pop_as t backoff poly new_head
-let pop t = pop_as t Backoff.default Value (Atomic.fenceless_get t.head)
-let pop_opt t = pop_as t Backoff.default Option (Atomic.fenceless_get t.head)
+let pop t = pop_as t Backoff.default Value (Atomic.fenceless_get_for_set t.head)
+
+let pop_opt t =
+    pop_as t Backoff.default Option (Atomic.fenceless_get_for_set t.head)

The results are not entirely conclusive: the 1st result from the right is from after dropping the experiment, and the 6th result from the right is the first result with some of the extra writes. Using eyeball statistics, it would seem there is potentially some improvement. Of course, performing an actual write to memory is quite different from prefetching a cache line in anticipation of a write.
I agree; that CPU is no longer supported by its vendor. See https://www.amd.com/en/support/download/drivers.html and https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/log/amd-ucode/README?showmsg=1, and notice the lack of microcode updates since 2018: https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/log/amd-ucode/microcode_amd_fam15h.bin.asc. I'd suggest using at least a Zen 1 CPU for AMD and Skylake for Intel.
The short comparison here doesn't show confidence intervals, but rerunning it shows similar variation on master:
Yes, the "statistics" in multicore-bench are very rudimentary. The idea has been to move that logic to the benchmarking service/frontend and just have the benchmarking db store the raw results. That way it should be possible to view and analyze the data in multiple ways. I'm not sure at what point we might get around to doing that, however.
This PR implements an MPMC queue using two stacks. This is a new lock-free queue algorithm / data structure. It uses two stacks for the tail and the head of the queue. Operations on the tail (pushes) and head (pops) work as in a Treiber stack. A simple lock-free algorithm is used to transfer elements from the tail to the head after a pop is attempted on an empty head (and the tail is non-empty).
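To illustrate the basic idea (not the PR's actual lock-free transfer protocol), here is a minimal sketch in OCaml: pushes CAS onto a tail stack, pops CAS off a head stack, and an empty head is refilled by atomically taking the whole tail and reversing it. The names and the simplified refill are hypothetical; the real algorithm coordinates the transfer between concurrent poppers lock-free, which this sketch does not.

```ocaml
(* Two-stack queue sketch: [tail] receives pushes, [head] serves pops.
   Uses OCaml 5's stdlib [Atomic]. *)
type 'a t = { head : 'a list Atomic.t; tail : 'a list Atomic.t }

let create () = { head = Atomic.make []; tail = Atomic.make [] }

(* Treiber-style push: retry the CAS until we install our cons cell. *)
let rec push t x =
  let old = Atomic.get t.tail in
  if not (Atomic.compare_and_set t.tail old (x :: old)) then push t x

let rec pop_opt t =
  match Atomic.get t.head with
  | x :: rest as old ->
      (* Treiber-style pop from the head stack. *)
      if Atomic.compare_and_set t.head old rest then Some x else pop_opt t
  | [] ->
      (* Head is empty: atomically take the tail and move it over,
         reversed, so elements come out in FIFO order. *)
      let moved = Atomic.exchange t.tail [] in
      match List.rev moved with
      | [] -> None
      | x :: rest ->
          (* Simplification: a concurrent refill could clobber this
             [set]; the PR's algorithm avoids that without locks. *)
          Atomic.set t.head rest;
          Some x
```

A quick single-domain usage example:

```ocaml
let () =
  let q = create () in
  push q 1; push q 2; push q 3;
  assert (pop_opt q = Some 1);
  assert (pop_opt q = Some 2);
  assert (pop_opt q = Some 3);
  assert (pop_opt q = None)
```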
The interesting feature of this queue is that it seems to outperform an optimized Michael-Scott queue (see #122) on many machines. Here are results from a benchmark run on my M3 Max:
As one can see, the (median) throughput is substantially higher than that of the optimized Michael-Scott queue (see #122), and that seems to be the case on most machines I've tested this on. Most interestingly, however, on the "fermat" machine we use for benchmarking, the Michael-Scott queue seems to perform better. See my comment below for a possible explanation.