Import mutex, semaphore, and condition variable from LWSP #8455
Conversation
One or more of the commits in this PR do not match the code submission policy, please check the …
Reviewing locking/syncing code is kinda nerve-wracking for me, but here's a bunch of comments regardless :P
These are convenient wrappers over the most commonly used futex operations. futex_wait() also does some smarts for timeout and clock handling. Use the new futex_wait() instead of a similar private helper in LibPthread.
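
Not Serenity's actual LibC code, but as a rough illustration, here is what such wrappers can look like when sketched against Linux's raw futex(2) syscall; the real futex_wait() additionally deals with absolute timeouts and clock selection:

```cpp
// A hedged sketch of convenience wrappers over futex(2), written against
// Linux's raw syscall rather than Serenity's actual LibC API.
#include <cstdint>
#include <ctime>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

// Sleep until *addr no longer equals expected_value, a wake arrives, or the
// (relative) timeout expires. Returns 0 on wakeup, -1 with errno set otherwise.
static int futex_wait(uint32_t* addr, uint32_t expected_value, const timespec* relative_timeout = nullptr)
{
    return static_cast<int>(syscall(SYS_futex, addr, FUTEX_WAIT | FUTEX_PRIVATE_FLAG,
        expected_value, relative_timeout, nullptr, 0));
}

// Wake up to `count` threads sleeping on addr; returns how many were woken.
static int futex_wake(uint32_t* addr, int count)
{
    return static_cast<int>(syscall(SYS_futex, addr, FUTEX_WAKE | FUTEX_PRIVATE_FLAG,
        count, nullptr, nullptr, 0));
}
```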
pthread_mutex is now an actual "sleeping" mutex, and not just a spinlock! It still has a fast path that only uses atomics and (in the successful case) returns immediately without sleeping. In case of contention, it calls futex_wait(), which lets the kernel scheduler put this thread to sleep, *and* lets it know exactly when to consider scheduling it again.
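
As an illustrative sketch (in the style of Drepper's "Futexes Are Tricky", not the PR's actual code), a minimal version of that idea looks like this, reusing the futex_wait()/futex_wake() wrappers sketched above, with 0 = unlocked, 1 = locked, 2 = locked with possible waiters:

```cpp
#include <atomic>
#include <cstdint>

enum : uint32_t { Unlocked = 0, Locked = 1, LockedWithWaiters = 2 };

struct Mutex {
    std::atomic<uint32_t> state { Unlocked };

    void lock()
    {
        uint32_t expected = Unlocked;
        // Fast path: a single CAS, no syscall, returns immediately when uncontended.
        if (state.compare_exchange_strong(expected, Locked, std::memory_order_acquire))
            return;
        // Slow path: mark the mutex as contended, then let the kernel scheduler
        // put us to sleep until the holder's unlock() wakes us.
        do {
            if (expected == LockedWithWaiters
                || state.compare_exchange_strong(expected, LockedWithWaiters, std::memory_order_acquire))
                futex_wait(reinterpret_cast<uint32_t*>(&state), LockedWithWaiters);
            expected = Unlocked;
        } while (!state.compare_exchange_strong(expected, LockedWithWaiters, std::memory_order_acquire));
    }

    void unlock()
    {
        // Only pay for the wake syscall if somebody might actually be sleeping.
        if (state.exchange(Unlocked, std::memory_order_release) == LockedWithWaiters)
            futex_wake(reinterpret_cast<uint32_t*>(&state), 1);
    }
};
```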
This is a private function that locks the lock much like the regular pthread_mutex_lock(), but causes the corresponding unlock operation to always assume there may be other waiters. This is useful in case some waiters are made to wait on the mutex's futex directly, without going through pthread_mutex_lock(). This is going to be used by the condition variable implementation in the next commit.
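
On the same three-state encoding, the idea amounts to acquiring the lock straight into the "locked with waiters" state; a hedged sketch, not the actual body of the PR's private helper:

```cpp
// Acquire `state` directly into LockedWithWaiters, so that the matching
// unlock() always calls futex_wake() -- even for waiters that went to sleep
// on the mutex's futex without ever going through lock().
void lock_pessimistic(std::atomic<uint32_t>& state)
{
    uint32_t expected = Unlocked;
    while (!state.compare_exchange_strong(expected, LockedWithWaiters, std::memory_order_acquire)) {
        // The lock is held: make sure the holder knows about waiters, then sleep.
        if (expected == LockedWithWaiters
            || state.compare_exchange_strong(expected, LockedWithWaiters, std::memory_order_acquire))
            futex_wait(reinterpret_cast<uint32_t*>(&state), LockedWithWaiters);
        expected = Unlocked;
    }
}
```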
This implementation features a fast path for pthread_cond_signal() and pthread_cond_broadcast() for the case where no thread is waiting, and does not exhibit the "thundering herd" issue in pthread_cond_broadcast(). Fixes SerenityOS#8432
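
A sketch of how such a fast path can work (an illustration under assumptions, not the LibPthread code; the waiter side, which sets and clears the flag, is elided):

```cpp
#include <atomic>
#include <climits>
#include <cstdint>

struct CondVar {
    // Waiters set NEED_TO_WAKE before sleeping on this word; the last waiter
    // to leave clears it (that part of the protocol is elided here).
    static constexpr uint32_t NEED_TO_WAKE = 1;
    std::atomic<uint32_t> value { 0 };

    void signal()
    {
        // Fast path: nobody is waiting, so skip the syscall entirely.
        if (!(value.load(std::memory_order_relaxed) & NEED_TO_WAKE))
            return;
        futex_wake(reinterpret_cast<uint32_t*>(&value), 1);
    }

    void broadcast()
    {
        if (!(value.load(std::memory_order_relaxed) & NEED_TO_WAKE))
            return;
        // To avoid the thundering herd, a real broadcast typically wakes one
        // waiter and *requeues* the rest onto the associated mutex's futex
        // instead of waking them all at once; here we naively wake everyone.
        futex_wake(reinterpret_cast<uint32_t*>(&value), INT_MAX);
    }
};
```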
This implementation does not use locking or condition variables internally; it's purely based on atomics and futexes. Notably, concurrent sem_wait() and sem_post() calls can run *completely in parallel* without slowing each other down, as long as there are empty slots for them all to succeed without blocking. Additionally, sem_wait() never executes an atomic operation with release ordering, and sem_post() never executes an atomic operation with acquire ordering (unless you count the syscall). This means the compiler and the hardware are free to reorder code *into* the critical section.
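
For flavor, a hedged sketch of a semaphore along these lines, again reusing the futex wrappers from above; it is simplified (for one thing, it never clears the POST_WAKES bit once set, which the real code handles):

```cpp
#include <atomic>
#include <cstdint>

static constexpr uint32_t POST_WAKES = 1u << 31; // "someone may be asleep" bit

struct Sem {
    std::atomic<uint32_t> value; // low 31 bits: count; top bit: POST_WAKES
};

void sem_wait(Sem* sem)
{
    uint32_t v = sem->value.load(std::memory_order_relaxed);
    for (;;) {
        if ((v & ~POST_WAKES) > 0) {
            // A slot is free: grab it with acquire ordering only, so the
            // compiler/CPU may move code *into* the critical section but not out.
            if (sem->value.compare_exchange_weak(v, v - 1, std::memory_order_acquire))
                return;
            continue; // v was refreshed by the failed CAS
        }
        // Empty: advertise a sleeper, then wait while the word still says
        // "empty with sleepers".
        if (!(v & POST_WAKES)) {
            if (!sem->value.compare_exchange_weak(v, POST_WAKES, std::memory_order_relaxed))
                continue;
            v = POST_WAKES;
        }
        futex_wait(reinterpret_cast<uint32_t*>(&sem->value), POST_WAKES);
        v = sem->value.load(std::memory_order_relaxed);
    }
}

void sem_post(Sem* sem)
{
    // Release a slot with release ordering only (the wake syscall aside).
    uint32_t value = sem->value.fetch_add(1, std::memory_order_release);
    if (!(value & POST_WAKES)) [[likely]]
        return; // fast path: nobody can be asleep, no syscall needed
    futex_wake(reinterpret_cast<uint32_t*>(&sem->value), 1);
}
```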
This all looks better than what we have right now, so let's get it merged.
```cpp
if (!(value & POST_WAKES)) [[likely]]
    return 0;
int rc = futex_wake(&sem->value, 1);
VERIFY(rc == 0);
```
Ok, this is stupid, and gives away that I haven't actually tested the Serenity version besides seeing that it boots 😀
Will submit a fix soon.
Seems like after reverting these changes, booting in x86_64 mode works OK for me. I tried to figure out why, but to be honest, I really don't know what is going wrong with this now.
This is a rewrite of mutexes, condition variables, and semaphores based on code from Let's write synchronization primitives (LWSP) 🎉 We've already had a trial run with importing the once primitive in #4158, and it's been a success, so let's do more.
The rest of this paper is organized as follows... err, I mean, let's talk about some details and caveats though.
The semaphore implementation is complex and hard to understand. The good things are the ones listed in the semaphore commit message quoted above.
The condition variable is simpler, and the code is also commented. However, …
I did not replace the rwlocks implementation with the one from LWSP. The existing one, which @alimpfard says he wrote after having taken a look at LWSP, seems sophisticated enough. He says it "is pretty damn dumb, and probably has more bugs than features. But for the time being, it seems to work. however, we should definitely replace it with a good implementation sometime very soon". But I don't know, I don't feel like ripping it out and replacing it with my own version. I'd at least need to carefully read it & understand how it works first.
cc: `sem_post` and `sem_wait` #7645