Fix rare crash when trying to call c4socket_gotHTTPResponse on sock_id… #55

Closed

Conversation

@CedricCouton commented Dec 23, 2020

… with the connection already closed

Tries to fix #54

@Dushistov (Owner) commented:

Sorry for the delay, I was extremely busy with other projects.

If I understand you correctly, you call restart_replicator with a very short delay, so the close is initiated on the client, not on the server, and we get this situation:

  1. ws_open returns control back, but spawns async work to finish initialization
  2. ws_request_close closes the socket before the async work started by ws_open has had a chance to finish

Did I get that right?
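In code, the suspected ordering would look roughly like this (a minimal self-contained sketch assuming a tokio executor; Socket, do_async_init and close_socket are hypothetical stand-ins, not the real API):

    use std::sync::Arc;

    struct Socket;

    async fn do_async_init(_s: &Socket) { /* finish the handshake, register callbacks */ }
    async fn close_socket(_s: &Socket) { /* tear down the native socket */ }

    async fn ws_open(socket: Arc<Socket>) {
        // returns control immediately; initialization continues in the background
        tokio::spawn(async move { do_async_init(&socket).await; });
    }

    async fn ws_request_close(socket: Arc<Socket>) {
        // if this runs before do_async_init finishes, the socket is torn
        // down under the init task, which matches the crash in #54
        close_socket(&socket).await;
    }

    #[tokio::main]
    async fn main() {
        let s = Arc::new(Socket);
        ws_open(s.clone()).await;
        ws_request_close(s).await; // may race with the spawned init task
    }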

Then such a construction can only mask the problem, not solve it:
if !*socket.closed.lock().await {

I mean, the mutex on this line is locked and unlocked all within that single line of code. So if a close happens between lines 365 and 379, this would still be bad, right?
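A sketch of why the one-line lock does not help (only the Mutex<bool> closed flag comes from the PR; deliver_response and the commented-out call are hypothetical):

    use std::sync::Arc;
    use tokio::sync::Mutex;

    // hypothetical caller of the C callback guarded by the `closed` flag
    async fn deliver_response(closed: Arc<Mutex<bool>>) {
        if !*closed.lock().await {
            // the MutexGuard is a temporary: it is dropped as soon as the
            // condition has been evaluated, before this block even starts,
            // so a concurrent ws_request_close can set `closed` and free the
            // native socket right here, and the check above proves nothing
            // unsafe { c4socket_gotHTTPResponse(sock_id, ...) } // may hit freed memory
        }
    }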

Maybe it would be better to have some kind of barrier in the async part of ws_open, so that the async part of ws_request_close waits until we reach the 'read_loop: loop. Maybe a busy loop, like:

init_done: Arc<AtomicBool>,

open_connection:
    init_done.store(true, Ordering::Release);

ws_request_close:
    while !init_done.load(Ordering::Acquire) {
    }
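Wired up, that could look roughly like this (a sketch, not the actual implementation; SocketState is a placeholder, and the yield_now is my assumption, to avoid starving the init task on a single-threaded executor):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;

    // placeholder for whatever state ws_open and ws_request_close share
    struct SocketState {
        init_done: AtomicBool,
    }

    async fn open_connection(state: Arc<SocketState>) {
        // ... async part of ws_open runs up to 'read_loop: loop { ...
        state.init_done.store(true, Ordering::Release);
    }

    async fn ws_request_close(state: Arc<SocketState>) {
        // busy-wait until the async part of ws_open has finished initializing
        while !state.init_done.load(Ordering::Acquire) {
            tokio::task::yield_now().await; // give the init task a chance to run
        }
        // ... now it is safe to actually close the socket ...
    }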

@Dushistov mentioned this pull request Nov 15, 2021
Successfully merging this pull request may close these issues:

Panic Cause: null pointer dereference in c4socket_gotHTTPResponse (#54)