
Detect latency? #57

Open
christianbundy opened this issue Dec 11, 2019 · 4 comments

Comments

@christianbundy
Contributor

In the future I'd like to add some sort of debounce to streams like this one, but it would be important to be able to tell what sort of latency I have with the stream reader. For example, when peers are on the same LAN the debounce should be low, but if they're on different continents it would be very beneficial to omit items that would be redundant.
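To make the idea concrete, here's a minimal sketch of picking a debounce interval from a measured round-trip time. The function name and thresholds are hypothetical, not from any existing SSB module:

```javascript
// Hypothetical helper: choose a debounce interval (ms) from a
// measured round-trip time. Thresholds are illustrative only.
function debounceFor (rttMs) {
  if (rttMs < 5) return 0     // same machine: pass updates through raw
  if (rttMs < 50) return 100  // same LAN: light debounce
  return 1000                 // remote peer: batch aggressively
}
```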

cc: @dominictarr

@dominictarr
Contributor

that would be interesting. we do have a not-very-good latency/ping thing in the gossip plugin.
I've considered putting that in a new version of box-stream too, because at that level it can be useful for keeping the stream alive and actually knowing whether it's still connected.
Oh, and to get backpressure working on muxrpc, we need some sort of latency detection.
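The basic shape of a ping-style latency check (a sketch only, not the gossip plugin's actual code) is: send a timestamped ping, have the peer echo back, and take the difference on arrival:

```javascript
// Sketch of round-trip latency measurement. `sendPing` is a
// hypothetical function that calls its callback when the peer's
// echo arrives; the RTT is just now minus the send time.
function measureLatency (sendPing, cb) {
  const sent = Date.now()
  sendPing(() => cb(Date.now() - sent))
}
```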

Adding latency measurement to a particular api means it needs to be a duplex stream. Instead of adding that to a particular stream... I think it would be better to just make the amount of debounce an option on that api.

For that particular stream, I don't think it's just about the latency, it's also that too frequent updates just don't matter. As long as the software behaves well at the human scale.

https://www.humanbenchmark.com/tests/reactiontime

my reaction time was 255 ms, so anything faster than that probably wouldn't even be noticed.

Oh, and for that stream, the use case is client-side views: local server and application client on the same machine. the latency is gonna be very low, but I think it would still work fine for the user if the debounce was ~1 second.
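A debounce at that rate would drop intermediate updates and deliver only the most recent one after a quiet period. A minimal sketch (real SSB streams are pull-streams, but the timing logic is the same; the scheduler parameters are injectable so the behavior can be tested without waiting):

```javascript
// Trailing-edge debounce: every call records the latest value and
// resets the timer; `emit` fires with that value once `ms` passes
// with no new calls. schedule/cancel default to the real timers.
function debounce (ms, emit, schedule = setTimeout, cancel = clearTimeout) {
  let timer = null
  let latest
  return function (value) {
    latest = value
    if (timer !== null) cancel(timer)
    timer = schedule(() => { timer = null; emit(latest) }, ms)
  }
}
```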

@christianbundy
Contributor Author

Thanks for the info! I agree that we can allow for latency with humans, especially in apps like Patchwork, but I was thinking about this because even tiny latency (50 ms) breaks tests (which forces you to add artificial delays). Maybe I'm really just looking for one bit of "should I even bother with a debounce" information, so that I can either expose the raw stream or add some debounce when the latency is non-negligible.

@dominictarr
Contributor

Oh, okay, maybe what you need is a way to do a write, then call a view, and have that view callback be consistent with your write. On server-side flume, that works because the main log's since is updated before the write callback is called, and then the next call to the view will be delayed if the view isn't up to date yet.

So, if you request the current seq after the write succeeds, then you'll know when the view is at least that up to date, and your reads will be consistent with your writes.

So, the simple approach is to always debounce the stream (at whatever rate) but ask what the server is up to before reading. Hmm, I think you can skip that check if you haven't done a write since the last time you checked; then reads will definitely be consistent with your own writes, but maybe not with writes received from peers. I think that would be okay though, and it should get your tests to behave consistently.
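A sketch of that read-your-writes check, assuming a flume-style view that tracks how far it has processed. All the property and method names here (`lastWrittenSeq`, `whenUpTo`, etc.) are illustrative, not real flume API:

```javascript
// Hypothetical read-your-writes wrapper: only block a read on the
// view catching up if we've written since the last check.
function consistentRead (view, query, cb) {
  if (view.lastWrittenSeq > view.lastCheckedSeq) {
    // wait until the view has processed at least our latest write
    view.whenUpTo(view.lastWrittenSeq, () => {
      view.lastCheckedSeq = view.lastWrittenSeq
      view.get(query, cb)
    })
  } else {
    // no writes since last check: read immediately
    view.get(query, cb)
  }
}
```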

@dominictarr
Contributor

the reason not having a debounce works is that, assuming muxrpc, the since stream gets written to before the write callback fires, so the client view knows about the update before the user tries to query the view.
