
Ratelimiting with bursts #1782

Open
lifeofguenter opened this issue Dec 28, 2024 · 1 comment

Comments

@lifeofguenter

Would be great if the following could be supported:

limit [average average] [burst burst] [kod kod]
Set the parameters of the limited facility which protects the server from client abuse. Internally, each [MRU](https://docs.ntpsec.org/latest/ntpq.html#mrulist) slot contains a score in units of packets per second. It is updated each time a packet arrives from that IP Address. The score decays exponentially at the burst rate and is bumped by 1.0/burst when a packet arrives.

average average
Specify the allowed average rate for response packets in packets per second. The default is 1.0

burst burst
Specify the allowed burst size if the bursts are far enough apart to keep the average rate below average. The default is 20.0

kod kod
Specify the allowed average rate for KoD packets in packets per second. The default is 0.5

Essentially, this would allow bursts in addition to the average rate (which, I think, is what the current implementation supports?).
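For illustration, the `average`/`burst` pair behaves like a classic token bucket: tokens refill at `average` per second up to a capacity of `burst`, and each response spends one token. This is a hedged sketch with illustrative names, not the scoring formula quoted above and not ntpd-rs code:

```rust
/// Minimal token-bucket sketch of the requested average/burst semantics.
struct Bucket {
    tokens: f64, // currently available tokens, capped at `burst`
    last: f64,   // timestamp of the previous packet, in seconds
}

impl Bucket {
    /// Refill based on elapsed time, then try to spend one token.
    /// Returns true if the packet should get a normal response.
    fn allow(&mut self, now: f64, average: f64, burst: f64) -> bool {
        let dt = now - self.last;
        self.last = now;
        // Refill at `average` tokens/second, never exceeding `burst`.
        self.tokens = (self.tokens + dt * average).min(burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false // over the limit: candidate for a RATE KoD
        }
    }
}

fn main() {
    // Defaults from the quoted docs: average 1.0 pps, burst 20.0.
    let mut b = Bucket { tokens: 20.0, last: 0.0 };
    // 25 back-to-back packets: the first 20 drain the bucket, the rest fail.
    let allowed = (1..=25)
        .filter(|i| b.allow(*i as f64 * 0.001, 1.0, 20.0))
        .count();
    println!("allowed {allowed} of 25");
}
```

With the default parameters, a client can send a burst of 20 packets at once, but sustained traffic is held to 1 packet per second.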

@rnijveld
Member

I guess what we are really doing is a little bit different from either: we look at the time between requests, and if any two requests from the same client arrive too quickly after each other, we send a RATE response. What we could do is store a bursting counter in our data structure as well, to allow some bursting behavior.
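A bursting counter on top of the existing interval check could look something like this. It's only a sketch under assumed parameter names and values (`MIN_INTERVAL`, `BURST`), not the actual ntpd-rs data structure:

```rust
/// Per-client entry: last request time plus a small burst budget.
struct ClientEntry {
    last_seen: f64, // seconds
    budget: u32,    // remaining burst allowance
}

const MIN_INTERVAL: f64 = 1.0; // assumed minimum spacing between requests
const BURST: u32 = 4;          // assumed burst allowance

impl ClientEntry {
    /// Returns true if the request should get a normal response,
    /// false if we would send a RATE response instead.
    fn on_request(&mut self, now: f64) -> bool {
        let dt = now - self.last_seen;
        self.last_seen = now;
        if dt >= MIN_INTERVAL {
            // Well-spaced request: answer it and restore some budget.
            self.budget = (self.budget + 1).min(BURST);
            true
        } else if self.budget > 0 {
            // Too fast, but still within the burst allowance.
            self.budget -= 1;
            true
        } else {
            false // burst budget exhausted: send RATE
        }
    }
}

fn main() {
    let mut e = ClientEntry { last_seen: 0.0, budget: BURST };
    // Six requests 100 ms apart: four are absorbed by the budget.
    let allowed = (1..=6).filter(|i| e.on_request(*i as f64 * 0.1)).count();
    println!("allowed {allowed} of 6");
}
```

The pure interval check is the `budget == 0` special case, so this stays backwards compatible with the current behavior.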

It should be noted that the rate limiting behavior is not something to rely on when strict security is needed. The rate limiting relies on a limited-size cache, so as the number of concurrent clients increases you will eventually end up with collisions between different cache entries. Making the cache large enough is especially hard with IPv6, where a single client often has access to multiple IP addresses and can easily get around the rate limiting mechanism. There isn't really a way around this: more robust rate limiting mechanisms take more time than just generating the response, which is in the end the best way to handle this. Just respond to everything; responses are relatively cheap to generate anyway.
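The collision problem is just the pigeonhole principle: with a fixed number of slots indexed by a hash of the client address, more clients than slots guarantees that some distinct clients share rate-limit state. A toy sketch (slot count and addresses are made up for illustration):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::net::Ipv6Addr;

// Deliberately tiny table to force collisions quickly.
const SLOTS: usize = 8;

/// Map a client address to a slot in the fixed-size table.
fn slot_for(addr: &Ipv6Addr) -> usize {
    let mut h = DefaultHasher::new();
    addr.hash(&mut h);
    (h.finish() as usize) % SLOTS
}

fn main() {
    // Nine distinct clients into eight slots: at least two must collide,
    // regardless of how good the hash function is.
    let clients: Vec<Ipv6Addr> = (0u16..9)
        .map(|i| Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, i))
        .collect();
    let mut seen = [false; SLOTS];
    let mut collision = false;
    for c in &clients {
        let s = slot_for(c);
        if seen[s] {
            collision = true; // two clients now share one rate-limit entry
        }
        seen[s] = true;
    }
    println!("collision: {collision}");
}
```

In a real deployment the table is much larger, but an IPv6 client with a whole /64 of source addresses can still spread its requests across many slots, which is why the mechanism can't be treated as a security boundary.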
