Rate limiting #116
Comments
Even if you're using fakelag, a server will disconnect you once you spam enough that your input buffer fills up.
Some servers already use RPL_LOAD2HI for this. It only applies to some messages though, and without labeled-response it's hard to associate the response with a specific message.
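For context, here is a rough sketch of how a client could tie RPL_LOAD2HI (numeric 263, also known as RPL_TRYAGAIN) back to the offending message once labeled-response has been negotiated. This is just an illustration in Go with invented helper names (`send`, `handleNumeric`, `pending`), not code from any existing client:

```go
package main

import "fmt"

// pending maps the label attached to an outgoing message to its raw text, so
// a later 263 carrying the same label can be matched back to that message.
var pending = map[string]string{}
var labelCounter int

func send(raw string) {
	labelCounter++
	label := fmt.Sprintf("l%d", labelCounter)
	pending[label] = raw
	// The label travels as a message tag; a server supporting labeled-response
	// echoes it back on whatever response the command produces.
	fmt.Printf("@label=%s %s\r\n", label, raw)
}

// handleNumeric is called for numerics from the server; label is the echoed
// @label tag, or "" if the server did not include one.
func handleNumeric(numeric, label string) {
	if numeric != "263" { // RPL_TRYAGAIN, called RPL_LOAD2HI by some ircds
		return
	}
	if raw, ok := pending[label]; ok {
		fmt.Println("rate limited, retry later:", raw)
		delete(pending, label)
		return
	}
	// Without an echoed label the client cannot tell which of its in-flight
	// messages was rejected -- the problem described above.
	fmt.Println("rate limited, but the affected message is unknown")
}

func main() {
	send("PRIVMSG #chan :hello")
	handleNumeric("263", "l1") // matched back to the PRIVMSG
	handleNumeric("263", "")   // no label: can't be correlated
}
```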
Why would the server need to disconnect clients when the buffer is full? The server can just leave the connection as-is and read more from the buffer later on.
Indeed. To be able to leverage it in clients, we'd need servers to ensure that they always send it for all messages.
In my experience, networks have been resistant to advertising their rate limits, to avoid giving up intel to spammers.
Networks can indicate conservative limits if they are particularly concerned about spammers. Or they can choose not to implement the new mechanism at all, in which case clients will default to their current conservative behavior.
Because memory is limited. Even if your server isn't reading it, that data still has to live in the kernel buffers.
Every IRC client consumes memory. It's up to the IRC server to keep the memory allocated (both in user-space allocations and in kernel-wide socket buffers) under a reasonable amount. The IRC server can configure the socket buffer size if desirable.
@emersion It can configure the buffer size, but there is always a limit. And unlike clients (which only deal with a handful of semi-trusted connections), it's unlikely servers will allow a very large size.
What I'm suggesting is that servers lower the kernel socket buffer size if they are concerned about memory usage. Once the kernel buffer is filled, that's fine: the kernel won't ACK any more packets, and the client will need to wait before sending more data.
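A minimal sketch of that suggestion in Go, assuming a plain TCP listener; (*net.TCPConn).SetReadBuffer sets SO_RCVBUF, and the 4 KiB figure is purely illustrative, not a recommendation:

```go
package main

import (
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":6667")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		if tc, ok := conn.(*net.TCPConn); ok {
			// Shrink the per-connection kernel receive buffer. Once it fills,
			// the kernel stops ACKing and the peer has to wait.
			if err := tc.SetReadBuffer(4096); err != nil {
				log.Println("SetReadBuffer:", err)
			}
		}
		go handle(conn)
	}
}

func handle(conn net.Conn) {
	defer conn.Close()
	// ... read IRC lines at the server's own pace; backpressure does the rest.
}
```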
This seems like it could be solved by an advertised rate limit. This would also let clients stop accepting input from users until they can send messages, which is much better than not delivering the message after the user thinks they've sent it. As far as hiding the limit from spammers, I'd be surprised if serious spammers aren't able to figure it out on their own, as they're likely utilizing multiple connections from multiple IPs.
It's unfortunately not that simple. At least in our implementation, different commands have different levels of penalty associated with them, so you can't just state a single limit like that.
Well, it could be a baseline then? Maybe there's a join burst limit too, something else for tagmsg, etc.
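To illustrate why a single advertised number may not capture a server's real policy, here is a hypothetical token-bucket sketch in Go where each command carries its own penalty; the command costs and bucket parameters are invented, not taken from any real ircd:

```go
package main

import (
	"fmt"
	"time"
)

// penalty is the cost, in credits, charged per command (illustrative values).
var penalty = map[string]float64{
	"PRIVMSG": 1,
	"TAGMSG":  0.5,
	"JOIN":    2,
	"LIST":    5,
}

type bucket struct {
	credits float64   // remaining credits
	max     float64   // burst size
	rate    float64   // credits regained per second
	last    time.Time // last refill
}

// allow refills the bucket based on elapsed time, then charges the command's
// penalty; it reports whether the command may be processed now.
func (b *bucket) allow(cmd string) bool {
	now := time.Now()
	b.credits += now.Sub(b.last).Seconds() * b.rate
	if b.credits > b.max {
		b.credits = b.max
	}
	b.last = now
	cost, ok := penalty[cmd]
	if !ok {
		cost = 1
	}
	if b.credits < cost {
		return false
	}
	b.credits -= cost
	return true
}

func main() {
	b := &bucket{credits: 10, max: 10, rate: 0.5, last: time.Now()}
	for _, cmd := range []string{"JOIN", "PRIVMSG", "LIST", "LIST", "PRIVMSG"} {
		fmt.Println(cmd, "allowed:", b.allow(cmd))
	}
}
```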
Many IRC servers have a rate limiting mechanism, where clients are only allowed to send a given amount of messages for a given time window. Sometimes there are different restrictions depending on the type of message, the state of the connection, etc. Usually the server behavior when the limit is reached is pretty punitive: the server disconnects the client.
For this reason many clients implement a rate limiter for their outgoing messages. Because they don't know better, they have to pick a pretty conservative default limit. For instance, soju allows bursts of 10 messages, then waits 2 seconds between each message. However, this limit is unnecessarily restrictive on some less strict networks.
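For illustration, such a conservative client-side limiter could look roughly like this in Go using golang.org/x/time/rate, with the burst-of-10 / 2-second figures from above; this is a sketch, not soju's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Allow a burst of 10 messages, then one message every 2 seconds.
	limiter := rate.NewLimiter(rate.Every(2*time.Second), 10)

	ctx := context.Background()
	for i := 0; i < 12; i++ {
		// Wait blocks until the limiter permits another send.
		if err := limiter.Wait(ctx); err != nil {
			return
		}
		fmt.Println(time.Now().Format("15:04:05"), "send: PRIVMSG #chan :message", i)
	}
}
```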
It is possible to add knobs to let users configure the limits on a per-network basis; however, it would be much nicer if users didn't need to concern themselves with such issues and things just worked by default. In other words, I'm interested in a solution to gracefully handle rate limiting without risking disconnection.
Possible solutions I can think of: