
Rate limiting #116

Open
emersion opened this issue Jan 18, 2024 · 11 comments

Comments

@emersion

Many IRC servers have a rate limiting mechanism, where clients are only allowed to send a given number of messages within a given time window. Sometimes there are different restrictions depending on the type of message, the state of the connection, etc. Usually the server behavior when the limit is reached is pretty punitive: the server disconnects the client.

For this reason many clients implement a rate limiter for their outgoing messages. Because they don't know better, they have to pick a pretty conservative default limit. For instance, soju allows bursts of 10 messages, then waits 2 seconds between each message. However, this limit is unnecessarily restrictive on some less strict networks.
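A client-side limiter like this is typically a token bucket. A minimal sketch (hypothetical code, using soju's defaults of a 10-message burst and 2 seconds per message as described above):

```python
import time

class TokenBucket:
    """Client-side rate limiter: allow a burst, then a fixed delay per
    message. Hypothetical sketch; burst/delay mirror soju's defaults."""

    def __init__(self, burst=10, delay=2.0):
        self.capacity = burst       # maximum tokens (burst size)
        self.tokens = float(burst)  # currently available tokens
        self.delay = delay          # seconds to regain one token
        self.last = time.monotonic()

    def wait(self):
        """Block until one message may be sent, then consume a token."""
        now = time.monotonic()
        # Refill tokens accrued since the last call, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) / self.delay)
        self.last = now
        if self.tokens < 1:
            # Sleep just long enough to accrue one full token.
            time.sleep((1 - self.tokens) * self.delay)
            self.last = time.monotonic()
            self.tokens = 1
        self.tokens -= 1
```

A client would call `wait()` before each outgoing line; the first `burst` messages pass immediately, and subsequent ones are spaced `delay` seconds apart.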

It is possible to add knobs to let users configure the limits on a per-network basis, however it would be much nicer if users wouldn't need to concern themselves with such issues and things just worked by default. In other words, I'm interested in a solution to gracefully handle rate limiting without risking disconnection.

Possible solutions I can think of:

  • Add a new cap to indicate that a server won't disconnect a client because of rate limits. The server would still be free to slowly process message bursts; this would result in writes blocking on the client side.
  • Advertise a simple burst+delay setting from the servers. This won't cover all peculiarities but will still significantly improve the status quo.
  • Add a standard response for servers to indicate that they didn't process a message and that the client needs to slow down. The server doesn't disconnect the client, it discards the incoming message and sends a non-fatal error. The client can slow down its outgoing message rate and retry. This is similar to HTTP's 429 Too Many Requests.
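On the client side, the third option could look roughly like the following. This is a hypothetical sketch: it reuses the existing 263 numeric (RPL_TRYAGAIN, called RPL_LOAD2HI in some servers) as the "slow down" reply, and the `send`/`recv_reply` callbacks are assumptions:

```python
import time

RPL_TRYAGAIN = "263"  # RPL_LOAD2HI / RPL_TRYAGAIN, the closest existing numeric

def send_with_retry(send, recv_reply, line, max_attempts=5, base_delay=1.0):
    """Send an IRC line, retrying with exponential backoff if the server
    answers with a non-fatal "slow down" reply (similar to HTTP 429).

    `send` and `recv_reply` are hypothetical I/O callbacks; `recv_reply`
    returns the numeric the server associated with this message (which
    would need labeled-response to do reliably)."""
    for attempt in range(max_attempts):
        send(line)
        if recv_reply() != RPL_TRYAGAIN:
            return True  # server processed the message
        time.sleep(base_delay * (2 ** attempt))  # back off, then retry
    return False  # gave up after max_attempts
```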
@SadieCat

SadieCat commented Jan 18, 2024

Add a new cap to indicate that a server won't disconnect a client because of rate limits.

Even if you're using fakelag, a server will still disconnect you if you spam enough that your input buffer fills up.

Add a standard response for servers to indicate that they didn't process a message and that the client needs to slow down.

Some servers already use RPL_LOAD2HI for this. It only applies to some messages, though, and without labeled-response it's hard to associate the response with a specific message.

@emersion
Author

emersion commented Jan 18, 2024

Even if you're using fakelag, a server will still disconnect you if you spam enough that your input buffer fills up.

Why would the server need to disconnect clients when the buffer is full? The server can just leave the connection as-is and read more from the buffer later on.

Some servers already use RPL_LOAD2HI for this. It only applies to some messages, though, and without labeled-response it's hard to associate the response with a specific message.

Indeed. To be able to leverage it in clients we'd need servers to ensure that they always send it for all messages.

@jwheare
Member

jwheare commented Jan 18, 2024

In my experience, networks have been resistant to advertising their rate limits, to avoid giving away intel to spammers.

@emersion
Author

emersion commented Jan 18, 2024

In my experience, networks have been resistant to advertising their rate limits, to avoid giving away intel to spammers.

Networks can indicate conservative limits if they are particularly concerned about spammers. Or they can choose not to implement the new mechanism at all, in which case clients will default to their current conservative behavior.

@SadieCat

Why would the server need to disconnect clients when the buffer is full? The server can just leave the connection as-is and read more from the buffer later on.

Because memory is limited. Even if you're not reading it in your server, that data still has to live in the kernel buffers.

@emersion
Author

Every IRC client consumes memory. It's up to the IRC server to keep the memory allocated (both in user-space allocations and in kernel-wide socket buffers) under a reasonable amount. The IRC server can configure the socket buffer size if desirable.

@progval

progval commented Jan 22, 2024

@emersion It can configure the buffer size, but there is always a limit. And unlike clients (which only deal with a handful of semi-trusted connections), it's unlikely that servers will allow a very large buffer size.

@emersion
Author

What I'm suggesting is that servers lower the kernel socket buffer size if they are concerned about memory usage. Once the kernel buffer is filled, that's fine: the kernel won't ACK any more packets, and the client will need to wait before sending more data.
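As an illustration, a server could cap the per-connection receive buffer with SO_RCVBUF, so a flooding client is throttled by TCP flow control rather than disconnected. A hypothetical sketch (the 4 KiB value is arbitrary):

```python
import socket

def cap_receive_buffer(conn: socket.socket, size: int = 4096) -> int:
    """Shrink the kernel receive buffer for a client connection so the
    kernel stops ACKing once the buffer fills. Hypothetical sketch."""
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, size)
    # The kernel may round or double the value (Linux doubles it for
    # bookkeeping overhead); return what was actually set.
    return conn.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
```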

@grimmy

grimmy commented Sep 6, 2024

This seems like it could be solved by an RPL_ISUPPORT parameter that has a burst and a delay in it, something like RATELIMIT=10,2, where 10 is the burst and 2 is the delay in seconds.

This would also let clients stop accepting input from users until they can send messages, which is much better than not delivering the message after the user thinks they've sent it.

As far as hiding the limit from spammers goes, I'd be surprised if serious spammers weren't able to figure it out on their own, as they're likely utilizing multiple connections from multiple IPs.
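A hypothetical client-side parser for the proposed token (the RATELIMIT name and burst,delay format are not standardized; this just illustrates the shape):

```python
def parse_ratelimit(isupport_value):
    """Parse the hypothetical RATELIMIT=<burst>,<delay> ISUPPORT token.

    Returns (burst, delay_seconds), or None if the token is absent or
    malformed, in which case clients would fall back to conservative
    defaults (e.g. soju's burst of 10, 2 seconds per message)."""
    if not isupport_value:
        return None
    try:
        burst_s, delay_s = isupport_value.split(",", 1)
        burst, delay = int(burst_s), float(delay_s)
    except ValueError:
        return None  # malformed value: ignore it
    if burst < 1 or delay < 0:
        return None  # nonsensical value: ignore it
    return burst, delay
```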

@SadieCat

SadieCat commented Sep 6, 2024

It's unfortunately not that simple. At least in our implementation, different commands have different levels of penalty associated with them, so you can't express it as a single limit like that.

@grimmy

grimmy commented Sep 6, 2024

Well, it could be a baseline then? Maybe there's a join burst limit too, something else for TAGMSG, etc.
