Add API keys to help control usage #152
I did a little bit of a deep dive into API keys and usage plans and came up with three possible solutions. I would like to test whichever one we prefer before making any final decisions, as I think there is a bit of complexity to this.

1) API keys & usage plan

We can definitely set up API keys and hand them out to folks, or hand out a single API key that we use to limit requests. Users would need to make sure the API key was present in the `x-api-key` request header.

We can also use a Lambda authorizer to limit requests with a usage plan. This would still need an API key, but I am thinking we could use a single API key referenced in the Lambda, so the user would not have to obtain and use an API key themselves. The Lambda authorizer must return the API key as part of its output (a sketch follows below). You configure a Lambda authorizer for an API Gateway method.

You can set a rate (requests per second) and a burst (I am not sure what this setting does) for the usage plan, as well as a quota, which is the total number of requests a user can make in the time period of the plan.

Pros
Cons
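Not the actual implementation, but a minimal sketch of what the option 1 authorizer could look like, assuming Python and a single shared key held in an environment variable (`SHARED_API_KEY` is a hypothetical name). When the API's key source is set to `AUTHORIZER`, API Gateway reads the returned `usageIdentifierKey` and enforces the usage plan tied to that key:

```python
import os


def handler(event, context):
    """Hypothetical Lambda authorizer: allows the request and attaches a
    single shared API key so a usage plan can throttle all callers."""
    # ARN of the method being invoked, e.g.
    # arn:aws:execute-api:us-west-2:123456789012:abc123/prod/GET/timeseries
    method_arn = event["methodArn"]

    return {
        "principalId": "hydrocron-public",  # any identifier for the caller
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow",
                    "Resource": method_arn,
                }
            ],
        },
        # With the API key source set to AUTHORIZER, API Gateway looks up the
        # usage plan associated with this key and applies its rate/burst/quota.
        "usageIdentifierKey": os.environ["SHARED_API_KEY"],
    }
```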
2) Setting up stage-level throttling

We can define a rate in requests per second and a target burst rate, which is the capacity of the token bucket. This operates on the token bucket algorithm, where each token counts for a request (a sketch follows below).

Pros
Cons
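As a rough sketch of option 2, stage-level throttling can be applied with boto3's `update_stage` by patching the method settings; `/*/*` targets all resource paths and HTTP methods, and the rate/burst values below are placeholders rather than recommendations:

```python
import boto3

apigw = boto3.client("apigateway")

# Set a default throttle for every method in the stage. rateLimit is the
# steady-state requests per second; burstLimit is the token bucket capacity.
# Patch operation values are passed as strings.
apigw.update_stage(
    restApiId="abc123def4",  # hypothetical REST API id
    stageName="prod",        # hypothetical stage name
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "100"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "200"},
    ],
)
```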
For 1 and 2, I think the user can receive a 429 Too Many Requests error response, which they can then handle and try to submit requests in a way that does not bump up against the limit. I think this makes either option more user friendly. See the documentation on throttling API requests.

3) Web Application Firewall (WAF)

You protect resources like the API Gateway with web access control lists (ACLs), where each ACL has one or more rules associated with it. These rules tell AWS WAF how to inspect the web request, and you can then make decisions based on that inspection. You can aggregate requests by their IP address and any other information that gets passed in the request, and rate limit on those criteria (a sketch follows below).

Pros
Cons
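For option 3, a rate-based rule aggregated by source IP might look roughly like this with boto3 (the names, the 2,000-request limit, and the stage ARN are all placeholders). AWS WAF evaluates the limit over a trailing window (5 minutes by default) and blocks IPs that exceed it:

```python
import boto3

wafv2 = boto3.client("wafv2")

# Web ACL with a single rate-based rule that blocks any source IP exceeding
# the request limit within the evaluation window.
acl = wafv2.create_web_acl(
    Name="hydrocron-rate-limit",  # hypothetical name
    Scope="REGIONAL",             # REGIONAL scope is used for API Gateway
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            "Statement": {
                "RateBasedStatement": {
                    "Limit": 2000,             # placeholder limit per source IP
                    "AggregateKeyType": "IP",  # aggregate requests by source IP
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit-per-ip",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "hydrocron-rate-limit",
    },
)

# Associate the web ACL with the API Gateway stage (ARN is a placeholder).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-west-2::/restapis/abc123def4/stages/prod",
)
```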
@frankinspace - What do you think might be best to try and test out first?
Going with the Lambda authorizer approach to start. We will have two keys to begin with: a default/general public key and a "trusted" partners key. @nikki-t is working on prototyping in SIT.

The current problem is that the API key from end users does not get passed through to the hydrocron API; it gets blocked by the platform API gateway at the moment. We need an NGAP ticket to discuss passing the API key to the tenant API gateway.
API Gateway has a built-in mechanism for throttling/limiting individuals called usage plans:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
We may need to consider configuring this for hydrocron, as currently there are no usage limits other than the AWS-imposed account-level limits (10,000 requests per second (RPS), with additional burst capacity provided by the token bucket algorithm using a maximum bucket capacity of 5,000 requests).
We don't necessarily need to require 1 API key per user or to tie them to an EDL account. But this would be the quickest way to help throttle requests if we see API usage spikes.
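For reference, a usage plan with an attached API key can be created along these lines with boto3 (every name, id, and limit below is a placeholder, not a proposed setting):

```python
import boto3

apigw = boto3.client("apigateway")

# Usage plan with per-second throttling and a daily request quota.
plan = apigw.create_usage_plan(
    name="hydrocron-default",                        # hypothetical plan name
    throttle={"rateLimit": 10.0, "burstLimit": 20},  # steady rate + burst capacity
    quota={"limit": 10000, "period": "DAY"},         # total requests per day
    apiStages=[{"apiId": "abc123def4", "stage": "prod"}],  # hypothetical API/stage
)

# Create a key and attach it to the plan; callers send it in the x-api-key header.
key = apigw.create_api_key(name="hydrocron-public-key", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```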