Replies: 2 comments
-
For comparison, what does memory consumption look like on a similarly configured instance that is not running the s3 gateway njs code? As I understand it, NGINX memory utilization is quite variable, and the OS is slow to take back memory that is reclaimable, so there are several things going on here that may or may not be issues. Also, NJS has no garbage collection: memory is reclaimed only after a request finishes, and even then the OS may not actually reclaim it until it decides to, leaving more memory attributed to NGINX.
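One way to get such a baseline (a sketch only; the listen port and upstream name are assumptions, not from the gateway's actual config) is the same body-buffering setup with every js_* directive removed:

```nginx
# Hypothetical baseline config: identical body handling, no njs.
# Any difference in resident memory between this and the full
# gateway config can then be attributed to the njs code path.
server {
    listen 8080;

    location / {
        client_max_body_size     0;
        client_body_in_file_only clean;
        proxy_pass http://s3-backend;   # assumed upstream name
    }
}
```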
-
Thank you for your reply.
-
Hello!
Thank you for creating and maintaining this great project!
I'm working to support chunked uploads based on this project. Because my client request bodies are large (512 MB or bigger), I enabled temp files for the request body:
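A minimal sketch of that kind of setup, assuming (not quoting) concrete values:

```nginx
# Sketch: spill large request bodies to temp files instead of RAM.
# All values below are assumptions, not the configuration from the post.
client_max_body_size     0;      # accept arbitrarily large uploads
client_body_buffer_size  16k;    # bodies above this size go to disk
client_body_in_file_only clean;  # always write the body to a temp file
                                 # and delete it when the request ends
client_body_temp_path    /var/cache/nginx/client_body_temp;
```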
And I need to re-sign the chunk signatures of the chunked data, so I set "proxy_set_body" to the output of my s3 body functions:
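A minimal sketch of what such wiring can look like; the module path, the $resigned_body variable, and the resignChunkedBody handler are hypothetical stand-ins for the actual code:

```nginx
# Sketch only: names below are illustrative, not from the real setup.
js_import s3gateway from /etc/nginx/include/s3gateway.js;

# js_set calls the handler when $resigned_body is first referenced;
# the handler is expected to return the re-signed body as a string.
js_set $resigned_body s3gateway.resignChunkedBody;

location / {
    # Replace the upstream request body with the re-signed one.
    proxy_set_body $resigned_body;
    proxy_pass     http://s3-backend;   # assumed upstream name
}
```

One thing worth noting about this pattern: the value handed to proxy_set_body is assembled in memory, so the entire re-signed body sits in RAM for the duration of the request, even when the incoming body was buffered to a temp file.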
Everything works fine.
But the memory consumption is very large: in my test, I uploaded 17 MB of data, and the memory consumption of the nginx-s3-gateway docker container was 250 MB.
Even after the upload completes, the memory is not freed.
When the next upload starts, nginx reuses this memory, so consumption does not keep increasing. My guess is that this is a memory pool that nginx allocates for NJS and does not return to the system after a request completes.
My question is:
What is wrong with my implementation, and how can I reduce memory consumption?
Thank you!