Failed downloads for large files due to insufficient ephemeral storage #213
-
I am running nginx-s3-gateway as a Kubernetes Deployment on VMs with about 30 GB of free local flash storage each. When people attempt to download large files (for example, there is a 290 GB data file someone needs), the pods are evicted with messages indicating the node's ephemeral storage has been exhausted.
Basically, nginx is accumulating some buffer of data that grows too large for the Kubernetes node's storage capacity. I have tried setting PROXY_CACHE_MAX_SIZE = "0g", as suggested in another discussion, to disable caching, but this has not altered the behavior. Is there a way to limit whatever local buffer is being filled? Hopefully this is not some fundamental limitation where the nginx server must have enough free space to temporarily store each file being proxied.
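From what I can tell, these are the stock nginx directives that govern how a proxied response gets spooled to disk (values shown are nginx defaults, not necessarily what the gateway ships with):

```nginx
# Stock nginx directives that control spooling of proxied responses
# (values shown are the nginx defaults, per the nginx docs):
proxy_buffering on;                # buffer the upstream response
proxy_buffers 8 4k;                # in-memory buffers (4k or 8k, platform page size)
proxy_max_temp_file_size 1024m;    # per-response cap on the on-disk temp file
# When proxy_cache is enabled, the full response body is also written into
# the cache directory; max_size is only enforced afterwards by the cache
# manager, which may explain why max_size=0 alone doesn't stop disk use.
```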
Replies: 4 comments
-
Hi there, I do not know the details of your Kubernetes cluster configuration, but I suspect that the file path in which NGINX writes its cache files is mounted as ephemeral storage. The default cache path is defined in common/etc/nginx/templates/cache.conf.template. Once you know where that path is mounted, you can modify/overwrite the template and point the cache path somewhere else (see the sketch below). Please keep us posted on how this goes.
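For example, an overridden template could look roughly like this; the volume mount path, zone name, and sizes here are placeholders, not values from the project:

```nginx
# Sketch of an overridden cache.conf.template that moves the cache off
# ephemeral node storage. /mnt/nginx-cache is a placeholder for a
# PersistentVolume mount; zone name and sizes are illustrative.
proxy_cache_path /mnt/nginx-cache levels=1:2 keys_zone=s3_cache:10m
                 max_size=25g inactive=60m use_temp_path=off;
```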
-
The PROXY_CACHE_MAX_SIZE override does appear to be applied. The S3 cache settings file looks like:
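Something along these lines; the path, zone name, and inactive timing below are my best guesses at the gateway's template defaults, so they may differ:

```nginx
# Assumed rendering of cache.conf.template with PROXY_CACHE_MAX_SIZE="0g";
# the path, zone name, and inactive timing are illustrative.
proxy_cache_path /var/cache/nginx/s3_proxy levels=1:2 keys_zone=s3_cache:10m
                 max_size=0g inactive=60m use_temp_path=off;
```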
As you can see, the max_size parameter does pick up the 0g value, yet the node's ephemeral storage still fills up during large downloads.
-
I attempted to use the suggested cache path override, but it did not solve the problem.
-
I solved it by disabling buffering, i.e. adding the line `proxy_buffering off;`! Here is the relevant commit. Thanks for your help @dekobon; without your suggestion I doubt I would have figured this out.
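For anyone landing here later, the change amounts to something like the following sketch; the location block and upstream name are illustrative, not the gateway's actual template:

```nginx
# Sketch: stream the S3 response straight to the client instead of
# buffering it to memory/disk. With buffering off, nginx also skips
# writing responses into the proxy cache, which is why the ephemeral
# storage no longer fills.
location / {
    proxy_pass http://s3_backend;   # assumed upstream name
    proxy_buffering off;            # no memory/disk spooling of the response
}
```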