Huge pages support #2258
Comments
Thanks for the suggestion. In the interim, you can modify the …
We are running into the same issue, it seems. But we could not locate the …
@ns-rsuvorov in v5 you can set any compute request or limit in the …
@cbandy yes, while we can add that, we still need to be able to set volume mounts, which is not allowed. Here is a patch we created to work around the issue:

```json
{
  "spec": {
    "template": {
      "spec": {
        "containers": [{
          "name": "database",
          "resources": {
            "limits": {
              "hugepages-2Mi": "100Mi"
            }
          },
          "volumeMounts": [{
            "mountPath": "/hugepages-2Mi",
            "name": "hugepage-2mi"
          }]
        }],
        "volumes": [{
          "name": "hugepage-2mi",
          "emptyDir": {
            "medium": "HugePages-2Mi"
          }
        }]
      }
    }
  }
}
```

What concerns me is that even if we set `huge_pages: off` in parameters, it's not making it into the Postgres ConfigMap. I can see it in the cluster ConfigMap for Patroni, but it doesn't seem to be passed to the postgres process that is running the database.
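As a minimal sketch of how that workaround could be scripted, the strategic-merge patch above can be built and serialized in Python before being handed to `kubectl patch` or a Kubernetes client library. The container name `database` and volume name `hugepage-2mi` are taken from the patch above; note the `medium` key must be spelled correctly (the original comment had "meduim"), otherwise Kubernetes silently creates a regular disk-backed `emptyDir`:

```python
import json

# Strategic-merge patch mirroring the workaround above. Names come from
# the comment in this thread; adapt them to your own StatefulSet.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "database",
                    "resources": {
                        "limits": {"hugepages-2Mi": "100Mi"}
                    },
                    "volumeMounts": [{
                        "mountPath": "/hugepages-2Mi",
                        "name": "hugepage-2mi",
                    }],
                }],
                "volumes": [{
                    "name": "hugepage-2mi",
                    # "medium" must be spelled exactly; a typo here makes
                    # Kubernetes fall back to a node-local emptyDir.
                    "emptyDir": {"medium": "HugePages-2Mi"},
                }],
            }
        }
    }
}

# Serialize for e.g. `kubectl patch statefulset <name> --patch "$(cat patch.json)"`
print(json.dumps(patch, indent=2))
```

This is only a convenience for generating the patch; it does not change the underlying limitation that the operator disallows extra volume mounts.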
When the system has huge pages turned on, initdb uses the "postgresql.conf.sample" file, causing the process to crash in Kubernetes. Turning off huge pages in this file would resolve the issue. Here are some links for further information:

- Crunchy Data: CrunchyData/postgres-operator#3477, CrunchyData/postgres-operator#3039, CrunchyData/postgres-operator#2258, CrunchyData/postgres-operator#3126, CrunchyData/postgres-operator#3421
- Bitnami: bitnami/charts#7901
Hi, thanks for the feedback! We've been running some tests with huge_pages and have determined that you shouldn't have to mount additional volumes to use huge_pages. You should be able to request them directly through the PostgresCluster spec in the resources field. There are also some additional considerations when requesting huge_pages:
Point (2) shouldn't be an issue here because the PG default for huge_pages is `try`. If you do continue to see a need to mount additional volumes, please reopen and provide additional information about your specific Kubernetes environment and operator Deployment (e.g. Kubernetes version, PGO version, etc.).
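As a sketch of what requesting huge pages through the PostgresCluster spec could look like, assuming the v5 `postgres-operator.crunchydata.com/v1beta1` API (the cluster name `hippo` and instance name `instance1` are placeholders; verify the exact field paths against the PGO documentation for your version):

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  instances:
    - name: instance1
      resources:
        limits:
          # Kubernetes schedules the pod only onto nodes with enough
          # pre-allocated 2Mi huge pages; a memory limit is also required
          # when requesting huge pages.
          hugepages-2Mi: 100Mi
          memory: 1Gi
```

With this in place, the operator manages the pod-level huge pages configuration, so no manual `volumeMounts` patch should be needed.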
**What is the motivation or use case for the change?**
Improve performance for large databases.
**Describe the solution you'd like**
Support huge pages by declaring a hugepages-2Mi resource limit on the postgresql container.
https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/
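Following the Kubernetes huge pages documentation linked above, a plain pod requesting 2Mi huge pages looks like the sketch below (pod name and image are placeholders; huge pages must be pre-allocated on the node, and requests must equal limits for `hugepages-*` resources):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause
      resources:
        limits:
          hugepages-2Mi: 100Mi
          # A cpu or memory request/limit must accompany hugepages requests.
          memory: 100Mi
```

The proposal here is for the operator to emit an equivalent resource declaration on the postgresql container.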
**Please tell us about your environment:**