
Limit pod resources #237

Draft
wants to merge 1 commit into main

Conversation

lpiwowar
Collaborator

This patch reintroduces limits for the pods spawned by the
test-operator, after they were first increased and later removed by
these two PRs [1][2].

The problem with those two patches was that they only set the
Resources.Limits field and left Resources.Requests empty. When
Resources.Limits is set and Resources.Requests is empty, Requests
inherits the values from Resources.Limits.

As a result, we first hit OOM kills when Resources.Limits was set too
low, and after we increased the value we hit "Insufficient memory"
scheduling errors instead (because the inherited Resources.Requests
value was now too high).
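
For context, a minimal Go sketch (not taken from the actual patch; the resource values are illustrative) of the requirements object the earlier patches produced. Only Limits is populated, so Kubernetes fills in matching Requests at admission time:

```go
package resourcedefaults

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// limitsOnly mirrors what the earlier patches did: populate Resources.Limits
// and leave Resources.Requests empty. Kubernetes then defaults Requests to
// the same values as Limits when the pod is admitted, so a high limit also
// becomes a high request and the scheduler can report "Insufficient memory".
func limitsOnly() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("2"),
			corev1.ResourceMemory: resource.MustParse("4Gi"),
		},
	}
}
```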

This patch addresses the above-mentioned issue by:

  • setting sane default values for Resources.Limits,
  • setting sane default values for Resources.Requests, and
  • introducing a new parameter, .Spec.Resources, which can be used
    to override the default values (a rough sketch follows the
    references below).

[1] #222
[2] #224
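
A rough sketch of the defaulting logic described above, assuming the new .Spec.Resources field has type corev1.ResourceRequirements; the helper name and the default values are illustrative, not taken from the patch:

```go
package resourcedefaults

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// defaultPodResources returns the resources to apply to a test pod: sane
// defaults for both Requests and Limits, with anything set in the CR's
// .Spec.Resources taking precedence over the defaults.
func defaultPodResources(spec corev1.ResourceRequirements) corev1.ResourceRequirements {
	result := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("500m"),
			corev1.ResourceMemory: resource.MustParse("512Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("2"),
			corev1.ResourceMemory: resource.MustParse("4Gi"),
		},
	}
	// Anything the user sets in .Spec.Resources wins over the defaults.
	if spec.Requests != nil {
		result.Requests = spec.Requests
	}
	if spec.Limits != nil {
		result.Limits = spec.Limits
	}
	return result
}
```

With explicit defaults for both fields, raising only the limit no longer inflates the request, and users who need different values can set .Spec.Resources on the custom resource.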


openshift-ci bot commented Oct 30, 2024

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


openshift-ci bot commented Oct 30, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lpiwowar

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/d1f08eda674d4d749df5a6f8ee9b5ad1

openstack-k8s-operators-content-provider FAILURE in 6m 09s
⚠️ podified-multinode-edpm-deployment-crc-test-operator SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/3ac9e170c7f044dca35650459699e731

openstack-k8s-operators-content-provider FAILURE in 7m 44s
⚠️ podified-multinode-edpm-deployment-crc-test-operator SKIPPED Skipped due to failed job openstack-k8s-operators-content-provider
