MGMT-14453: Fix bugs in installercache #7205

Open
paul-maidment wants to merge 1 commit into master from the MGMT-14453 branch

Conversation

paul-maidment
Contributor

@paul-maidment commented Jan 20, 2025

This PR resolves multiple bugs within the installer cache. Given the poor condition of the current cache, it makes sense to fix them in a single PR.

Fixes:

I have implemented fixes for each of the following issues.

  • The mutex was ineffective because it was not instantiated correctly, leading to MGMT-14452 and MGMT-14453.
  • The naming convention for hardlinks was changed to be UUID based to resolve MGMT-14457.
  • The modified time must now be updated any time we either extract or use a release, not only for cached releases; previously this was causing premature pruning of hardlinks.
  • LRU cache order updated to be based on microseconds instead of seconds.
  • Eviction checks updated to consider the max release size and also the cache threshold.
  • We now check there is enough space before writing.
  • During eviction, releases without hard links will be evicted before releases with hard links.

@openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type.) on Jan 20, 2025
@openshift-ci-robot

openshift-ci-robot commented Jan 20, 2025

@paul-maidment: This pull request references MGMT-14453 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the bug to target the "4.19.0" version, but no target version was set.

In response to this:

Presently we instantiate a new installercache.Installers instance per execution of the code generator. This renders any mutex handling pointless as we will never have any mutex conflict over a single request.

This PR moves the instantiation into main and effectively makes it a singleton so that mutexes will function as intended.

This is the root cause behind a couple of reported issues

https://issues.redhat.com/browse/MGMT-14452 - Installer cache removes in-used cached image when out of space
https://issues.redhat.com/browse/MGMT-14453 - INSTALLER_CACHE_CAPACITY small value cause to assisted-service crash

Both of these issues are reproducible on master and do not reproduce when running on code based on this PR.

If the cache size is set to a low value such as 1 then the current release will be deleted once the caller has finished with the release.

List all the issues related to this PR

  • New Feature
  • Enhancement
  • Bug fix
  • Tests
  • Documentation
  • CI/CD

What environments does this code impact?

  • Automation (CI, tools, etc)
  • Cloud
  • Operator Managed Deployments
  • None

How was this code tested?

  • assisted-test-infra environment
  • dev-scripts environment
  • Reviewer's test appreciated
  • Waiting for CI to do a full test run
  • Manual (Elaborate on how it was tested)
  • No tests needed

Checklist

  • Title and description added to both, commit and PR.
  • Relevant issues have been associated (see CONTRIBUTING guide)
  • This change does not require a documentation update (docstring, docs, README, etc)
  • Does this change include unit-tests (note that code changes require unit-tests)

Reviewers Checklist

  • Are the title and description (in both PR and commit) meaningful and clear?
  • Is there a bug required (and linked) for this change?
  • Should this PR be backported?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci bot added the size/L label (Denotes a PR that changes 100-499 lines, ignoring generated files.) on Jan 20, 2025
@openshift-ci bot requested review from carbonin and mlorenzofr on January 20, 2025 10:50

openshift-ci bot commented Jan 20, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: paul-maidment

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Jan 20, 2025
@paul-maidment marked this pull request as draft on January 20, 2025 11:12
@openshift-ci bot added the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress.) on Jan 20, 2025
@openshift-ci-robot

openshift-ci-robot commented Jan 20, 2025

@paul-maidment: This pull request references MGMT-14453 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the bug to target the "4.19.0" version, but no target version was set.

In response to this:

Presently we instantiate a new installercache.Installers instance per execution of the code generator. This renders any mutex handling pointless as we will never have any mutex conflict over a single request.

This PR moves the instantiation into main and makes sure there is only a single instance so that mutexes will function as intended.

This is the root cause behind a couple of reported issues

https://issues.redhat.com/browse/MGMT-14452 - Installer cache removes in-used cached image when out of space
https://issues.redhat.com/browse/MGMT-14453 - INSTALLER_CACHE_CAPACITY small value cause to assisted-service crash

Both of these issues are reproducible on master and do not reproduce when running on code based on this PR.

If the cache size is set to a low value such as 1 then the current release will be deleted once the caller has finished with the release.

List all the issues related to this PR

  • New Feature
  • Enhancement
  • Bug fix
  • Tests
  • Documentation
  • CI/CD

What environments does this code impact?

  • Automation (CI, tools, etc)
  • Cloud
  • Operator Managed Deployments
  • None

How was this code tested?

  • assisted-test-infra environment
  • dev-scripts environment
  • Reviewer's test appreciated
  • Waiting for CI to do a full test run
  • Manual (Elaborate on how it was tested)
  • No tests needed

Checklist

  • Title and description added to both, commit and PR.
  • Relevant issues have been associated (see CONTRIBUTING guide)
  • This change does not require a documentation update (docstring, docs, README, etc)
  • Does this change include unit-tests (note that code changes require unit-tests)

Reviewers Checklist

  • Are the title and description (in both PR and commit) meaningful and clear?
  • Is there a bug required (and linked) for this change?
  • Should this PR be backported?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@paul-maidment force-pushed the MGMT-14453 branch 4 times, most recently from 0369404 to 715734c, on January 26, 2025 14:54
@openshift-ci bot added the size/XL label (Denotes a PR that changes 500-999 lines, ignoring generated files.) and removed the size/L label on Jan 26, 2025
@paul-maidment force-pushed the MGMT-14453 branch 12 times, most recently from fa9cd8d to 27013f9, on January 26, 2025 23:43
@openshift-merge-robot added the needs-rebase label (Indicates a PR cannot be merged because it has merge conflicts with HEAD.) on Jan 26, 2025
@paul-maidment marked this pull request as ready for review on January 30, 2025 11:53
@openshift-ci bot removed the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress.) on Jan 30, 2025
@openshift-ci bot requested review from danmanor and rccrdpccl on January 30, 2025 11:54
@openshift-ci-robot

openshift-ci-robot commented Jan 30, 2025

@paul-maidment: This pull request references MGMT-14453 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the bug to target the "4.19.0" version, but no target version was set.

In response to this:

This PR resolves multiple bugs within the installer cache. Given the poor condition of the current cache, it makes sense to fix them in a single PR.

Fixes:

I have implemented fixes for each of the following issues.

  • The mutex was ineffective because it was not instantiated correctly, leading to MGMT-14452 and MGMT-14453.
  • The naming convention for hardlinks was changed to be UUID based to resolve MGMT-14457.
  • The modified time must now be updated any time we either extract or use a release, not only for cached releases; previously this was causing premature pruning of hardlinks.
  • LRU cache order updated to be based on microseconds instead of seconds.
  • Eviction checks updated to consider the max release size and also the cache threshold.
  • We now check there is enough space before writing.
  • During eviction, releases without hard links will be evicted before releases with hard links.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


codecov bot commented Jan 30, 2025

Codecov Report

Attention: Patch coverage is 76.36364% with 26 lines in your changes missing coverage. Please review.

Project coverage is 67.95%. Comparing base (ef9f0c8) to head (889994e).
Report is 6 commits behind head on master.

Files with missing lines Patch % Lines
internal/installercache/installercache.go 77.66% 18 Missing and 5 partials ⚠️
internal/metrics/disk_stats_helper.go 40.00% 3 Missing ⚠️
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #7205      +/-   ##
==========================================
+ Coverage   67.89%   67.95%   +0.05%     
==========================================
  Files         298      300       +2     
  Lines       40711    40981     +270     
==========================================
+ Hits        27642    27848     +206     
- Misses      10592    10640      +48     
- Partials     2477     2493      +16     
Files with missing lines Coverage Δ
internal/ignition/installmanifests.go 56.02% <100.00%> (-0.06%) ⬇️
internal/metrics/disk_stats_helper.go 64.51% <40.00%> (-6.92%) ⬇️
internal/installercache/installercache.go 77.19% <77.66%> (-0.13%) ⬇️

... and 15 files with indirect coverage changes

@openshift-ci bot added the size/XXL label (Denotes a PR that changes 1000+ lines, ignoring generated files.) and the api-review label (Categorizes an issue or PR as actively needing an API review.), and removed the size/XL label on Jan 30, 2025
@danmanor
Contributor

danmanor commented Jan 30, 2025

@paul-maidment Any chance you can provide some description of the installerCache now with your changes (what configuration it should get (and why), what is its main purpose, how it operates, etc.)?

I know it should function as a cache for openshift/installer binaries, but not much more than that

@paul-maidment
Contributor Author

paul-maidment commented Jan 30, 2025

@paul-maidment Any chance you can provide some description of the installerCache now with your changes (what configuration it should get (and why), what is its main purpose, how it operates, etc.)?

I know it should function as a cache for openshift/installer binaries, but not much more than that

I can certainly add a summary to the description with some of these details, thank you for the suggestion.

The installercache is essentially a directory in ephemeral storage where we store the openshift-install binaries for clusters that are being prepared for installation.

It's used at the point where the manifests and install configuration are being prepared.

The manifest generator requests a specific release from the cache and is given a 'hardlink' to the release. Once the manifest generator is done with the release, it calls Cleanup on the release, which causes the hardlink to be removed.

Typically, it could take between 30 seconds and 90 seconds (this is an educated guess and we have another ticket in place to gather more evidence/stats) to free a release after use.

The cache is meant to keep the most common releases around in ephemeral storage so that if another user wants to obtain the same release, we can save the 30 seconds or so required to download it and just return the one we have.

Obviously the number of OpenShift versions available makes it prohibitive to store all of them, so we maintain a "least recently used" (LRU) cache of releases.
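To make the flow above concrete, here is a minimal sketch of the consumer side (the Get and Cleanup names come from this discussion; the exact signatures and the generateManifests helper are illustrative assumptions, not the PR's actual code):

```go
// Sketch only: assumes Get returns a release handle exposing Path and Cleanup.
func prepareInstallConfig(ctx context.Context, installers *installercache.Installers, releaseID string) error {
	release, err := installers.Get(ctx, releaseID) // hands back a hardlink to the cached release
	if err != nil {
		return err // e.g. insufficient cache capacity or a download failure
	}
	// Removing the hardlink is what eventually lets the cached copy be evicted
	// and its space reclaimed, so always clean up when done.
	defer release.Cleanup(ctx)

	return generateManifests(release.Path) // hypothetical caller that runs openshift-install
}
```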

The cache is configured using the following environment variables

INSTALLER_CACHE_CAPACITY_GIB
The total capacity, in GiB, that is desired for the cache. The cache will maintain itself within this limit.

INSTALLER_CACHE_MAX_RELEASE_SIZE_GIB
This is an estimate of how large we think a release could get; we require that at least this much free space be present in the cache before we will allow it to download a release.

This could perhaps be improved or automated in the future but as skopeo and other tools only return compressed image sizes, it was not possible to obtain this from the release without downloading it (which defeats the purpose here.)

INSTALLER_CACHE_EVICTION_THRESHOLD_PERCENT
This setting tells the installer cache how "full" it may be before it starts to evict items from the cache (least recently used first).

INSTALLER_CACHE_RELEASE_FETCH_RETRY_INTERVAL_SECONDS
If the cache is too full to take another release, we do not permit further writes to the cache until the space has been freed. This means that we have to start rejecting requests.

There is a retry mechanism of course, and that's what this setting pertains to.

Because we are using hardlinks, an eviction of the original file does not immediately mean that the space will be freed. Only when the release is cleaned up after use will the space be reconciled. This interval should be ideally set to a duration that covers this length of time and nothing more.
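For reference, the four settings above could be collected into a config struct along these lines (a sketch: only the environment variable names are taken from this discussion, while the struct, field names, and the envconfig-style tags are assumptions):

```go
// Sketch only: field names and types are assumptions; the env var names are not.
type InstallerCacheConfig struct {
	CapacityGiB               int64 `envconfig:"INSTALLER_CACHE_CAPACITY_GIB"`                         // total cache budget
	MaxReleaseSizeGiB         int64 `envconfig:"INSTALLER_CACHE_MAX_RELEASE_SIZE_GIB"`                 // assumed worst-case size of a single release
	EvictionThresholdPercent  int   `envconfig:"INSTALLER_CACHE_EVICTION_THRESHOLD_PERCENT"`           // start evicting once the cache is this full
	FetchRetryIntervalSeconds int   `envconfig:"INSTALLER_CACHE_RELEASE_FETCH_RETRY_INTERVAL_SECONDS"` // wait before retrying a rejected fetch
}
```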

@paul-maidment force-pushed the MGMT-14453 branch 3 times, most recently from a12f31d to 20398ce, on January 30, 2025 18:36
@openshift-ci bot added the size/XL label (Denotes a PR that changes 500-999 lines, ignoring generated files.) and removed the size/XXL label on Jan 30, 2025
@paul-maidment
Contributor Author

/retest

@paul-maidment
Contributor Author

/test edge-subsystem-kubeapi-aws

@paul-maidment
Contributor Author

/test okd-scos-e2e-aws-ovn

@carbonin
Member

@paul-maidment there's no reason to constantly retest failing optional tests unless you think the specific failure is relevant to your changes.

Looking at that test's history it seems to be failing much more frequently than it's passing so I wouldn't worry about it.

@carbonin
Member

Still working on a full review, but one thing I'm noticing is that you have a lot of log info messages that I'm not sure are going to be super interesting once this is all up and running.

I'd consider whether we will really need to see these every time and either remove some or change them to debug.

I just don't want to clutter up the review with a bunch of comments saying "consider changing this to debug or removing it"

)

BeforeEach(func() {
// setup temp workdir
workDir, err = os.MkdirTemp("", "bootstrap-ignition-update-test-")
Expect(err).NotTo(HaveOccurred())
Expect(err).NotTo(HaveOccurred())
Member

I think this likely belongs down under installercache.New, right?

Contributor Author

Good catch!

for {
select {
case <-ctx.Done():
return nil, errors.Errorf("context cancelled or timed out while fetching release %s", releaseID)
Member

Context has an Err() method that should be used for contexts that are closed.
I'd add that here.

if !isCapacityError {
return nil, errors.Wrapf(err, "failed to get installer path for release %s", releaseID)
}
i.log.WithError(err).Errorf("insufficient installer cache capacity for release %s", releaseID)
Member

Shouldn't we always be able to evict something? How could we end up in this situation?

Contributor Author
@paul-maidment Jan 31, 2025

This is not the case and this scenario is very easy to reproduce.

  • Installer cache should be set to a capacity that is only enough for a single release.
  • Fetch 2 different releases
  • The second release will have this issue due to a pending hard link cleanup.

The capacity check performed here is of the amount of free space while accounting for hard links that have not yet been removed.

An eviction is removing release files, not hard links (unless we count the questionable "hard link pruning" functionality).

A release that we have deleted may still be in use by the caller, until that release is freed by calling Cleanup then the actual space will not be freed.

By the time we reached this line we have done the following

1: Checked the free space while accounting for hard links
2: Determined that we do not have enough space in which to store a release
3: Called evict and performed some eviction on the releases -- note that there may be no eviction candidates that are without hard links yet.
4: Returned to the top of the loop and checked the free space while accounting for hard links again.
5: Determined that we do not have enough space in which to store a release
6: Detected that this is a 'blocking' issue and that unless we have the space, we cannot write
7: Determined that the eviction has already been called and should not be called again as it will most likely not have an impact.

This is a "fail fast" operation to allow a retry to be attempted outside of the context of the Mutex.

Comment on lines +176 to +172
// Installer cache directory will not exist prior to first extraction
i.log.WithError(err).Warnf("skipping capacity check as first extraction will trigger creation of directory")
Member

Can we do this when the cache object is created so we don't have to check every time here?

Contributor Author
@paul-maidment Jan 31, 2025

You could argue that the release extraction code wants to retain responsibility for this as it knows the exact file and directory permissions required. Ultimately oc adm release extract performs the actions.

If a user were to manually run this command, they would not be expected to manage the permissions as it would be assumed that oc had made the right choices when it created the directory/files.

I wonder if we should keep the responsibility there?

}
if evictionAlreadyPerformed {
// We still don't have enough capacity after eviction, so we need to exit and retry
return nil, &errorInsufficientCacheCapacity{
Member

Why is this all in a loop if we're just going to exit on the second try? Would it not be simpler to just call GetDiskUsage again after evicting something?

Member

Given the loop I expected this to continue evicting releases until we had space for another one.
Why not do that?


Edit after reading to the bottom ... Isn't my suggestion what you're actually doing in the loop in the evict function? Why do we need this case then?

Contributor Author
@paul-maidment Jan 31, 2025

The reason we use the loop here? To avoid clumsy repetition of the same code, a loop feels appropriate here.

The check in the first loop (the one that runs twice and no more) is looking at the disk usage while accounting for hard links. This is to guarantee that we cover the case where we don't have enough disk space to perform even a single write.

As I see it, we must guarantee not to perform this write; this is why we count the space including hard links.

The eviction loop however, only accounts for the accumulated size of the files in the cache while not accounting for hard links.

The reason for this is that we are evicting these files, not the hard links and we should only base our eviction capacity calculations on what we are able to evict. If we were to check capacity including hard links during eviction, it would lead to 'over eviction' when called repeatedly as we are awaiting the deletion of hard links and that is beyond our control.

In the worst case scenario we could evict every cached release if we make this mistake.

if finfo.info.ModTime().Unix() < grace {
os.Remove(finfo.path)
}
}
Member

Just FYI it's tough to review stuff like this where you both move and slightly change something.
Given that this PR is already pretty big and the changes you made in this case are really a no-op, I think it would have been better to just not change this at all.

Now that you've done it, it's fine, but just wanted to make you aware.

Contributor Author

I will bear this in mind, I'll try to keep things aligned so they 'diff' better in future.
Thanks for raising this.

// prefer files without hardlinks first because
// 1: files without hardlinks are likely to be least recently used anyway
// 2: files without hardlinks will immediately free up storage
queues := i.splitQueueOnHardLinks(files)
Member

If we're using the actual space used on disk as our metric here why consider evicting files with hardlinks at all?
Removing one of those won't get us any space back.

The amount of extra engineering here is the reason I suggested we use number of releases rather than the actual size for this. This extra complexity is bound to make this fragile, but I understand if we don't want to change directions now.

Contributor Author

I think there could be some limited edge cases where pruning a release with a hard link could present a small advantage, but certainly nothing too significant.

It's safe to include them but in this case, I tend to agree that we should just filter them out.
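A sketch of the partitioning idea discussed in this thread (the splitQueueOnHardLinks name comes from the diff above; the fileInfo type and its fields are illustrative assumptions):

```go
// Sketch only: partition eviction candidates so that releases without extra
// hardlinks (whose removal frees space immediately) are considered first.
type fileInfo struct {
	path      string
	hardLinks uint64 // link count reported by the filesystem
}

func splitQueueOnHardLinks(files []fileInfo) [][]fileInfo {
	var unlinked, linked []fileInfo
	for _, f := range files {
		if f.hardLinks > 1 {
			linked = append(linked, f) // still referenced by an in-use hardlink
		} else {
			unlinked = append(unlinked, f) // evicting this frees space right away
		}
	}
	return [][]fileInfo{unlinked, linked}
}
```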

@@ -119,19 +145,162 @@ var _ = Describe("installer cache", func() {
}
Expect(l.Path).ShouldNot(BeEmpty())

time.Sleep(1 * time.Second)
time.Sleep(10 * time.Millisecond)
Member

Why is this sleep necessary? Can you add a comment explaining it if it really is needed?

Contributor Author

You are right, I just tested this function again and we don't need this for the assertions we are performing here.

I think I was attempting to simulate a small amount of latency between fetching a release using installers.Get(...) and calling Cleanup(...) on the release. In practice, however, this is not needed for this simple, non-parallel test.

// runParallelTest launches a batch of fetches from the installer cache in order to simulate multiple requests to the same node
// releases are automatically cleaned up as they are gathered
// returns the first error encountered or nil if no error encountered.
runParallelTest := func(params []launchParams) error {
Member

I'm not sure this kind of test belongs in unit-tests.
At best if/when it fails it will give inconsistent results.

I totally understand if you want to write these tests for yourself, but I'm not as sure about having them here.
Curious what @rccrdpccl thinks.

Contributor Author
@paul-maidment Jan 31, 2025

The intent here is to have these as smoke tests for the happy path.

The general idea is to have clear scenarios where depending on the scenario at least one of the parallel processes will fail or where none will fail.

All I expect in these cases is that any time there is a failure they will break the test so that we know there is a problem.

We are making sure that the installer cache is instantiated and sending multiple parallel requests; I think that this simulates the environment in which the cache will run reasonably well. Obviously it's not a full 'end to end' context.

If I am not mistaken, the "unit" under test is the "Get" function of installer cache (if not the cache itself)

This call has a mutex and we should ensure that it is being correctly used. A parallel test is an appropriate way to do this.

Steps have been taken to reduce the runtime of the code, and they do not hinder the speed of the unit tests; no tests are reported as 'slow'.

I don't see this as the comprehensive solution to testing this and expect that a lot more will be done in QE.
If we really want to push this further, we could do some e2e tests in openshift/release that push multiple parallel installation attempts.
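For context, the kind of parallel smoke test being discussed could look roughly like this (launchParams, Get, and Cleanup mirror names used in this thread; everything else is an assumed sketch, not the PR's actual test code):

```go
// Sketch of a parallel smoke test: fire off several Get/Cleanup cycles
// concurrently against one cache instance and surface the first error seen.
func runParallelFetches(ctx context.Context, cache *installercache.Installers, params []launchParams) error {
	var wg sync.WaitGroup
	errCh := make(chan error, len(params))
	for _, p := range params {
		wg.Add(1)
		go func(p launchParams) {
			defer wg.Done()
			release, err := cache.Get(ctx, p.releaseID) // exercises the mutex under contention
			if err != nil {
				errCh <- err
				return
			}
			release.Cleanup(ctx) // release the hardlink as soon as we are done
		}(p)
	}
	wg.Wait()
	close(errCh)
	for err := range errCh {
		if err != nil {
			return err // first error encountered, as the test helper above describes
		}
	}
	return nil
}
```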

Contributor Author

At best if/when it fails it will give inconsistent results.

How do you feel the results will be inconsistent? Can you point out an example in this case?

// 2: files without hardlinks will immediately free up storage
queues := i.splitQueueOnHardLinks(files)
for _, q := range queues {
for i.shouldEvict(totalSize) {
Member

Because we're evaluating this each loop it feels very much like we should not need the loop in get and should never hit the capacity error, right?

Doesn't this effectively ensure we'll evict releases until we have enough space to download a new one? Or am I misunderstanding something?

Contributor Author

Already discussed above, I think something has been misunderstood here.

…in the installer cache

This PR resolves multiple bugs within the installer cache. Given the poor condition of the current cache, it makes sense to fix them in a single PR.

* https://issues.redhat.com/browse/MGMT-14452
Installer cache removes in-used cached image when out of space
* https://issues.redhat.com/browse/MGMT-14453
INSTALLER_CACHE_CAPACITY small value cause to assisted-service crash
* https://issues.redhat.com/browse/MGMT-14457
Installer cache - fails to install when running parallel with same version
* Additionally, the cache did not respect limits, so this has been addressed here.

Fixes:

I have implemented fixes for each of the following issues.

* The mutex was ineffective because it was not instantiated correctly, leading to [MGMT-14452](https://issues.redhat.com//browse/MGMT-14452), [MGMT-14453](https://issues.redhat.com//browse/MGMT-14453).
* The naming convention for hardlinks was changed to be UUID based to resolve [MGMT-14457](https://issues.redhat.com//browse/MGMT-14457).
* The modified time must now be updated any time we either extract or use a release, not only for cached releases; previously this was causing premature pruning of hardlinks.
* LRU cache order updated to be based on microseconds instead of seconds.
* Eviction checks updated to consider the max release size and also the cache threshold.
* We now check there is enough space before writing.
* During eviction, releases without hard links will be evicted before releases with hard links.

openshift-ci bot commented Feb 2, 2025

@paul-maidment: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels

  • api-review Categorizes an issue or PR as actively needing an API review.
  • approved Indicates a PR has been approved by an approver from all required OWNERS files.
  • jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.
  • size/XL Denotes a PR that changes 500-999 lines, ignoring generated files.

5 participants