Idea: Pipeline Mutexes #2828
Comments
/kind feature
We've built a queueing system to manage our way around this problem, so a +1 for it being a useful thing to tackle. I don't know whether it should be a core primitive or in the catalog, but in our use case it was necessary at a very early stage, and for deployments specifically it seems to me that having one deployment per app per environment at a time, and ideally in a sensible order, is going to be a very common requirement. So I'm leaning towards 'core'.
I'd really appreciate this as well, perhaps with Task granularity rather than pipeline. My use case for this has to do with cross-talk between concurrent runs of integration tests and database management. In an integration test scenario, for example, the tasks depend on an external resource. If that resource is stateful (like a database), some tasks may be rebuilding the database while others are executing tests that use it. I'd love to be able to single-thread pipeline runs through the integration test phase.
I also think this is a very common use case for a CI/CD Pipeline.
I got inspired by @tragiclifestories's suggestion of a queueing system as a workaround, so I made one too. I documented the steps - hopefully it's useful to someone else while this is pending: https://medium.com/@holly.k.cummins/using-lease-resources-to-manage-concurrency-in-tekton-builds-344ba84df297
Interesting! We took a different approach by storing the queue data in configmaps and defining all the queue operations as scripts that run in task steps. So no explicit modelling through CRDs, but it works well enough for our use case. Hopefully we'll get around to the blog-post stage of the project soon.
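A minimal sketch of that configmap-as-lock idea, assuming the task's service account is allowed to create and delete ConfigMaps; the names here are illustrative, not from the comment above:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: acquire-lock
spec:
  params:
    - name: lock-name
      default: pipeline-queue-lock
  steps:
    - name: acquire
      image: bitnami/kubectl:latest
      script: |
        #!/bin/sh
        # `kubectl create` fails if the ConfigMap already exists, so a
        # successful create doubles as acquiring the lock; retry until then.
        until kubectl create configmap "$(params.lock-name)"; do
          echo "lock is held, retrying in 5s..."
          sleep 5
        done
```

A matching release task (ideally run in a `finally` block so it also fires on failure) would delete the ConfigMap to let the next run in.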
Nice :)
Here is the use case we currently have. Imagine this simplified CD pipeline:
Currently we use a task in front of the section that polls a REST service to "ask to enter the section". The implementation of the REST service is specific to our pipeline and uses the Tekton API to analyse the state of all the PipelineRuns. It's ugly 😊 but it works so far.
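A rough sketch of that gate pattern; the service URL and query parameter are hypothetical stand-ins for the real REST service described above:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: enter-critical-section
spec:
  steps:
    - name: poll-gate
      image: curlimages/curl:latest
      script: |
        #!/bin/sh
        # Ask the external queueing service for permission to proceed;
        # it answers 200 only when no other run is inside the section.
        until curl -fs "http://pipeline-gate.example.svc/acquire?run=$(context.taskRun.name)"; do
          echo "section busy, polling again in 10s..."
          sleep 10
        done
```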
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
I'd prefer mutexes at both task and pipeline granularity.
Hi @pritidesai, is there any update about this issue? Thanks :)
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
+1
Dear Tekton team, is there any update on this? It would be great if there were an option to set pipeline runs to a serial mode. For example:
Kind regards
@julweber this is being explored in the tekton experimental repo: tektoncd/experimental#699 - @imjasonh shared an idea in that issue
Hey @jerop, sorry for the late reply. Thanks a lot for the link, I will have a look. Cheers,
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@afrittoli: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
Feels like this could be part of a possible solution to the discussion we've been having over at tektoncd/plumbing#888 (comment). If we want to take this forward, I think what will really help is fleshing out the use cases that this feature would solve. @afrittoli this might not be quite the behavior you'd want for some of our common dogfooding use cases (though it would be better than having a race!):
For PR-triggered PipelineRuns/TaskRuns, I think what you often want is to run the newest one and cancel the others (e.g. imagining a PR being updated after kicking off PipelineRuns/TaskRuns)
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lifecycle frozen
Related to tektoncd/community#716
This was an idea that @k floated to me a while back, but I finally got around to making an issue to discuss it. What I'm curious about:
Details intentionally vague - this is a "should we do this?" issue, not a "how we'll do this" issue.
Idea
I may want to control how Pipelines run in relation to others and ensure that only one Pipeline for a given selector can run at a time (hence a "mutex").
I may want to reject a new Pipeline if a similar one is already running, or queue it up and just make sure it does not run in parallel. This might be because:
Possible solution
Have a mechanism to select conditions to allow Pipeline execution, as well as a strategy for what to do in response.
Examples
If a new Pipeline is created that was labelled as a pull request, cancel existing runs.
Only run one pipeline at a time that was labelled as being started by a push to master. (This does not guarantee ordering.)
Deny new pipeline create requests if they match a pipeline currently running.
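Purely as an illustration of the selector-plus-strategy shape (no such CRD exists in Tekton today; every field here is made up):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineMutex          # hypothetical type
metadata:
  name: push-to-master
spec:
  selector:                  # which PipelineRuns contend for this mutex
    matchLabels:
      triggers.tekton.dev/trigger: push-to-master
  strategy: Queue            # or CancelExisting / Deny, per the examples above
```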
Alternatives
Implement as a task
Cancellation could be handled by having the first step of every pipeline include something along the lines of the sketch below.
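A sketch of that first step, not a snippet from the original issue: the label value `my-pipeline` is illustrative, `tekton.dev/pipeline` is the label Tekton sets on PipelineRuns, `$(context.pipelineRun.name)` is the built-in variable for the current run's name, and the service account needs RBAC to delete PipelineRuns.

```yaml
# First step of the Pipeline's first Task: delete every other run
# of the same Pipeline, leaving only this one.
- name: clobber-other-runs
  image: bitnami/kubectl:latest
  script: |
    #!/bin/sh
    kubectl delete pipelinerun \
      -l tekton.dev/pipeline=my-pipeline \
      --field-selector "metadata.name!=$(context.pipelineRun.name)"
```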
This would clobber any other Pipelines with a particular label.
Queueing could be handled by having a Condition that runs `kubectl get` for running pods, and only proceeds if a condition is true (see the sketch below). This is difficult since you'd have to get creative in inspecting runtime information of other runs (e.g. are they also in a wait state, or are they actually running?). It also creates container waste, since the pipelines would all be running. Deny could not be implemented this way.
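A sketch of how that Condition check might look, assuming the (since-deprecated) v1alpha1 Condition type and treating any run whose first status condition is still Unknown as in flight; the label value is illustrative:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Condition
metadata:
  name: no-other-run-in-flight
spec:
  check:
    image: bitnami/kubectl:latest
    command: ["/bin/sh", "-c"]
    args:
      - |
        # Count runs of this Pipeline that have not reached a terminal state;
        # the run evaluating this Condition is itself one of them, so allow 1.
        active=$(kubectl get pipelinerun -l tekton.dev/pipeline=my-pipeline \
          -o jsonpath='{range .items[?(@.status.conditions[0].status=="Unknown")]}{.metadata.name}{"\n"}{end}' \
          | wc -l)
        [ "$active" -le 1 ]
```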