Controlling max parallel jobs per pipeline #2591
There isn't a configuration for this today, but it should be possible if there's demand and the use cases make sense. In the meantime you can run pipelines in a namespace with a resource limit such that no more than X CPUs are available to tasks, and those over the cap will queue until others finish. If you're just trying to limit the resource footprint, this is likely the best way to express the limitation. Can you give more details about why you want to limit concurrency of tasks? https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/
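For reference, the namespace-level approach above can be expressed with a ResourceQuota. This is only a sketch with assumed names and values; for the quota to count, every task container needs resource requests (either set explicitly or defaulted via a LimitRange, as in the linked doc).

```yaml
# Sketch only: cap the total CPU that task pods in this namespace may request,
# so pods beyond the cap cannot be admitted until earlier ones finish.
# The namespace name and the numbers are illustrative assumptions.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pipeline-cpu-cap
  namespace: ci-pipelines
spec:
  hard:
    requests.cpu: "8"
    limits.cpu: "16"
```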
Using Kubernetes resource limits is one of the ways, but it is pretty hard to achieve because the limits I need have to be set dynamically, based on different aspects. Controlling concurrency is badly needed in a CI/CD system: even though everything could run in parallel, the user should get an option to cap it because of resource limitations/availability. Note: currently I have achieved it with our Consul server, using consul lock -n ${concurrency} -child-exit-code ${jobkey} "bash $script $@". There are a few problems with this approach:
But the same could be added directly as a feature to Tekton, using a semaphore to lock the maximum number of concurrent jobs.
@imjasonh I could contribute to this if someone can give me some hints on the code structure and the standards to follow, plus any other technical issues, if they exist, that would conflict with this behaviour.
Thanks, I think this seems like a reasonable addition, and I would be happy to help you with it. First, what kind of API addition are you envisioning? What's the minimum addition we can make that we could extend later if we need to? Is there any precedent in other workflow tools we could borrow/steal from? Depending on the size of the change, we'd probably ask for a design doc answering those questions and describing the use case (which you've done above, thanks for that!)
At first glance, I think an initial version of this feature can be just a single property on the pipeline spec.
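Purely as an illustration of what that single property might look like, here is a hypothetical snippet; the field name maxParallelTasks is invented for this sketch and is not an existing Tekton API.

```yaml
# Hypothetical sketch: maxParallelTasks is NOT a real Tekton field.
# It only illustrates the kind of single-value property being discussed.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-run
spec:
  pipelineRef:
    name: my-pipeline
  maxParallelTasks: 3   # at most 3 TaskRuns of this PipelineRun would execute at once
```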
One way of handling this is by using a Pod quota for the Namespace.
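As a concrete (hedged) example of that suggestion, a ResourceQuota can cap the number of pods in the namespace, which indirectly caps how many TaskRun pods run at once; the name and the value are illustrative. Note that pods over the quota are rejected at admission rather than queued, so how a PipelineRun reacts depends on the Tekton version.

```yaml
# Sketch only: limit the total number of pods in the namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pipeline-pod-cap
  namespace: ci-pipelines   # assumed namespace dedicated to pipeline workloads
spec:
  hard:
    pods: "5"               # at most 5 pods (and hence task pods) at a time
```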
Of course, Kubernetes resource limits are an option, as you mentioned, but they are not always practical. For example, my namespace is not running only Tekton. And from a configuration and usability point of view, using a pod quota is much more complex than setting a max-parallel value on the pipeline.
@ysaakpr Configuring pipeline-wide concurrency limits definitely seems easiest, but I wonder if that's expressive/powerful enough to satisfy more complex use cases. We should explore other options, even if only to be able to dismiss them with well thought out reasons.

Consider a pipeline that does two highly parallelizable things (e.g., initializing a database, then later deploying code to multiple AZs), where each of those parallelizable things has a different concurrency cap -- it might make sense to kick off 1000 of one but far fewer of the other. (To be clear, this example isn't reason enough by itself to discount the pipeline-wide limit.)

One way to express the different concurrency levels would be to group tasks together, then express concurrency limits per-group. Is that worth the additional config required to express/understand/visualize this grouping? I'm not sure. Would it be possible to support group-wise limits and pipeline-wide limits side-by-side? I truly have nothing to offer but open-ended questions. :)
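To visualize the group-wise idea, here is a purely hypothetical sketch; concurrencyGroup and groupConcurrency do not exist in Tekton and are invented only to illustrate "concurrency limits per group of tasks".

```yaml
# Hypothetical sketch: none of the concurrency fields below are real Tekton API.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: init-and-deploy
spec:
  tasks:
    - name: init-db-shards
      taskRef:
        name: init-db-shard
      concurrencyGroup: db-init    # hypothetical: grouped under a shared cap
    - name: deploy-azs
      taskRef:
        name: deploy
      concurrencyGroup: deploy
  # hypothetical per-group caps, independent of any pipeline-wide cap
  groupConcurrency:
    db-init: 50
    deploy: 2
```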
@imjasonh that's a good thought. There are already two other tickets about task grouping in a pipeline, which actually discuss pipeline task grouping. As you mentioned, the idea of concurrency should not be limited to just the pipeline level; I agree that for a more complex pipeline, configuring this at the task-group level would be an amazing feature. Pipeline-level concurrency would be the starting point.
/kind feature
/priority important-longterm
How can I contribute to this? Is there a discussion forum where I can also be part of the design/implementation discussions?
+1 for this feature.
See also #2828.
I think it would not be so difficult to add logic for this, e.g. right before we create a new TaskRun, and again later when a running one finishes.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
/remove-lifecycle rotten
@vdemeester: Reopened this issue.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
/remove-lifecycle rotten
/lifecycle frozen
Hi all, is there any stable solution available in Tekton for controlling concurrent builds for pipelines?
A Google search for controlling the parallel jobs per pipeline brought me here :)
What is the best approach for now?
I am working on a shell script, something along these lines ...
Another similar feature would be cancelling pipelines based on other types of pipelines that are running.
What is the way to control concurrency? The pipeline has 100 independent steps, but I don't want all 100 to run together. I also wish to adjust the concurrency for different pipeline runs.