
Jenkins Remote Execution Custom Task #697

Open
imjasonh opened this issue Jan 22, 2021 · 7 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@imjasonh
Member

imjasonh commented Jan 22, 2021

Opening this issue to collect ideas, discussion, interest, etc., for a custom task controller that executes a Jenkins Job on a remote Jenkins installation, watches it to completion, reports success/failure, and maybe emits some results.

This would be the reverse of Vibhav's Jenkins Plugin for Tekton, which starts and watches Tekton executions from Jenkinsland. This new controller would let Jenkins users slowly adopt Tekton, either by having their Jenkins workloads kick off Tekton workloads, or now, vice versa. Or perhaps both, horrifyingly. 😨

This custom task could introduce a new CRD type that describes the Jenkins Job to create, possibly with parameters (and workspaces? Maybe?), which pipeline authors would reference in the pipeline spec:

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
...
spec:
  tasks:
  - name: my-jenkins-job
    taskRef:
      apiVersion: example.dev/v0
      kind: JenkinsJob
      name: my-jenkins-job

When run, the custom task controller would look up an example.dev/v0 JenkinsJob custom resource object named my-jenkins-job, which might look like:

apiVersion: example.dev/v0
kind: JenkinsJob
metadata:
  name: my-jenkins-job
spec:
  job:
    # something goes here, I don't know what exactly

...then send that config to a remote Jenkins installation using the Remote Access API. After submitting, the controller would update the Run with any information about the Job it created, then poll the Job by repeatedly calling EnqueueAfter, like wait-task does, until the Job completes (or times out).
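
As a rough illustration of that trigger-and-poll flow, here is a minimal Go sketch against the Remote Access API, assuming API-token basic auth; jenkinsURL, jobName, user, and token are placeholder names, and a real controller would run the status check once per re-enqueue rather than in a blocking loop:

// A hypothetical sketch (not the eventual controller code) of triggering
// a Jenkins build and checking its status via the Remote Access API.
package jenkins

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// triggerBuild POSTs to /job/<name>/build; Jenkins answers 201 with a
// Location header pointing at the queue item for the new build. (The queue
// item's api/json exposes executable.url once the build leaves the queue.)
func triggerBuild(jenkinsURL, jobName, user, token string) (string, error) {
	req, err := http.NewRequest("POST", jenkinsURL+"/job/"+jobName+"/build", nil)
	if err != nil {
		return "", err
	}
	req.SetBasicAuth(user, token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return "", fmt.Errorf("triggering %s: unexpected status %d", jobName, resp.StatusCode)
	}
	return resp.Header.Get("Location"), nil // e.g. <jenkinsURL>/queue/item/123/
}

// buildStatus GETs <buildURL>api/json (build URLs end in "/") and reports
// whether the build is still running and, once finished, its result
// (SUCCESS, FAILURE, ABORTED, ...).
func buildStatus(buildURL, user, token string) (building bool, result string, err error) {
	req, err := http.NewRequest("GET", buildURL+"api/json", nil)
	if err != nil {
		return false, "", err
	}
	req.SetBasicAuth(user, token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, "", err
	}
	defer resp.Body.Close()
	var body struct {
		Building bool   `json:"building"`
		Result   string `json:"result"` // null (decoded as "") while the build runs
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false, "", err
	}
	return body.Building, body.Result, nil
}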


Now, the part where I plead for help: I have basically no experience with Jenkins; I've only read documentation, but this seems doable as far as I can tell. Input from someone with more experience here would be very useful.

@gabemontero

@waveywaves - FYI in case you were unaware ^^

@waveywaves == Vibhav :-)

@gabemontero

@imjasonh - in case this has not bubbled up in your RH onboarding - a possible historical reference for launching Jenkins and Jenkins pipelines from k8s (in this case, k8s == OpenShift)

https://docs.openshift.com/container-platform/4.6/builds/build-strategies.html#builds-strategy-pipeline-build_build-strategies

@akram and @waveywaves now own ^^ but I was the original owner

I'm in no way trying to (at least yet) endorse any carry-over from all that work to what you are trying to accomplish here, but perhaps we should have a more detailed voice-to-voice discussion

@akram

akram commented Jan 28, 2021

Hi @imjasonh ,

can you PTAL at this: https://github.com/tektoncd/catalog/blob/master/task/trigger-jenkins-job/0.1/README.md?
It should probably do what you are looking for.

cc @chmouel

@chmouel
Member

chmouel commented Jan 28, 2021

There is another Jenkins task in the catalog which is a bit more generic:

https://github.com/tektoncd/catalog/blob/master/task/jenkins/0.1/README.md

@imjasonh
Member Author

imjasonh commented Jan 28, 2021

Yeah those catalog tasks are a great inspiration, and I think the custom task controller would likely do mostly the same thing.

The difference in this case is that instead of having the Job triggered and watched from one Pod for each ongoing Job, there would be one centralized controller responsible for starting and watching all ongoing Jobs. I would expect this to be more efficient, and more fault tolerant.

Instead of having N containers each effectively running job = start(); while true { poll(job) || break; sleep(dur); }, there would be one controller watching for new Job requests, starting them, and adding them to a global queue of requests to poll for job status. If the controller restarts, it can pick up where it left off, and we could even have multiple workers consuming the queue.
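
As a rough sketch of that centralized queue (assuming a client-go rate-limited workqueue; pollFunc is a hypothetical stand-in for the Jenkins status check above, not an agreed-on design):

// A hypothetical sketch of the centralized polling loop.
package jenkins

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// pollFunc reports whether the build tracked by key has finished.
type pollFunc func(key string) (done bool, err error)

// runPoller drains a queue of in-flight build keys. Instead of one sleeping
// container per build, one goroutine (or several) re-enqueues each
// unfinished build with AddAfter; after a controller restart the queue can
// be repopulated from the live Run objects to pick up where it left off.
func runPoller(q workqueue.RateLimitingInterface, poll pollFunc, every time.Duration) {
	for {
		key, shutdown := q.Get()
		if shutdown {
			return
		}
		done, err := poll(key.(string))
		switch {
		case err != nil:
			q.AddRateLimited(key) // transient failure: retry with backoff
		case done:
			q.Forget(key) // build finished: stop tracking it
		default:
			q.AddAfter(key, every) // still running: poll again later
		}
		q.Done(key)
	}
}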

I've previously prototyped a similar controller for running remote builds on Google Cloud Build, and I think this would operate in mostly the same way.

@tekton-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot tekton-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 28, 2021
@imjasonh
Member Author

/lifecycle frozen

@tekton-robot tekton-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 28, 2021