
Support a Builder resource for managing builders via Pulumi #65

Open
AaronFriel opened this issue May 15, 2024 · 2 comments
Labels
awaiting-feedback kind/enhancement Improvements or new features

Comments

AaronFriel commented May 15, 2024

Hello!

  • Vote on this issue by adding a 👍 reaction
  • If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)

Issue description

As seen in #55, the docker-build.Image resource creates a builder by default, which may in turn create a compute resource (a container, a Kubernetes Deployment, etc.) that lives outside the lifecycle of the Docker provider. Because these resources are unmanaged, they persist beyond the life of any Pulumi program and can pollute shared compute resources.

A second challenge is that the docker-build.Image resource provides no mechanism to create other kinds of builders. This means users must turn to another provider (likely more than one) both to create the compute resources and to connect a Docker client to the shared builder.

Proposal

Support a docker-image.Builder resource for explicitly configuring a persistent, shared Builder, whose lifecycle is managed as a Pulumi resource. This resource could be created in the same stack and used as a dependency for docker-image.Image resources, or in a shared stack consumed via StackReference.

Additionally, modify docker-image.Image to accept a more verbose configuration: the settings for connecting to an existing builder.

Alternatively, the docker-build.Image resource could be extended to take a "builder config" used to create the builder if it is not already present.

A pulumi destroy on the docker-build.Builder should be equivalent to docker builder rm ...; that is, the Builder becomes a lifecycle-managed resource.

This would be useful whether the Builder is a Kubernetes builder, a Docker container builder, etc.
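A program using the proposed resource might look like the following sketch. Everything about the Builder resource here (its driver and driverOpts inputs, and the resource itself) is hypothetical; only the Image resource and its builder.name input exist in the docker-build provider today:

```typescript
import * as dockerbuild from "@pulumi/docker-build";

// Hypothetical: provision and lifecycle-manage a shared Kubernetes builder,
// so that `pulumi destroy` behaves like `docker builder rm kube`.
const builder = new dockerbuild.Builder("kube", {
    driver: "kubernetes",
    driverOpts: { namespace: "buildkit" },
});

// Existing resource: build using the managed builder instead of letting the
// provider create an implicit docker-container builder.
const image = new dockerbuild.Image("app", {
    context: { location: "./app" },
    tags: ["registry.example.com/app:latest"],
    builder: { name: builder.name },
});
```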

Background, example in Docker CLI

Suppose we have a kube cluster and configure a builder on it.

$ kubectl create ns buildkit # Docker won't do this for you
$ docker builder create --name kube --driver kubernetes --driver-opt=namespace=buildkit --bootstrap --use
[+] Building 5.6s (1/1) FINISHED
=> [internal] booting buildkit                                                                       5.6s
=> => waiting for 1 pods to be ready                                                                 5.5s
kube

Creating the builder changes state in two places. Locally, it writes a file to ~/.docker/buildx/instances/kube (named after the builder):

$ cat ~/.docker/buildx/instances/kube | head -c45
{"Name":"kube","Driver":"kubernetes","Nodes":

It also creates a Deployment on the Kubernetes cluster, a durable change on the remote cluster:

$ kubectl get deployment -n buildkit
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kube0   1/1     1            1           46s

Users may want to manage the durable part of this (the long-lived Kubernetes Deployment, or a container on a remote Docker Engine) separately from the local part (the local Docker configuration connecting to it), so that they can declare a single builder in one stack and use it from multiple other stacks. That is, a stack can declare a docker-build.Builder which is used by zero or more docker-build.Image resources in that stack, or consumed via StackReference from other stacks.
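The consumption half of that cross-stack pattern already works with existing APIs; a minimal sketch, assuming a shared stack myorg/builders/prod exports a builderName output (the stack path and output name are made up, while StackReference and the Image builder.name input are existing Pulumi APIs):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as dockerbuild from "@pulumi/docker-build";

// Look up the builder name exported by the shared stack.
const builders = new pulumi.StackReference("myorg/builders/prod");
const builderName = builders.requireOutput("builderName").apply(n => n as string);

const image = new dockerbuild.Image("app", {
    context: { location: "./app" },
    tags: ["registry.example.com/app:latest"],
    builder: { name: builderName },
});
```

The missing piece is that nothing today creates the builder in the shared stack, which is the gap the proposed Builder resource would fill.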

Originally posted by @AaronFriel in #55 (comment)

AaronFriel changed the title to Support a Builder resource for managing builders via Pulumi on May 15, 2024
blampe (Contributor) commented May 15, 2024

> Support a docker-image.Builder resource for explicitly configuring a persistent, shared Builder

This needs significant clarification around how it would model the docker-container and remote drivers, as well as how local configuration would actually work.

The docker-container driver only provisions a container on the host running the program -- how is that modeled by a Builder resource in shared state?

The remote driver simply configures SSH settings -- what would that provision? How does that apply configuration to my host if the resource was already created on another host?

> A pulumi destroy on the docker-build.Builder should be equivalent to docker builder rm ..., that is, the Builder becomes a lifecycle managed resource.

How would my local docker-container builder get cleaned up if the destroy happens on your machine? Similarly for my local remote settings.

> This would be useful whether the Builder is a Kubernetes builder, a Docker container builder, etc.

None of what you've described here seems relevant or even possible for docker-container or remote drivers, and the persistent Deployment behind the kubernetes driver can already be managed via Pulumi: bootstrap the builder, import it with pulumi import kubernetes:apps/v1:Deployment buildkit ns/builder0, and omit --bootstrap from docker buildx create.
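That adoption step can also be written directly in a program with Pulumi's import resource option; a sketch using the buildkit/kube0 names from the kubectl output earlier in the thread (the full spec, elided here, must match the live Deployment and is printed by pulumi import):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Adopt the already-bootstrapped builder Deployment into Pulumi state
// instead of creating a new one.
const buildkit = new k8s.apps.v1.Deployment("buildkit", {
    metadata: { namespace: "buildkit", name: "kube0" },
    // spec: { ... } copied from the code `pulumi import` generates
}, { import: "buildkit/kube0" });
```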

The functionality gap exists when users need to run docker buildx create ... before they can run their program. Given that this needs to run per-host, it would be appropriate as provider configuration and not a cloud resource.

> which may create an unmanaged compute resource (container, Kubernetes Deployment, etc.)

The provider does not create unmanaged Kubernetes Deployments. It will only create a docker-container builder if no builder was specified and a docker-container builder isn't already available. If no builder was specified but a docker-container builder already exists, it will re-use it. (If it doesn't, that's a bug.)

blampe added the awaiting-feedback and kind/enhancement labels May 15, 2024
kmosher commented Sep 6, 2024

I'm finding myself wanting something that wraps docker buildx create in resource-like semantics (create if it doesn't exist, return it if it does, update if changed, and possibly clean up when I'm done using it).

Like, imagine setting up a k8s cluster to run cloud builds, passing that into builder = Builder('my-cool-build-cluster', {driver='kubernetes'}) and then doing Image(..., builder=builder.name). That'd be neat! And trying to manage the builder registration with just shelling out to docker would require logic to check if the named builder exists and if not create it, and we're like halfway to Resource behavior there.

Except, you're right that it makes absolutely no sense to store Resource state about these things in a way that is shared among hosts. So you might be right that provider options working in this resource-ish fashion are the right pattern? Or just a provider helper function that does this resource-ish behavior? It's either that or a very wonky resource that always refreshes itself.
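The check-then-create half of that logic is easy to sketch. The function names and output parsing below are illustrative, and only docker buildx ls and docker buildx create (both real CLI commands used earlier in this thread) are invoked:

```typescript
import { execSync } from "node:child_process";

type Plan = { action: "create" | "reuse"; name: string };

// Pure decision step: given the builder names `docker buildx ls` reported,
// decide whether the named builder must be created or can be reused.
function planBuilder(existing: string[], name: string): Plan {
    return existing.includes(name)
        ? { action: "reuse", name }
        : { action: "create", name };
}

// Imperative step: shell out to the Docker CLI. The active builder is
// marked with a trailing `*` in `buildx ls` output, so strip it.
function ensureBuilder(name: string, driver = "kubernetes"): Plan {
    const out = execSync("docker buildx ls", { encoding: "utf8" });
    const names = out
        .split("\n")
        .slice(1) // skip the header row
        .map((line) => line.split(/\s+/)[0].replace(/\*$/, ""))
        .filter(Boolean);
    const plan = planBuilder(names, name);
    if (plan.action === "create") {
        execSync(`docker buildx create --name ${name} --driver ${driver}`);
    }
    return plan;
}
```

Splitting the decision from the side effect keeps the "halfway to Resource behavior" part testable without a Docker daemon.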
