This repository has been archived by the owner on Oct 29, 2019. It is now read-only.

[WIP] Use the registries operator to handle custom certificates for registries #702

Open · wants to merge 1 commit into master

Conversation

@ereslibre (Contributor) commented Dec 21, 2018:

Depends on SUSE/velum#706

TODO:

  • Use the proper registries operator image (not from the docker hub)


@MalloZup (Contributor) left a comment:
I got this clarified.

@ereslibre (Contributor, Author):

The point behind this work is to build operators for the new stack while, at the same time, being able to "backport" these operators to the old CaaSP; this would mean treating operators as we treat other addons (such as tiller or kubedns).

This would help us in that the operator is the one actually managing the logic of handling these resources, while Salt would just submit the resources (the YAML documents themselves). This would allow an easier integration of the new pieces into the old world, without introducing many changes in the way we manage everything. It would also let us incrementally reduce the Salt code as soon as those operators are ready and deployed on the cluster (old CaaSP), so they can manage the resources created by Salt (that come from the UI).

The neat part, IMO, is that Salt would be a simple mapping from database -> YAML documents [external API] submitted to the cluster, with operators handling these documents and performing changes to the cluster as we do in Kubic today. I see this as a good balance between the old and new worlds, and a way to start "backporting" operators to the old world, progressively removing logic from Salt.
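The "database -> YAML document" mapping described above could be sketched like this. This is an illustration only: the `registry_to_manifest` helper, the `registries.suse.com/v1` group/version, and the `spec` field names are all assumptions, not the operator's actual schema.

```python
import json

def registry_to_manifest(name, url, certificate):
    # Map one database/pillar registry entry to a Registry custom
    # resource document. Group/version and field names are made up
    # here for illustration.
    return {
        "apiVersion": "registries.suse.com/v1",
        "kind": "Registry",
        "metadata": {"name": name, "namespace": "kube-system"},
        "spec": {"url": url, "certificate": certificate},
    }

# kubectl accepts JSON manifests as well as YAML, so json.dumps already
# yields a document ready to be piped to `kubectl apply -f -`.
doc = json.dumps(registry_to_manifest(
    "my-registry", "https://registry.example.com", "PEM data"))
```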

@MalloZup (Contributor) commented Jan 8, 2019:

@ereslibre thanks for the info 💌 I got the info today anyway, thanks 🐉

secret_name = secret_name_prefix + secret_name
secret_file_path = temp_file_with_contents(secret_contents)
kubectl('create secret generic %s --from-file=%s=%s' % (secret_name, secret_key, secret_file_path), namespace=namespace)
os.remove(secret_file_path)
Review comment (Contributor):

Instead of using this pattern for creating a temporary file, I think it would be better to have a kubectl_apply function that accepts two different parameters, file= and string=. Then, when using a string, we could use the better Python idiom:

with tempfile.NamedTemporaryFile(mode="w") as fp:
    fp.write("the manifest contents")
    fp.flush()
    # run the kubectl apply on fp.name now
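The suggested kubectl_apply could look roughly like this. A sketch only: the function name and parameters come from the comment above, and the `runner` argument is an addition so the sketch can be exercised without a real kubectl binary.

```python
import subprocess
import tempfile

def kubectl_apply(file=None, string=None, runner=subprocess.run):
    # Apply a manifest either from a path (file=) or from an in-memory
    # string (string=). Exactly one of the two must be given.
    if (file is None) == (string is None):
        raise ValueError("pass exactly one of file= or string=")
    if file is not None:
        return runner(["kubectl", "apply", "-f", file], check=True)
    # NamedTemporaryFile (unlike TemporaryFile) has a real path kubectl
    # can read, and the with-context removes the possibly sensitive
    # file as soon as we leave it.
    with tempfile.NamedTemporaryFile(mode="w", suffix=".yaml") as fp:
        fp.write(string)
        fp.flush()
        return runner(["kubectl", "apply", "-f", fp.name], check=True)
```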

Reply (Contributor, Author):

Oh, are you suggesting we use kubectl apply to create the secrets too? We can do it that way, yes. In that case I could reuse the caasp_kubernetes_kubectl.kubectl logic, which uses stdin instead of a temporary file.

Reply (Contributor):

I think you could use either stdin (preferred) or a temporary file, but if you use a temporary file it would be better to use a with context, as it cleans up the file (which can contain sensitive data) when leaving the context.
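The preferred stdin variant could be sketched as below; again, `runner` is an injected stand-in so the example runs without a cluster, and the function name is hypothetical.

```python
import subprocess

def kubectl_apply_stdin(manifest, runner=subprocess.run):
    # Feed the manifest through stdin: `kubectl apply -f -` reads from
    # stdin, so the sensitive data never touches the disk at all.
    return runner(["kubectl", "apply", "-f", "-"],
                  input=manifest.encode(), check=True)
```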

return deserialize(current_resources)


def reconcile_desired_resources(custom_resource_name, namespace, desired_resources=[]):
Review comment (Contributor):

I think this function tries to be too smart. I would mimic many other Salt modules and have a caasp_kubernetes_crd.absent (equivalent to file.missing) and a caasp_kubernetes_crd.managed (equivalent to file.managed). Then we could skip all this code for detecting the current CRD and deciding whether to delete it...
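The suggested pair of states could look roughly like this, following Salt's usual return-dict shape. A sketch under assumptions: the injected `kubectl` callable stands in for `__salt__['caasp_kubernetes_kubectl.kubectl']`, and the command strings are illustrative.

```python
def managed(name, manifest, namespace="kube-system", kubectl=None):
    # Ensure the resource described by `manifest` exists; kubectl apply
    # under the hood (the k8s API handles create-vs-update).
    ret = {"name": name, "changes": {}, "result": False, "comment": ""}
    try:
        kubectl("apply -f -", stdin=manifest, namespace=namespace)
    except Exception as exc:
        ret["comment"] = "apply failed: {0}".format(exc)
        return ret
    ret.update(result=True, changes={name: "applied"},
               comment="resource applied")
    return ret

def absent(name, namespace="kube-system", kubectl=None):
    # Ensure the named resource is gone; deleting an already-absent
    # resource is not treated as an error.
    ret = {"name": name, "changes": {}, "result": False, "comment": ""}
    try:
        kubectl("delete registry {0} --ignore-not-found".format(name),
                namespace=namespace)
    except Exception as exc:
        ret["comment"] = "delete failed: {0}".format(exc)
        return ret
    ret.update(result=True, changes={name: "deleted"},
               comment="resource absent")
    return ret
```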

Reply (Contributor, Author):

I agree it tries to be too smart; however, I don't see a way around it. When a user modifies the settings in the UI and we run an orchestration, we know which settings we want, and to remove the extra settings we need to reconcile somehow with the real state of the cluster, and that has to happen somewhere. Am I missing an alternative approach?

Reply (Contributor):

I think we have two cases:

  1. Cleanup of registries not present in the pillar: for each registry obtained with kubectl get registries, if it is not managed by us, do a caasp_kubernetes_crd.absent (a kubectl delete under the hood). This could be done with a Jinja loop.
  2. Adding/modifying a registry: handled by kubectl apply calls (I think it doesn't really matter whether we are creating the CR for the first time or modifying the resource: the k8s API will deal with updating it if that is the case). We could iterate over all the certificates in get_certs() and caasp_kubernetes_crd.managed each one. This could be done with a Jinja loop.
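Case 1 above reduces to a set difference, which keeps the Jinja loop trivial. A minimal sketch, assuming registry names are the identifying key (the function name is hypothetical):

```python
def registries_to_remove(cluster_registries, pillar_registries):
    # Everything `kubectl get registries` reports that the pillar no
    # longer mentions should be handed to the absent state.
    return sorted(set(cluster_registries) - set(pillar_registries))
```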

Reply (Contributor, Author):

Okay, so that moves a bit of the logic into the rendering. I'm not sure I'm very happy with that, because with the current solution the SLS files look simpler, and for future operators we will have less replicated logic in the templates and will render less clutter.

If this were a state/module for others to consume, I'd agree present/absent is the way to go; but since this is for us, and for making it easier to integrate this pattern into future operators in this mixed world, I think I prefer simpler SLS files with the bulk of the logic in the Python module.

I don't oppose what you propose, either.

Reply (Contributor):

I agree it would mean more code in the SLS files, but (and this is a completely personal view) I think it would be better because:

  1. Explicit is better than implicit, and it would be clearer what the code does.
  2. I think it would follow Salt's philosophy more closely, where things are declared as present, missing and so on, instead of using deus ex machina states that reconcile desired things.

But, as I said, this is my personal take on this...

Reply (Contributor):

Personally, I think the SaltStack way is to have managed, absent, etc. logic.

Yes, we will have more redundant code in the SLS files, but if we gain simplicity it's better to have it 🚀

Reply (Contributor, Author):

I'm not sure in this case, because we would be trading the "reconcile" logic living in one single place (inside the module), where all the SLS files deploying and managing operator CRs would use it, for duplicating that reconcile logic in every single SLS file managing operator resources, so I'm not sure at all that we would be gaining simplicity...

from __future__ import absolute_import


def reconcile(name, namespace='kube-system', desired_resources=[]):
Review comment (Contributor):

I think this tries to be too smart, too. A caasp_kubernetes_secret.absent and a caasp_kubernetes_secret.managed would be more explicit and clearer, in my view.



def kubectl(command, **kwargs):
return __salt__['caasp_kubernetes_kubectl.kubectl'](command, **kwargs)
Review comment (Member):

I don't know if caasp_kubernetes_kubectl.kubectl is already doing that, but maybe it would be a good idea to log any errors.
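One possible shape for that suggestion, sketched with an injectable runner (the real caasp_kubernetes_kubectl.kubectl may already log; this wrapper and its command format are assumptions):

```python
import logging
import subprocess

log = logging.getLogger(__name__)

def kubectl(command, runner=subprocess.run):
    # Run a kubectl command and log any failure instead of silently
    # passing the result through. `runner` defaults to subprocess.run
    # and is injectable so the sketch runs without a cluster.
    result = runner(["kubectl"] + command.split(),
                    capture_output=True, text=True)
    if result.returncode != 0:
        log.error("kubectl %s failed (rc=%s): %s",
                  command, result.returncode, result.stderr)
    return result
```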

@MalloZup (Contributor):

This GitHub PR has been inactive for 30 days. Is this GitHub PR still needed? Please close or update it accordingly.
This reminder is autogenerated by https://github.com/MalloZup/blacktango
