Different database password in Django and PostgreSQL containers #159
In the current example template, the Django and PostgreSQL containers are in separate DeploymentConfigs. I have a CI pipeline that processes the template, and since I intentionally leave the DATABASE_PASSWORD parameter blank, a new password is generated every time the pipeline runs. However, this doesn't trigger a ConfigChange in either DeploymentConfig, because the environment variables are secret references (watching the referenced Secret for changes seems like it would be the obvious thing to do). The Django container does get rebuilt as part of my CI pipeline and triggers an ImageChange, so I end up with a Django container that has a different database password in its environment variables than my PostgreSQL container.

Has anyone figured out how to solve this problem? I thought maybe we could put both containers in the same DeploymentConfig, or have both containers restart whenever the Django image changes, but neither seems ideal. Is there a way to specify an annotation so that the deployments are recreated when the Secret is updated?
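There is no trigger that watches a Secret directly, but one common workaround is a "checksum annotation": stamp a hash of the Secret into both pod templates so that a regenerated password always produces a pod-template change in both DeploymentConfigs. A minimal sketch, assuming the resources are named dc/django, dc/postgresql, and secret/django-secrets (substitute whatever your processed template actually creates):

```sh
# Hash the Secret's current contents. The resource names here are assumptions.
CHECKSUM=$(oc get secret django-secrets -o yaml | sha256sum | cut -d' ' -f1)

# Stamp the hash into both pod templates. A regenerated password now always
# changes both templates, so both deployments roll out with the same password.
for dc in django postgresql; do
  oc patch dc/"$dc" -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/secret\":\"$CHECKSUM\"}}}}}"
done

# With ConfigChange triggers enabled, the patch alone causes a redeploy; with
# triggers disabled, kick each deployment explicitly:
oc rollout latest postgresql
oc rollout latest django
```

Comments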
Obvious, perhaps, but probably not trivial to implement, since it means watching a separate resource for changes when that resource is referenced by a primary resource.
You could also have a post-deployment hook on your Django DeploymentConfig that triggers the PostgreSQL DeploymentConfig. Or you could have your pipeline twiddle an otherwise unused value (a dummy env var, or an annotation on the pod template) on the PostgreSQL DeploymentConfig to force it to redeploy.
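For reference, the "twiddle an unused value" approach works with stock oc commands. A minimal sketch, assuming the database DeploymentConfig is named postgresql; the variable and annotation names are placeholders:

```sh
# Either edit changes the pod template, which is what forces a new deployment.

# Option 1: bump a dummy environment variable.
oc set env dc/postgresql REDEPLOY_STAMP="$(date +%s)"

# Option 2: touch an annotation on the pod template instead.
STAMP=$(date +%s)
oc patch dc/postgresql -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-stamp\":\"$STAMP\"}}}}}"

# If triggers are disabled, start the rollout explicitly:
oc rollout latest postgresql
```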
@bparees I should've mentioned that I'm trying to do a rolling update on the Django container to avoid any downtime at all, but this doesn't seem possible with the current template (at least, I don't know how). If I start the PostgreSQL container during a post-deployment hook of the Django DeploymentConfig, the old replication controller has already scaled to zero and the new replication controller is serving requests, so my application is unavailable until the database container is ready. If I do it as a pre-deployment hook, the application becomes unavailable until the new replication controller has scaled up.

I have 2 replicas for Django and 1 replica for PostgreSQL. I have disabled all triggers on my deployments, so the only way new deployments are rolled out is manually (or through my CI pipeline). If I run the following steps in my CI, I would expect zero downtime, but I can confirm this is not the case.
In these steps, the database container is not started from a pre- or post-deployment hook, but right after oc rollout latest django has returned. What I expect to happen:
What happens instead is that not only do I see a 500 error from my Django app (expected, due to #3 above), but for a period of time my application is simply unavailable (I get an OpenShift-generated page with the message "The application is currently not serving requests at this endpoint. It may not have been started or is still starting.").
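One ordering that avoids the route going dark entirely is to finish the database rollout before touching the Django DeploymentConfig at all, rather than starting it from a hook. A sketch, assuming the dc names from the template:

```sh
# Roll the database first and block until its new pod is ready, so the Django
# rollout never starts while the database is down. Names are assumptions.
oc rollout latest postgresql
oc rollout status dc/postgresql   # waits until the new deployment is complete

# Now roll Django; the Rolling strategy replaces its 2 replicas gradually,
# so the route always has at least one pod to send traffic to.
oc rollout latest django
oc rollout status dc/django

# Note: old Django pods still hold the old password until they are replaced,
# so ordering alone does not eliminate the brief window of 500s described
# above -- only the "not serving requests" outage.
```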