Related to: https://atlasgo.io/concepts/dev-database and https://atlasgo.io/concepts/url (for the Docker driver). I hope this is the right place, since it seems more appropriate here than raising it as an "issue".

Is the dev database concept intended to be purely ephemeral, with persistence worth considering only for performance or startup time? If so, is it acceptable, or even appropriate, to use the same dev database for multiple scenarios? Or should each environment/scenario have its own, dedicated dev database?

For example:

```hcl
env "local" {
  src     = "schema.hcl"
  url     = "postgres://user:pass@localhost:5432/db_name?sslmode=disable"
  dev     = "postgres://user:pass@localhost:5432/db_name_shadow?sslmode=disable"
  schemas = ["public"]

  migration {
    dir    = "file://migrations"
    format = "atlas"
  }
}

env "test" {
  src     = "schema.hcl"
  url     = "postgres://user:pass@localhost:5432/db_name_test?sslmode=disable"
  dev     = "postgres://user:pass@localhost:5432/db_name_shadow?sslmode=disable"
  schemas = ["public"]

  migration {
    dir    = "file://migrations"
    format = "atlas"
  }
}
```

Versus:
src = "schema.hcl"
url = "postgres://user:pass@localhost:5432/db_name?sslmode=disable"
dev = "postgres://user:pass@localhost:5432/db_name_shadow?sslmode=disable"
schemas = ["public"]
migration {
dir = "file://migrations"
format = "atlas"
}
}
env "test" {
src = "schema.hcl"
url = "postgres://user:pass@localhost:5432/db_name_test?sslmode=disable"
- dev = "postgres://user:pass@localhost:5432/db_name_shadow?sslmode=disable"
+ dev = "postgres://user:pass@localhost:5432/db_name_test_shadow?sslmode=disable"
schemas = ["public"]
migration {
dir = "file://migrations"
format = "atlas"
}
} The next question I have that isn't clear in the documentation (at least to me), is whether a dev database approach would or should apply to a CI/CD or Production environment. My initial assumption is that the dev database is used for local schema validation checks that can't be caught statically, but at the point where you've reached a CI/CD pipeline (for localized, automated integration testing) or production (for a production migration), you're in a forward-only application state. The part that makes me unsure, and is a bit muddied by #1018, is whether the dev database is required for the output of What I envision for a database migration pipeline is something along the lines of:
If a dev database is required for those to run accurately, it's easy enough to run a service image with Postgres in the workflow, but I figured I would ask before finding out through trial and error 😄.
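Concretely, something like the following GitHub Actions sketch is what I have in mind; the credentials, database name, and `--latest 1` depth are illustrative placeholders rather than anything taken from the docs:

```yaml
# Hypothetical CI sketch: spin up Postgres as a service container and point
# Atlas at a throwaway database as the dev (shadow) database for linting.
name: migrations
on: pull_request

jobs:
  lint:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_USER: user        # placeholder credentials
          POSTGRES_PASSWORD: pass
          POSTGRES_DB: dev           # throwaway dev database
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - name: Install Atlas
        run: curl -sSf https://atlasgo.sh | sh
      - name: Lint migrations against the ephemeral dev database
        run: |
          atlas migrate lint \
            --dir "file://migrations" \
            --dev-url "postgres://user:pass@localhost:5432/dev?sslmode=disable" \
            --latest 1
```

The service container is torn down with the job, so the dev database never outlives the workflow run.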
Replies: 1 comment
When performing diff and check operations, the dev database is locked, and once those operations complete, Atlas cleans up after itself. Based on discussion with @masseelch, this means a single dev/shadow database can be used for multiple purposes and schemas, so long as they are not being used simultaneously.

There are still benefits in CI/CD, if for nothing more than a final "last chance" check before applying to a production environment. Following on from the above, since the dev database only holds ephemeral changes at any point in time, you can, for example, use a service image with your database of choice as the dev database in GitHub Actions, enabling the checks without requiring anything to persist into production.
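For illustration, a "last chance" step in a deployment job might look something like the sketch below; this assumes deployment happens via `atlas migrate apply`, and `PROD_URL` is a hypothetical secret name:

```yaml
# Hypothetical deployment-job steps: rehearse the pending migrations with
# --dry-run (prints the SQL without executing it) before the real apply.
- name: Dry-run pending migrations
  run: |
    atlas migrate apply \
      --dir "file://migrations" \
      --url "${{ secrets.PROD_URL }}" \
      --dry-run
- name: Apply migrations
  run: |
    atlas migrate apply \
      --dir "file://migrations" \
      --url "${{ secrets.PROD_URL }}"
```

The dry-run rehearsal surfaces exactly what would run against production, which gives reviewers one final look before the forward-only apply.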