Describe the bug

When I tried the Kuma Canary Deployments example, I got the following error:

```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Synced 17m flagger podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less than desired generation
Warning Synced 16m flagger podinfo-primary.test not ready: waiting for rollout to finish: 0 of 2 (readyThreshold 100%) updated replicas are available
Warning Synced 16m (x3 over 17m) flagger Error checking metric providers: prometheus not avaiable: running query failed: request failed: Get "http://prometheus-server.mesh-observability:80/api/v1/query?query=vector%281%29": dial tcp 10.96.250.180:80: connect: connection refused
Normal Synced 16m flagger Initialization done! podinfo.test
Normal Synced 10m flagger New revision detected! Scaling up podinfo.test
Warning Synced 9m32s flagger canary deployment podinfo.test not ready: waiting for rollout to finish: 0 of 2 (readyThreshold 100%) updated replicas are available
Normal Synced 9m2s flagger Starting canary analysis for podinfo.test
Normal Synced 9m2s flagger Pre-rollout check acceptance-test passed
Normal Synced 9m2s flagger Advance podinfo.test canary weight 5
Warning Synced 6m32s (x5 over 8m32s) flagger Halt advancement no values found for kuma metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found
Warning Synced 6m2s flagger Rolling back podinfo.test failed checks threshold reached 5
Warning Synced 6m2s flagger Canary failed! Scaling down podinfo.test
```
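Before the canary even starts, the events show Flagger failing its Prometheus health check with "connection refused". A minimal way to verify the Prometheus side (a sketch, assuming the tutorial's mesh-observability namespace and prometheus-server Service, both taken from the error message above):

```sh
# "connection refused" on the Service's ClusterIP usually means the Service
# exists but has no ready endpoints behind it.
kubectl -n mesh-observability get svc prometheus-server
kubectl -n mesh-observability get endpoints prometheus-server
kubectl -n mesh-observability get pods

# Run the same probe Flagger uses (the vector(1) query from the logs):
kubectl -n mesh-observability port-forward svc/prometheus-server 9090:80 &
curl 'http://127.0.0.1:9090/api/v1/query?query=vector(1)'
```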
The Flagger logs:

```
{"level":"info","ts":"2024-08-05T03:17:05.094Z","caller":"flagger/main.go:149","msg":"Starting flagger version 1.38.0 revision b6ac5e19aa7fa2949bbc8bf37a0f6c1e31b1745d mesh provider kuma"}
{"level":"info","ts":"2024-08-05T03:17:05.095Z","caller":"clientcmd/client_config.go:659","msg":"Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work."}
{"level":"info","ts":"2024-08-05T03:17:05.095Z","caller":"clientcmd/client_config.go:659","msg":"Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work."}
{"level":"info","ts":"2024-08-05T03:17:05.100Z","caller":"flagger/main.go:441","msg":"Connected to Kubernetes API v1.30.0"}
{"level":"info","ts":"2024-08-05T03:17:05.100Z","caller":"flagger/main.go:294","msg":"Waiting for canary informer cache to sync"}
{"level":"info","ts":"2024-08-05T03:17:05.100Z","caller":"cache/shared_informer.go:313","msg":"Waiting for caches to sync for flagger"}
{"level":"info","ts":"2024-08-05T03:17:05.202Z","caller":"cache/shared_informer.go:320","msg":"Caches are synced for flagger"}
{"level":"info","ts":"2024-08-05T03:17:05.202Z","caller":"flagger/main.go:301","msg":"Waiting for metric template informer cache to sync"}
{"level":"info","ts":"2024-08-05T03:17:05.202Z","caller":"cache/shared_informer.go:313","msg":"Waiting for caches to sync for flagger"}
{"level":"info","ts":"2024-08-05T03:17:05.302Z","caller":"cache/shared_informer.go:320","msg":"Caches are synced for flagger"}
{"level":"info","ts":"2024-08-05T03:17:05.302Z","caller":"flagger/main.go:308","msg":"Waiting for alert provider informer cache to sync"}
{"level":"info","ts":"2024-08-05T03:17:05.302Z","caller":"cache/shared_informer.go:313","msg":"Waiting for caches to sync for flagger"}
{"level":"info","ts":"2024-08-05T03:17:05.403Z","caller":"cache/shared_informer.go:320","msg":"Caches are synced for flagger"}
{"level":"error","ts":"2024-08-05T03:17:05.405Z","caller":"flagger/main.go:208","msg":"Metrics server http://prometheus-server.mesh-observability:80 unreachable running query failed: request failed: Get \"http://prometheus-server.mesh-observability:80/api/v1/query?query=vector%281%29\": dial tcp 10.96.250.180:80: connect: connection refused","stacktrace":"main.main\n\t/workspace/cmd/flagger/main.go:208\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:271"}
{"level":"info","ts":"2024-08-05T03:17:05.405Z","caller":"server/server.go:45","msg":"Starting HTTP server on port 8080"}
{"level":"info","ts":"2024-08-05T03:17:05.405Z","caller":"controller/controller.go:191","msg":"Starting operator"}
{"level":"info","ts":"2024-08-05T03:17:05.405Z","caller":"controller/controller.go:200","msg":"Started operator workers"}
{"level":"info","ts":"2024-08-05T03:18:08.802Z","caller":"controller/controller.go:312","msg":"Synced test/podinfo"}
{"level":"info","ts":"2024-08-05T03:18:15.426Z","caller":"router/kubernetes_default.go:175","msg":"Service podinfo-canary.test created","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:18:15.445Z","caller":"router/kubernetes_default.go:175","msg":"Service podinfo-primary.test created","canary":"podinfo.test"}
{"level":"error","ts":"2024-08-05T03:18:15.446Z","caller":"controller/events.go:39","msg":"Error checking metric providers: prometheus not avaiable: running query failed: request failed: Get \"http://prometheus-server.mesh-observability:80/api/v1/query?query=vector%281%29\": dial tcp 10.96.250.180:80: connect: connection refused","canary":"podinfo.test","stacktrace":"github.com/fluxcd/flagger/pkg/controller.(*Controller).recordEventErrorf\n\t/workspace/pkg/controller/events.go:39\ngithub.com/fluxcd/flagger/pkg/controller.(*Controller).advanceCanary\n\t/workspace/pkg/controller/scheduler.go:207\ngithub.com/fluxcd/flagger/pkg/controller.CanaryJob.Start.func1\n\t/workspace/pkg/controller/job.go:35"}
{"level":"info","ts":"2024-08-05T03:18:15.463Z","caller":"canary/deployment_controller.go:323","msg":"Deployment podinfo-primary.test created","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:18:15.464Z","caller":"controller/events.go:45","msg":"podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less than desired generation","canary":"podinfo.test"}
{"level":"error","ts":"2024-08-05T03:18:45.413Z","caller":"controller/events.go:39","msg":"Error checking metric providers: prometheus not avaiable: running query failed: request failed: Get \"http://prometheus-server.mesh-observability:80/api/v1/query?query=vector%281%29\": dial tcp 10.96.250.180:80: connect: connection refused","canary":"podinfo.test","stacktrace":"github.com/fluxcd/flagger/pkg/controller.(*Controller).recordEventErrorf\n\t/workspace/pkg/controller/events.go:39\ngithub.com/fluxcd/flagger/pkg/controller.(*Controller).advanceCanary\n\t/workspace/pkg/controller/scheduler.go:207\ngithub.com/fluxcd/flagger/pkg/controller.CanaryJob.Start.func1\n\t/workspace/pkg/controller/job.go:39"}
{"level":"info","ts":"2024-08-05T03:18:45.419Z","caller":"controller/events.go:45","msg":"podinfo-primary.test not ready: waiting for rollout to finish: 0 of 2 (readyThreshold 100%) updated replicas are available","canary":"podinfo.test"}
{"level":"error","ts":"2024-08-05T03:19:15.416Z","caller":"controller/events.go:39","msg":"Error checking metric providers: prometheus not avaiable: running query failed: request failed: Get \"http://prometheus-server.mesh-observability:80/api/v1/query?query=vector%281%29\": dial tcp 10.96.250.180:80: connect: connection refused","canary":"podinfo.test","stacktrace":"github.com/fluxcd/flagger/pkg/controller.(*Controller).recordEventErrorf\n\t/workspace/pkg/controller/events.go:39\ngithub.com/fluxcd/flagger/pkg/controller.(*Controller).advanceCanary\n\t/workspace/pkg/controller/scheduler.go:207\ngithub.com/fluxcd/flagger/pkg/controller.CanaryJob.Start.func1\n\t/workspace/pkg/controller/job.go:39"}
{"level":"info","ts":"2024-08-05T03:19:15.434Z","caller":"router/kubernetes_default.go:175","msg":"Service podinfo.test created","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:19:15.434Z","caller":"controller/scheduler.go:257","msg":"Scaling down Deployment podinfo.test","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:19:15.468Z","caller":"router/kuma.go:105","msg":"TrafficRoute podinfo created","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:19:15.484Z","caller":"controller/events.go:33","msg":"Initialization done! podinfo.test","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:25:15.439Z","caller":"controller/events.go:33","msg":"New revision detected! Scaling up podinfo.test","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:25:45.429Z","caller":"controller/events.go:45","msg":"canary deployment podinfo.test not ready: waiting for rollout to finish: 0 of 2 (readyThreshold 100%) updated replicas are available","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:26:15.436Z","caller":"controller/events.go:33","msg":"Starting canary analysis for podinfo.test","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:26:15.444Z","caller":"controller/events.go:33","msg":"Pre-rollout check acceptance-test passed","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:26:15.459Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 5","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:26:45.435Z","caller":"controller/events.go:45","msg":"Halt advancement no values found for kuma metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:27:15.441Z","caller":"controller/events.go:45","msg":"Halt advancement no values found for kuma metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:27:45.431Z","caller":"controller/events.go:45","msg":"Halt advancement no values found for kuma metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:28:15.440Z","caller":"controller/events.go:45","msg":"Halt advancement no values found for kuma metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:28:45.438Z","caller":"controller/events.go:45","msg":"Halt advancement no values found for kuma metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:29:15.441Z","caller":"controller/events.go:45","msg":"Rolling back podinfo.test failed checks threshold reached 5","canary":"podinfo.test"}
{"level":"info","ts":"2024-08-05T03:29:15.458Z","caller":"controller/events.go:45","msg":"Canary failed! Scaling down podinfo.test","canary":"podinfo.test"}
To Reproduce
The steps and definitions are the same as in https://docs.flagger.app/tutorials/kuma-progressive-delivery?fallback=true
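One thing worth double-checking when reproducing: the tutorial installs the Prometheus stack with kumactl, and the later "no values found for kuma metric request-success-rate" halts are consistent with the same unreachable Prometheus rather than a missing load test (the acceptance-test webhook clearly ran). A sketch, assuming a kumactl version matching the installed Kuma control plane:

```sh
# (Re)install the observability stack the tutorial depends on; this creates
# the mesh-observability namespace with prometheus-server.
kumactl install observability | kubectl apply -f -

# Wait for Prometheus before re-triggering the canary; the deployment name
# is assumed from the Service name seen in the Flagger logs.
kubectl -n mesh-observability rollout status deploy/prometheus-server
```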