Is there an existing issue for this?
Kong version ($ kong version)
3.7.1 / 3.9.0
Current Behavior
I am having problems with the metrics / Prometheus plugin after bumping to the 3.7.1 release. (I have since bumped Kong to 3.9.0 and the issue still persists.)
I have the following entry in my logs:
[lua] prometheus.lua:1020: log_error(): Error getting 'request_latency_ms_bucket{service="customer-support",route="customer-support_getcards",workspace="default",le="00080.0"}': nil, client: 10.145.40.1, server: kong_status, request: "GET /metrics HTTP/1.1", host: "10.145.12.54:8100"
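For reference, a minimal sketch of how the affected series can be inspected directly on a node's status endpoint; the address is just the one from the log line above and is only a placeholder:

```python
# Minimal sketch: print the raw histogram series for the affected route from
# one Kong node's status endpoint. The address is a placeholder taken from
# the log line above.
import urllib.request

STATUS_URL = "http://10.145.12.54:8100/metrics"

with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
    body = resp.read().decode("utf-8", errors="replace")

# Print every bucket line for the affected route so the exposed 'le' values
# can be checked by eye.
for line in body.splitlines():
    if "request_latency_ms_bucket" in line and 'route="customer-support_getcards"' in line:
        print(line)
```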
Interesting facts:
It is always the same service and route that hit this error; in our case it is always the same two routes for the same bucket.
When we revert back to 3.6.1, the problem goes away.
After a few months we bumped Kong to 3.9.0, and the problem started happening again after a couple of hours, for the same routes and buckets.
It goes away with a pod rotation but comes back after a while.
I already tried:
Increasing the shared dictionary with nginx_http_lua_shared_dict: 'prometheus_metrics 15m' (memory usage now stands at around 20%).
Comparing the /metrics output between pods: one pod contains the le="80" bucket for that route, while another pod is missing it (see the sketch below).
We are running our Kong in AWS EKS, upgraded from 3.6.1.
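To show what the pod comparison above means in practice, here is a minimal sketch that diffs the exposed le buckets per pod. The pod names and addresses are placeholders (only the first address comes from the log line), and it assumes each pod's status endpoint is reachable on port 8100 as in our setup:

```python
# Minimal sketch: fetch /metrics from each Kong pod's status endpoint and
# report which 'le' buckets are missing for the affected route on each pod.
# Pod names and addresses are placeholders, not the real deployment values.
import re
import urllib.request

PODS = {
    "kong-pod-a": "http://10.145.12.54:8100/metrics",
    "kong-pod-b": "http://10.145.12.55:8100/metrics",
}
ROUTE_LABEL = 'route="customer-support_getcards"'


def le_buckets(url):
    """Return the set of 'le' values exposed for the affected route at url."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    found = set()
    for line in text.splitlines():
        if "request_latency_ms_bucket" in line and ROUTE_LABEL in line:
            m = re.search(r'le="([^"]+)"', line)
            if m:
                found.add(m.group(1))
    return found


per_pod = {name: le_buckets(url) for name, url in PODS.items()}
all_buckets = set().union(*per_pod.values())
for name, found in per_pod.items():
    # Buckets present on some pod but absent on this one.
    missing = sorted(all_buckets - found, key=float)
    print(f"{name}: missing le buckets -> {missing if missing else 'none'}")
```

On a healthy pod the printed list is empty; on an affected pod it shows the bucket (for example le="00080.0") that triggers the log_error above.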
Expected Behavior
The bucket should not disappear, but if it does for any reason, I would expect Kong to be able to recover from the inconsistent state (maybe by resetting the metric?).
Steps To Reproduce
No response
Anything else?
No response