In #591 we determined that we could fix the Datadog verbose trace logging by setting DD_TRACE_LOG_STREAM_HANDLER=false. It's not clear whether this is a good long-term fix or whether we're using an unstable internal feature that's likely to change out from under us.
Background:
We added this change because when we enabled Datadog in Kubernetes, ddtrace was behaving as if DD_TRACE_DEBUG=true, even though that should be false by default, and we're not setting it anywhere that we can find. The effect was large log volume on stdout or stderr, with one healthcheck request producing 136 lines totalling 22 kB of output. (Private ticket link: https://help.datadoghq.com/hc/en-us/requests/1643101.) Explicitly setting DD_TRACE_DEBUG to false didn't work, and none of the other documented trace-related configs seemed to help.
We made the fix in our Helm charts, setting DD_TRACE_LOG_STREAM_HANDLER to false: https://github.com/edx/helm-charts/pull/122 (or actually to no_thank_you, due to a Helm charts YAML-templating bool/string issue)
Why the fix may not be stable or appropriate:
The flag is described as being useful for when ddtrace-run isn't in effect, but in our case, it is.
Alternatives:
Figure out why it's behaving this way in the first place, and fix that. Maybe the Datadog cluster agent's admission controller is setting DD_TRACE_DEBUG somehow?
Alternatively, we might be able to dump the environment variables to the log, although we'd have to be careful about sensitive information. Maybe just dump the names.
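A minimal sketch of the names-only idea, to avoid leaking secret values while still revealing whether something like DD_TRACE_DEBUG is being injected into the container (the logger name and log level here are assumptions, not an established convention in our services):

```python
import logging
import os

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

# Log only the variable *names*, sorted for readability. Values are
# deliberately omitted because the environment may contain secrets.
env_var_names = sorted(os.environ.keys())
log.info("Environment variable names: %s", ", ".join(env_var_names))
```

If DD_TRACE_DEBUG shows up in that list even though we never set it, that would point at something (e.g. the admission controller) injecting it.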
Datadog support suggested calling logging.getLogger("ddtrace").disable() early in the server startup, although this would be a more invasive code change we'd have to propagate to all IDAs:
This works because ddtrace uses Python's standard logging library. Additionally, investigating this makes me suspect that the root cause is ddtrace's logging library inheriting settings from other logging configurations in your code.
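A hedged sketch of what that startup change could look like. Note that the standard library's Logger objects don't actually have a disable() method (logging.disable(level) is a module-level function), so the stdlib equivalents of support's suggestion are the .disabled attribute or raising the logger's level:

```python
import logging

# Run early in server startup (e.g. in the WSGI entry point), before
# ddtrace emits anything. Works because ddtrace logs via stdlib logging.
ddtrace_logger = logging.getLogger("ddtrace")

# Drop all ddtrace log output entirely:
ddtrace_logger.disabled = True

# Or, less drastically, suppress only the debug/info noise while keeping
# warnings and errors visible:
# ddtrace_logger.setLevel(logging.WARNING)
```

The setLevel variant may be the safer default, since it keeps us from going blind to real ddtrace errors.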
Whether we figure this out on our own or reopen the support ticket: is there a way to redirect the errors/warnings to another file, so that we could set specific retention rules for them in DD, and so that we won't be blind to errors that may matter (and may be ongoing)?
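Since ddtrace logs through the standard logging module, one possible answer is to give its logger a dedicated file handler instead of disabling it. A sketch under assumptions (the file path and level are placeholders, not anything our services currently use):

```python
import logging

# Route ddtrace's output to its own file rather than stdout/stderr, so it
# can be shipped and retained separately without discarding it entirely.
ddtrace_logger = logging.getLogger("ddtrace")

# Keep ddtrace records out of the root logger's stdout handlers.
ddtrace_logger.propagate = False

# Placeholder path: in Kubernetes this would need to be a writable volume.
handler = logging.FileHandler("ddtrace.log")
handler.setLevel(logging.WARNING)  # keep warnings/errors, drop debug noise
ddtrace_logger.addHandler(handler)
```

A Datadog agent file tail on that path could then apply its own retention rules, which would address both halves of the question above.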