fix: Report duration of client.response metric in microseconds #515

Open · wants to merge 5 commits into base: develop
Conversation

okushchenko
Similar to palantir/witchcraft-go-server#652

Before this PR

The client.response metric was reported with millisecond resolution, even though the convention established by the Java implementation of Conjure is to report it in microseconds. As a result, response latencies were reported with values that appeared 1000x faster than they actually were.

After this PR

==COMMIT_MSG==
Report duration of client.response metric in microseconds.
==COMMIT_MSG==

You can verify the reasoning by running the following test locally:

```go
package metrics_test

import (
	"testing"
	"time"

	"github.com/palantir/pkg/metrics"
	"github.com/stretchr/testify/assert"
)

func TestTimer_Update(t *testing.T) {
	timerWithConversion := metrics.NewRootMetricsRegistry().Timer("with-conversion")
	timerWithConversion.Update(time.Second / time.Microsecond)
	// The value is incorrectly reported as 1,000 microseconds
	assert.Equal(t, int64(1_000), timerWithConversion.Max())

	timerWithoutConversion := metrics.NewRootMetricsRegistry().Timer("without-conversion")
	timerWithoutConversion.Update(time.Second)
	// The value is correctly reported as 1,000,000 microseconds
	assert.Equal(t, int64(1_000_000), timerWithoutConversion.Max())
}
```

Possible downsides?

This might cause some confusion, since the value will now be reported correctly: a user looking at a longer time frame might conclude that the endpoint responded much faster in the past and then got 1000x slower once this problem was fixed.
