
Balanced channel measures RTT, hoping to stay within AZ (and save $) #794

Merged
38 commits merged, Jun 1, 2020. Changes shown from 22 commits.

Commits
5c39335
Balanced measures RTT using OPTIONS requests
iamdanfox May 26, 2020
bc5b31d
Compile
iamdanfox May 26, 2020
109b05d
Handle OPTIONS
iamdanfox May 26, 2020
4661549
Update simulations
iamdanfox May 26, 2020
17c248e
Update tests for rtt sampling
iamdanfox May 27, 2020
a5e7531
Accumulate RTT average
iamdanfox May 27, 2020
e398af0
Sample all channels, with two sequential calls
iamdanfox May 27, 2020
0701a64
Also log accumulated ones
iamdanfox May 27, 2020
bef11a4
Just store the min
iamdanfox May 27, 2020
2284f62
Compute score based on the range of observed rtts
iamdanfox May 27, 2020
961b240
Delete misleading method
iamdanfox May 27, 2020
3c2dc94
Fix unit tests
iamdanfox May 28, 2020
9514f91
Update simulations
iamdanfox May 28, 2020
cb3b440
streams are fine here
iamdanfox May 28, 2020
9398d9a
test for rate limiter
iamdanfox May 28, 2020
b17e311
Simulations
iamdanfox May 28, 2020
3beac0f
Smaller diff
iamdanfox May 28, 2020
c40950f
Add generated changelog entries
iamdanfox May 28, 2020
c913334
Don't allow multiple samples to run at the same time
iamdanfox May 28, 2020
7005529
Return the min of the last 5 measurements
iamdanfox May 28, 2020
4c4ce91
Move sorting
iamdanfox May 28, 2020
7f9db17
Refactor
iamdanfox May 28, 2020
64a955b
Pull everything out to a dedicated 'RttSampler' class
iamdanfox May 28, 2020
2389a06
Use OptionalLong instead of Long.MAX_VALUE as special value
iamdanfox May 28, 2020
3adc16c
be more immutable, reduce diff
iamdanfox May 28, 2020
eb76abb
Feature flag it off
iamdanfox May 28, 2020
4f73a90
Allow servers to enable it with BALANCED_RTT
iamdanfox May 29, 2020
f364205
Re-run simulations
iamdanfox May 29, 2020
6f9804a
Fix tests
iamdanfox May 29, 2020
d81eb17
Merge remote-tracking branch 'origin/develop' into dfox/rtt
iamdanfox May 29, 2020
c0a90b4
Merge remote-tracking branch 'origin/develop' into dfox/rtt
iamdanfox May 29, 2020
384cf55
Merge branch 'dfox/rtt' of ssh://github.com/palantir/dialogue into df…
iamdanfox May 29, 2020
0128c4e
Ensure we send a good user agent with these OPTIONS requests
iamdanfox May 29, 2020
1af6a22
More CR
iamdanfox May 29, 2020
f156637
Move logic to RttSampler
iamdanfox May 29, 2020
f62429d
Appease errorprone
iamdanfox May 29, 2020
d873cd1
Minimise diff
iamdanfox May 29, 2020
f6f6917
Remember to close the response!
iamdanfox Jun 1, 2020
6 changes: 6 additions & 0 deletions changelog/@unreleased/pr-794.v2.yml
@@ -0,0 +1,6 @@
type: feature
feature:
description: Balanced channel now biases towards whichever node has the lowest latency,
which should reduce AWS spend by routing requests within AZ.
links:
- https://github.com/palantir/dialogue/pull/794
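For illustration, here is a hedged sketch of how such a latency bias could be scored: each node's best observed RTT is normalized over the range observed across all nodes, then turned into a small constant penalty (0 to 3) added to that node's score. All names here are hypothetical; the actual logic lives in BalancedNodeSelectionStrategyChannel and may differ.

```java
/**
 * Illustrative sketch only (hypothetical names): map a node's best observed
 * RTT onto the [bestRtt, worstRtt] range across all nodes and award a small
 * constant penalty of 0-3 score points, biasing selection towards lower latency.
 */
final class RttScoreSketch {
    static int rttPenalty(long rttNanos, long bestRttNanos, long worstRttNanos) {
        if (worstRttNanos <= bestRttNanos) {
            // All nodes look equally fast (or we have no spread): no penalty.
            return 0;
        }
        double normalized = (double) (rttNanos - bestRttNanos) / (worstRttNanos - bestRttNanos);
        return (int) Math.round(normalized * 3);
    }
}
```

Under this sketch the fastest node gets +0 and the slowest +3, consistent with the `.containsExactly(0, 3)` assertion in the tests in this diff.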

Large diffs are not rendered by default.

@@ -381,6 +381,7 @@ private static boolean safeToRetry(HttpMethod httpMethod) {
// in theory PUT and DELETE should be fine to retry too, we're just being conservative for now.
case POST:
case PATCH:
case OPTIONS:
return false;
}

@@ -18,6 +18,7 @@

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
@@ -76,8 +77,8 @@ void when_one_channel_is_in_use_prefer_the_other() {
for (int i = 0; i < 200; i++) {
channel.maybeExecute(endpoint, request);
}
verify(chan1, times(199)).maybeExecute(any(), any());
verify(chan2, times(1)).maybeExecute(any(), any());
verify(chan1, times(199)).maybeExecute(eq(endpoint), any());
verify(chan2, times(1)).maybeExecute(eq(endpoint), any());
}

@Test
@@ -88,8 +89,8 @@ void when_both_channels_are_free_we_get_roughly_fair_tiebreaking() {
for (int i = 0; i < 200; i++) {
channel.maybeExecute(endpoint, request);
}
verify(chan1, times(99)).maybeExecute(any(), any());
verify(chan2, times(101)).maybeExecute(any(), any());
verify(chan1, times(99)).maybeExecute(eq(endpoint), any());
verify(chan2, times(101)).maybeExecute(eq(endpoint), any());
}

@Test
@@ -98,8 +99,8 @@ void when_channels_refuse_try_all_then_give_up() {
when(chan2.maybeExecute(any(), any())).thenReturn(Optional.empty());

assertThat(channel.maybeExecute(endpoint, request)).isNotPresent();
verify(chan1, times(1)).maybeExecute(any(), any());
verify(chan2, times(1)).maybeExecute(any(), any());
verify(chan1, times(1)).maybeExecute(eq(endpoint), any());
verify(chan2, times(1)).maybeExecute(eq(endpoint), any());
}

@Test
@@ -111,13 +112,13 @@ void a_single_4xx_doesnt_move_the_needle() {
clock.read() < start + Duration.ofSeconds(10).toNanos();
incrementClockBy(Duration.ofMillis(50))) {
channel.maybeExecute(endpoint, request);
assertThat(channel.getScores())
assertThat(channel.getScoresForTesting().map(c -> c.getScore()))
.describedAs("A single 400 at the beginning isn't enough to impact scores", channel)
.containsExactly(0, 0);
}

verify(chan1, times(99)).maybeExecute(any(), any());
verify(chan2, times(101)).maybeExecute(any(), any());
verify(chan1, times(99)).maybeExecute(eq(endpoint), any());
verify(chan2, times(101)).maybeExecute(eq(endpoint), any());
}

@Test
@@ -127,25 +128,118 @@ void constant_4xxs_do_eventually_move_the_needle_but_we_go_back_to_fair_distribu

for (int i = 0; i < 11; i++) {
channel.maybeExecute(endpoint, request);
assertThat(channel.getScores())
assertThat(channel.getScoresForTesting().map(c -> c.getScore()))
.describedAs("%s %s: Scores not affected yet %s", i, Duration.ofNanos(clock.read()), channel)
.containsExactly(0, 0);
incrementClockBy(Duration.ofMillis(50));
}
channel.maybeExecute(endpoint, request);
assertThat(channel.getScores())
assertThat(channel.getScoresForTesting().map(c -> c.getScore()))
.describedAs("%s: Constant 4xxs did move the needle %s", Duration.ofNanos(clock.read()), channel)
.containsExactly(1, 0);

incrementClockBy(Duration.ofSeconds(5));

assertThat(channel.getScores())
assertThat(channel.getScoresForTesting().map(c -> c.getScore()))
Reviewer comment (Contributor): peanut gallery: I want an error-prone rule that can refactor this automatically

Suggested change:
assertThat(channel.getScoresForTesting().map(c -> c.getScore()))
assertThat(channel.getScoresForTesting().map(BalancedNodeSelectionStrategyChannel::getScore))
.describedAs(
"%s: We quickly forget about 4xxs and go back to fair shuffling %s",
Duration.ofNanos(clock.read()), channel)
.containsExactly(0, 0);
}

@Test
void rtt_is_measured_and_can_influence_choices() {
incrementClockBy(Duration.ofHours(1));

// when(chan1.maybeExecute(eq(endpoint), any())).thenReturn(http(200));
when(chan2.maybeExecute(eq(endpoint), any())).thenReturn(http(200));

SettableFuture<Response> chan1OptionsResponse = SettableFuture.create();
SettableFuture<Response> chan2OptionsResponse = SettableFuture.create();
BalancedNodeSelectionStrategyChannel.RttEndpoint rttEndpoint =
BalancedNodeSelectionStrategyChannel.RttEndpoint.INSTANCE;
when(chan1.maybeExecute(eq(rttEndpoint), any())).thenReturn(Optional.of(chan1OptionsResponse));
when(chan2.maybeExecute(eq(rttEndpoint), any())).thenReturn(Optional.of(chan2OptionsResponse));

channel.maybeExecute(endpoint, request);

incrementClockBy(Duration.ofNanos(123));
chan1OptionsResponse.set(new TestResponse().code(200));

incrementClockBy(Duration.ofNanos(456));
chan2OptionsResponse.set(new TestResponse().code(200));

assertThat(channel.getScoresForTesting().map(c -> c.getScore()))
.describedAs("The poor latency of channel2 imposes a small constant penalty in the score")
.containsExactly(0, 3);

for (int i = 0; i < 500; i++) {
incrementClockBy(Duration.ofMillis(10));
channel.maybeExecute(endpoint, request);
}
// rate limiter ensures a sensible amount of rtt sampling
verify(chan1, times(6)).maybeExecute(eq(rttEndpoint), any());
verify(chan2, times(6)).maybeExecute(eq(rttEndpoint), any());
}
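The assertion above (only 6 RTT probes across 500 requests) implies the sampling is gated by a rate limiter, per the commit "test for rate limiter". A minimal sketch of such a gate, assuming a simple interval on a nanosecond clock (class and method names are hypothetical, not the PR's actual implementation):

```java
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical sketch: allow at most one RTT sampling pass per interval. */
final class RttSamplingRateLimiter {
    private final long intervalNanos;
    private final AtomicLong nextAllowedNanos = new AtomicLong(0);

    RttSamplingRateLimiter(long intervalNanos) {
        this.intervalNanos = intervalNanos;
    }

    /** Returns true if the caller may sample now; succeeds at most once per interval. */
    boolean tryAcquire(long nowNanos) {
        long next = nextAllowedNanos.get();
        // Compare-and-set ensures concurrent callers can't both win the same slot,
        // matching the commit "Don't allow multiple samples to run at the same time".
        return nowNanos >= next && nextAllowedNanos.compareAndSet(next, nowNanos + intervalNanos);
    }
}
```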

@Test
void when_rtt_measurements_are_limited_dont_freak_out() {
incrementClockBy(Duration.ofHours(1));

// when(chan1.maybeExecute(eq(endpoint), any())).thenReturn(http(200));
when(chan2.maybeExecute(eq(endpoint), any())).thenReturn(http(200));

BalancedNodeSelectionStrategyChannel.RttEndpoint rttEndpoint =
BalancedNodeSelectionStrategyChannel.RttEndpoint.INSTANCE;
when(chan1.maybeExecute(eq(rttEndpoint), any())).thenReturn(Optional.empty());
when(chan2.maybeExecute(eq(rttEndpoint), any())).thenReturn(Optional.empty());

channel.maybeExecute(endpoint, request);

assertThat(channel.getScoresForTesting().map(c -> c.getScore())).containsExactly(0, 0);
}

@Test
void when_rtt_measurements_havent_returned_yet_dont_freak_out() {
incrementClockBy(Duration.ofHours(1));
// when(chan1.maybeExecute(eq(endpoint), any())).thenReturn(http(200));
when(chan2.maybeExecute(eq(endpoint), any())).thenReturn(http(200));

BalancedNodeSelectionStrategyChannel.RttEndpoint rttEndpoint =
BalancedNodeSelectionStrategyChannel.RttEndpoint.INSTANCE;
when(chan1.maybeExecute(eq(rttEndpoint), any())).thenReturn(Optional.of(SettableFuture.create()));
when(chan2.maybeExecute(eq(rttEndpoint), any())).thenReturn(Optional.of(SettableFuture.create()));

for (int i = 0; i < 20; i++) {
incrementClockBy(Duration.ofSeconds(5));
channel.maybeExecute(endpoint, request);
}

assertThat(channel.getScoresForTesting().map(c -> c.getScore())).containsExactly(0, 0);
verify(chan1, times(1)).maybeExecute(eq(rttEndpoint), any());
verify(chan2, times(1)).maybeExecute(eq(rttEndpoint), any());
}

@Test
void rtt_returns_the_min_of_the_last_5_measurements() {
BalancedNodeSelectionStrategyChannel.RttMeasurement rtt =
new BalancedNodeSelectionStrategyChannel.RttMeasurement();
rtt.addMeasurement(3);
assertThat(rtt.getNanos()).describedAs("%s", rtt).isEqualTo(3);
rtt.addMeasurement(1);
rtt.addMeasurement(2);
assertThat(rtt.getNanos()).describedAs("%s", rtt).isEqualTo(1);

rtt.addMeasurement(500);
assertThat(rtt.getNanos()).describedAs("%s", rtt).isEqualTo(1);
rtt.addMeasurement(500);
rtt.addMeasurement(500);
rtt.addMeasurement(500);
assertThat(rtt.getNanos()).describedAs("%s", rtt).isEqualTo(2);
rtt.addMeasurement(500);
assertThat(rtt.getNanos()).describedAs("%s", rtt).isEqualTo(500);
}
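The windowed-minimum behaviour this test exercises can be sketched with a fixed-size ring buffer. This is a sketch assuming the real RttMeasurement works along these lines (it uses OptionalLong rather than a sentinel, per the commit history); the actual class may differ:

```java
import java.util.OptionalLong;

/** Sketch (hypothetical implementation): report the minimum of the last 5 RTT samples. */
final class RttWindowSketch {
    private static final int NUM_SAMPLES = 5;
    private final long[] samples = new long[NUM_SAMPLES];
    private int count = 0;

    void addMeasurement(long nanos) {
        samples[count % NUM_SAMPLES] = nanos; // overwrite the oldest slot
        count++;
    }

    OptionalLong getNanos() {
        if (count == 0) {
            return OptionalLong.empty(); // no samples yet
        }
        long min = Long.MAX_VALUE;
        for (int i = 0; i < Math.min(count, NUM_SAMPLES); i++) {
            min = Math.min(min, samples[i]);
        }
        return OptionalLong.of(min);
    }
}
```

Taking the minimum rather than the mean makes the estimate robust to one-off slow probes: a single 500ns outlier only shifts the result once the faster samples age out of the window, exactly as the test asserts.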

private static void set200(LimitedChannel chan) {
when(chan.maybeExecute(any(), any())).thenReturn(http(200));
}
@@ -93,6 +93,11 @@ public ListenableFuture<Response> execute(Endpoint endpoint, Request request) {
case PATCH:
okRequest = okRequest.patch(toOkHttpBody(request.body()));
break;
case OPTIONS:
okRequest = okRequest.method(
"OPTIONS",
request.body().isPresent() ? toOkHttpBody(request.body().get()) : null);
break;
}

// Fill headers
@@ -24,4 +24,5 @@ public enum HttpMethod {
PATCH,
POST,
PUT,
OPTIONS
}
@@ -25,6 +25,7 @@
import com.google.common.util.concurrent.MoreExecutors;
import com.palantir.dialogue.Channel;
import com.palantir.dialogue.Endpoint;
import com.palantir.dialogue.HttpMethod;
import com.palantir.dialogue.Request;
import com.palantir.dialogue.Response;
import com.palantir.dialogue.TestResponse;
@@ -74,6 +75,10 @@ public static Builder builder() {

@Override
public ListenableFuture<Response> execute(Endpoint endpoint, Request request) {
if (endpoint.httpMethod() == HttpMethod.OPTIONS) {
return Futures.immediateFuture(new TestResponse().code(204));
}

Meter perEndpointRequests = MetricNames.requestMeter(simulation.taggedMetrics(), serverName, endpoint);

activeRequests.inc();
12 changes: 6 additions & 6 deletions simulation/src/test/resources/report.md
@@ -2,10 +2,10 @@
<!-- Run SimulationTest to regenerate this report. -->
```
all_nodes_500[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=79.3% client_mean=PT5.81342S server_cpu=PT20M client_received=2000/2000 server_resps=2000 codes={200=1586, 500=414}
all_nodes_500[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=73.7% client_mean=PT3.039455S server_cpu=PT20M client_received=2000/2000 server_resps=2000 codes={200=1474, 500=526}
all_nodes_500[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=73.6% client_mean=PT2.91032S server_cpu=PT20M client_received=2000/2000 server_resps=2000 codes={200=1472, 500=528}
all_nodes_500[UNLIMITED_ROUND_ROBIN].txt: success=50.0% client_mean=PT0.6S server_cpu=PT20M client_received=2000/2000 server_resps=2000 codes={200=1000, 500=1000}
black_hole[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=59.2% client_mean=PT0.600655114S server_cpu=PT11M49.8S client_received=1183/2000 server_resps=1183 codes={200=1183}
black_hole[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=91.5% client_mean=PT0.6S server_cpu=PT18M17.4S client_received=1829/2000 server_resps=1829 codes={200=1829}
black_hole[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=91.4% client_mean=PT0.6S server_cpu=PT18M16.8S client_received=1828/2000 server_resps=1828 codes={200=1828}
black_hole[UNLIMITED_ROUND_ROBIN].txt: success=91.4% client_mean=PT0.6S server_cpu=PT18M16.8S client_received=1828/2000 server_resps=1828 codes={200=1828}
drastic_slowdown[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=100.0% client_mean=PT2.947028083S server_cpu=PT41M8.862333314S client_received=4000/4000 server_resps=4000 codes={200=4000}
drastic_slowdown[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=100.0% client_mean=PT0.251969999S server_cpu=PT16M47.879999984S client_received=4000/4000 server_resps=4000 codes={200=4000}
@@ -17,16 +17,16 @@
fast_503s_then_revert[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=100.0% client_mean=PT0.120107163S server_cpu=PT1H30M0.0000004S client_received=45000/45000 server_resps=45040 codes={200=45000}
fast_503s_then_revert[UNLIMITED_ROUND_ROBIN].txt: success=100.0% client_mean=PT0.120107163S server_cpu=PT1H30M0.0000004S client_received=45000/45000 server_resps=45040 codes={200=45000}
live_reloading[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=94.3% client_mean=PT6.987708S server_cpu=PT1H56M8.59S client_received=2500/2500 server_resps=2500 codes={200=2357, 500=143}
live_reloading[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=84.0% client_mean=PT4.6409416S server_cpu=PT1H53M13.43S client_received=2500/2500 server_resps=2500 codes={200=2101, 500=399}
live_reloading[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=84.0% client_mean=PT4.5915112S server_cpu=PT1H52M50.27S client_received=2500/2500 server_resps=2500 codes={200=2101, 500=399}
live_reloading[UNLIMITED_ROUND_ROBIN].txt: success=86.9% client_mean=PT2.802124S server_cpu=PT1H56M45.31S client_received=2500/2500 server_resps=2500 codes={200=2173, 500=327}
one_big_spike[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=100.0% client_mean=PT2.569766928S server_cpu=PT2M30S client_received=1000/1000 server_resps=1000 codes={200=1000}
one_big_spike[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=100.0% client_mean=PT1.521339525S server_cpu=PT2M30S client_received=1000/1000 server_resps=1000 codes={200=1000}
one_big_spike[UNLIMITED_ROUND_ROBIN].txt: success=99.9% client_mean=PT1.000523583S server_cpu=PT7M37.35S client_received=1000/1000 server_resps=3049 codes={200=999, 429=1}
one_endpoint_dies_on_each_server[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=64.9% client_mean=PT8.1950896S server_cpu=PT25M client_received=2500/2500 server_resps=2500 codes={200=1623, 500=877}
one_endpoint_dies_on_each_server[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=65.4% client_mean=PT4.0117872S server_cpu=PT25M client_received=2500/2500 server_resps=2500 codes={200=1636, 500=864}
one_endpoint_dies_on_each_server[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=64.2% client_mean=PT3.9523312S server_cpu=PT25M client_received=2500/2500 server_resps=2500 codes={200=1605, 500=895}
one_endpoint_dies_on_each_server[UNLIMITED_ROUND_ROBIN].txt: success=65.1% client_mean=PT0.6S server_cpu=PT25M client_received=2500/2500 server_resps=2500 codes={200=1628, 500=872}
server_side_rate_limits[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=100.0% client_mean=PT12M44.762591546S server_cpu=PT9H55M40.4S client_received=150000/150000 server_resps=178702 codes={200=149989, 429=11}
server_side_rate_limits[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=100.0% client_mean=PT0.290099498S server_cpu=PT9H9M39.2S client_received=150000/150000 server_resps=164896 codes={200=150000}
server_side_rate_limits[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=100.0% client_mean=PT0.262702651S server_cpu=PT9H5M50.6S client_received=150000/150000 server_resps=163753 codes={200=150000}
server_side_rate_limits[UNLIMITED_ROUND_ROBIN].txt: success=100.0% client_mean=PT0.217416858S server_cpu=PT8H45M29.2S client_received=150000/150000 server_resps=157646 codes={200=150000}
simplest_possible_case[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=100.0% client_mean=PT0.834469696S server_cpu=PT3H3M35S client_received=13200/13200 server_resps=13200 codes={200=13200}
simplest_possible_case[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=100.0% client_mean=PT0.785757575S server_cpu=PT2H52M52S client_received=13200/13200 server_resps=13200 codes={200=13200}
@@ -35,7 +35,7 @@ one_endpoint_dies_on_each_server[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: succe
slow_503s_then_revert[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=100.0% client_mean=PT0.088828705S server_cpu=PT4M22.364666657S client_received=3000/3000 server_resps=3031 codes={200=3000}
slow_503s_then_revert[UNLIMITED_ROUND_ROBIN].txt: success=100.0% client_mean=PT0.088828705S server_cpu=PT4M22.364666657S client_received=3000/3000 server_resps=3031 codes={200=3000}
slowdown_and_error_thresholds[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=100.0% client_mean=PT2M28.388896928S server_cpu=PT8H48M18.546665835S client_received=10000/10000 server_resps=10899 codes={200=10000}
slowdown_and_error_thresholds[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=100.0% client_mean=PT3M24.308654421S server_cpu=PT14H3M44.353333281S client_received=10000/10000 server_resps=13383 codes={200=9999, 500=1}
slowdown_and_error_thresholds[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=100.0% client_mean=PT3M18.314817484S server_cpu=PT13H50M38.666666634S client_received=10000/10000 server_resps=13148 codes={200=9995, 500=5}
slowdown_and_error_thresholds[UNLIMITED_ROUND_ROBIN].txt: success=3.6% client_mean=PT21.551691121S server_cpu=PT54H45M57.899999949S client_received=10000/10000 server_resps=49335 codes={200=360, 500=9640}
uncommon_flakes[CONCURRENCY_LIMITER_PIN_UNTIL_ERROR].txt: success=99.0% client_mean=PT0.000001S server_cpu=PT0.01S client_received=10000/10000 server_resps=10000 codes={200=9900, 500=100}
uncommon_flakes[CONCURRENCY_LIMITER_ROUND_ROBIN].txt: success=99.0% client_mean=PT0.000001S server_cpu=PT0.01S client_received=10000/10000 server_resps=10000 codes={200=9900, 500=100}
@@ -1 +1 @@
success=73.7% client_mean=PT3.039455S server_cpu=PT20M client_received=2000/2000 server_resps=2000 codes={200=1474, 500=526}
success=73.6% client_mean=PT2.91032S server_cpu=PT20M client_received=2000/2000 server_resps=2000 codes={200=1472, 500=528}