formanojhr edited this page May 24, 2015 · 5 revisions

**Brief History:** A couple of years ago (almost!), I joined Mashery, an Intel company and a leader in API management (think mobile apps like Starbucks making calls to buy coffee every day). One of the first cool things I got to work on was their cloud-hosted SaaS traffic management layer, which now handles about a billion calls per day. On a layer like this, every HashMap put and get matters, because it affects the JVM's worst-case behavior. I got to investigate some intricate and complex threading- and caching-related stability and performance issues that affected the throughput of traffic coming into the layer. As part of the investigation, I had to do the same sequence of things I do whenever I run into a performance issue:

  1. Drive a load that, as closely as I understand it, simulates the production characteristics at the point where the server layer starts to tip over into unknown territory. To me this is the toughest part, since observation and intuition matter the most in my experience.

  2. Profile the JVM, or look at metrics and logs, to understand what is happening inside the server (in Mashery's case, an HTTP proxy and traffic-routing layer).

For driving the load, I needed corner cases that are not available in most well-known load test tools such as JMeter: for instance, output streaming of POST data, or input streaming for GET HTTP requests. Hence this ad hoc concurrent HTTP tool came to be.

Introducing ZenShiner: https://github.com/formanojhr/zenshiner I hope to attract more source contributions and grow this project for the benefit of the world.

This project's objective is to contain tools that can stress test any HTTP server with concurrent HTTP requests in a non-blocking way, capture the responses, and exercise corner cases of an HTTP server such as request/response streaming.

**Why did I write this tool?**

The HTTP server application I was working on showed many I/O errors that looked like the after-effects of timeouts interrupting the threads on which the request/response cycle was carried out. This happened specifically for target backends, routed to by the HTTP server, that exhibited slow latencies. So I added different options for patterns of requests:

  1. Concurrent HTTP requests: to simulate this, I supported a bunch of concurrent request patterns through the tool's command line.
  2. Request patterns: a number of concurrent requests targeted at a URI; batches of concurrent requests with timed waits between the batches; and concurrent requests over a fixed duration of time.
  3. Slow input/output streaming: next, I extended the tool to support input and output request/response streaming that simulates slowness based on command line parameters.
  4. Command line: all of these options are configurable with command line parameters.
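The slow-streaming idea above can be sketched as follows. This is a minimal illustration, not ZenShiner's actual code: the method name, chunk size, and delay are hypothetical parameters, and an in-memory stream stands in for the HTTP request body stream.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

/**
 * Sketch of slow output streaming: the request body is written in small
 * chunks with a pause between writes, simulating a slow sender. The
 * OutputStream is an in-memory stand-in for the HTTP request stream.
 */
public class SlowStreamSketch {

    public static void streamSlowly(byte[] body, OutputStream out,
                                    int chunkSize, long delayMillis) {
        try {
            for (int off = 0; off < body.length; off += chunkSize) {
                int len = Math.min(chunkSize, body.length - off);
                out.write(body, off, len);   // send one small chunk
                out.flush();
                Thread.sleep(delayMillis);   // throttle to simulate slowness
            }
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] payload = "example POST body".getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        streamSlowly(payload, sink, 4, 10);  // 4-byte chunks, 10 ms apart
        System.out.println(new String(sink.toByteArray(), StandardCharsets.UTF_8));
    }
}
```

The same throttling loop applies to reading a response slowly: read a few bytes, sleep, repeat.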

Caution: this tool is not yet fully non-blocking. The threading model is as below:

  1. Each request is created and called on a separate thread pool.
  2. Each request call is created as a future and executed by the executor pool.
  3. The threads in the pool, as and when they become available, make the HTTP calls through the Apache HttpClient library as synchronous calls.

Since the threads making the calls block until a response comes back (subject to the Apache HttpClient timeout), a slow HTTP server can make the tool less concurrent.
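The threading model above can be sketched like this. It is a simplified illustration, assuming a fixed thread pool and a stubbed `blockingHttpCall` method that stands in for a synchronous Apache HttpClient `execute()`; names and numbers are illustrative, not ZenShiner's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Sketch of the blocking-futures model: each request is wrapped in a
 * Callable, submitted to an executor pool as a Future, and the worker
 * thread blocks until the "response" returns. A slow server would tie
 * up these worker threads, reducing effective concurrency.
 */
public class BlockingPoolSketch {

    // Stand-in for a synchronous HTTP call; the sleep mimics round-trip latency.
    static int blockingHttpCall(int requestId) throws InterruptedException {
        Thread.sleep(5);
        return 200;  // pretend every request returns HTTP 200
    }

    public static List<Integer> fireConcurrent(int nRequests, int poolSize) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 0; i < nRequests; i++) {
                final int id = i;
                futures.add(pool.submit(() -> blockingHttpCall(id)));
            }
            List<Integer> statuses = new ArrayList<>();
            for (Future<Integer> f : futures) {
                statuses.add(f.get());  // blocks until that request completes
            }
            return statuses;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        List<Integer> statuses = fireConcurrent(8, 4);
        System.out.println("completed " + statuses.size() + " requests");
    }
}
```

With a pool of 4 threads and responses that take seconds instead of milliseconds, only 4 requests can be in flight at once, which is exactly the limitation described above.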
