- Saturation experiments: varying the client request rate.
- Security experiments: attacks are simulated via network partition.
- DoNothing experiments: transactions return right away.

The experiment results are in `/data/dinhtta/blockbench_exps_results/hyperledger` on the head node.
There are a number of global variables that are to be set in `env.sh` (a sample sketch follows the list):

- `$HL_HOME`: NFS-mounted directory containing the scripts.
- `$HL_DATA`: non-NFS directory for Hyperledger data.
- `$HL_SOURCE`: non-NFS directory containing the Hyperledger installation.
- `$HOSTS`: contains IP addresses of all nodes used for running validating peers. The example file contains 32 hosts.
- `$CLIENTS`: contains IP addresses of all nodes used by the clients. There are 2 clients per node. The example file contains 16 hosts.
- `$LOG_DIR`: directory where the logs are stored. NFS-mounted directories are recommended.
- `$EXE_HOME`: contains the executable for driving the workloads (ycsb or smallbank).
    - YCSB: `$ETH_HOME/../src/ycsb`
    - Smallbank: `$ETH_HOME/../src/smallbank/ycsb`
- `$BENCHMARK`: name of the benchmark (ycsb or smallbank).
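A minimal `env.sh` might look like the sketch below. All paths are placeholders and should be adapted to your cluster; only the variable names come from the list above.

```bash
# Sketch of env.sh -- every path here is a placeholder, not the actual setup.
export HL_HOME=/nfs/blockbench/hyperledger   # NFS-mounted scripts directory
export HL_DATA=/data/hyperledger             # non-NFS Hyperledger data directory
export HL_SOURCE=/opt/hyperledger            # non-NFS Hyperledger installation
export HOSTS=$HL_HOME/hosts                  # one validating-peer IP per line (example file: 32 hosts)
export CLIENTS=$HL_HOME/clients              # one client-node IP per line (2 clients per node; example file: 16 hosts)
export LOG_DIR=/nfs/results/hyperledger      # NFS-mounted log directory
export EXE_HOME=/nfs/blockbench/src/ycsb     # directory of the workload driver (ycsb here)
export BENCHMARK=ycsb                        # or smallbank
```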
Since PBFT is used for consensus, each network size can tolerate only up to a certain number of
failures. This configuration is set in `$HL_SOURCE/consensus/pbft/config.yaml`. There is a set of
pre-defined settings for each network size:

- `$HL_HOME/hl_consensus_<nservers>.yaml`: contains the genesis block for experiments using
  `<nservers>` peers. The relevant settings are:
    - Number of peers `N`.
    - Number of maximum failures `f` (must be no bigger than `N/3`).
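As a quick check of the `f <= N/3` constraint, the snippet below (a sketch, assuming `env.sh` has been sourced and the pre-defined files follow the naming above) picks the config file for a given network size and prints the maximum number of failures it can tolerate.

```bash
# Sketch: select the pre-defined consensus config for an <nservers>-peer run
# and compute the largest f allowed by the N/3 bound stated above.
source env.sh
nservers=16
config=$HL_HOME/hl_consensus_${nservers}.yaml
max_f=$(( nservers / 3 ))   # e.g. N=16 -> f can be at most 5
echo "Using $config: N=$nservers peers, tolerating up to f=$max_f failures"
```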
There are 3 steps in running an experiment: peer startup, client startup, and finally
cleaning up. Scripts for these steps are included in `$HL_HOME`. The top-level script

    run-bench.sh <nservers> <nthreads> <nclients> <txrate> [-drop]

will start `<nservers>` peers and `<nclients>` clients, each client running with `<nthreads>` threads and
each thread issuing `<txrate>` transactions per second. The outputs are stored in `$LOG_DIR`. If `-drop`
is specified, 4 servers will be killed after the clients have been running for about 250s.
- `start-all.sh <nservers>`: starts a network of `<nservers>` peers in the following steps (see the sketch after this list):
    - Copy the correct consensus configuration to each of the `<nservers>` peers.
    - Start the bootstrapping peer at the first host in `$HOSTS`, by running `start-root.sh`.
    - Start the remaining peers by running `start-slave <bootstrapping_host> <peer_index>`.
    - Sleep for a few seconds so that the peers can connect to each other (which is very fast).
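The steps above roughly correspond to the sketch below. This is not the actual script: it assumes passwordless ssh to every host, that `start-root.sh` and `start-slave` daemonize the peer and return, and that the pre-defined consensus file is copied over the peers' `config.yaml`.

```bash
#!/bin/bash
# Simplified sketch of the start-all.sh steps listed above (not the real script).
source env.sh
nservers=$1
hosts=($(head -n "$nservers" "$HOSTS"))

# 1. Copy the matching consensus configuration to every peer (assumed destination).
for h in "${hosts[@]}"; do
  scp "$HL_HOME/hl_consensus_${nservers}.yaml" \
      "$h:$HL_SOURCE/consensus/pbft/config.yaml"
done

# 2. Start the bootstrapping peer on the first host in $HOSTS.
ssh "${hosts[0]}" "cd $HL_HOME && ./start-root.sh"

# 3. Start the remaining peers, pointing them at the bootstrapping host.
for ((i = 1; i < nservers; i++)); do
  ssh "${hosts[$i]}" "cd $HL_HOME && ./start-slave ${hosts[0]} $i"
done

# 4. Give the peers a few seconds to connect to each other.
sleep 5
```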
- `start-multi-clients.sh <nclients> <nservers> <nthreads> [-drop]`: takes as input a number of client nodes and launches `2*<nclients>` clients connecting to `2*<nclients>` servers. It then performs the following (see the sketch after this list):
    - Go to each client node in `$CLIENTS` and invoke `start-client.sh`:
        - `start-client.sh <nthreads> <client_index> <nservers>`: starts a `driver` process at the `<client_index>`-th client node, connecting to the server nodes of index `2*<client_index>` and `2*<client_index>+1`. **Different benchmarks may expect different command line arguments for the `driver` process.**
    - Let the clients run for `M` seconds, a sufficiently long duration to collect enough data, then kill all client processes. In particular, `M = 320 + 10*<nservers>`.
    - When `-drop` is specified:
        - It starts the clients and sleeps for 250 seconds.
        - It then kills off the last 4 servers.
        - It then continues to run (the remaining clients and servers) for another `M - 250` seconds.
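A simplified sketch of this behaviour is below. It is not the actual script: the process names (`driver` for clients, `peer` for servers) and the use of one `start-client.sh` invocation per client node are assumptions; the real script starts two driver processes per node.

```bash
#!/bin/bash
# Simplified sketch of the start-multi-clients.sh behaviour described above.
source env.sh
nclients=$1; nservers=$2; nthreads=$3; drop=$4
client_nodes=($(head -n "$nclients" "$CLIENTS"))

# Invoke start-client.sh on each client node; client i talks to servers 2i and 2i+1.
idx=0
for c in "${client_nodes[@]}"; do
  ssh "$c" "cd $HL_HOME && ./start-client.sh $nthreads $idx $nservers" &
  idx=$((idx + 1))
done

# Let the clients run long enough to collect data: M = 320 + 10 * nservers.
M=$((320 + 10 * nservers))
if [[ "$drop" == "-drop" ]]; then
  sleep 250                                        # clients run for 250s
  for h in $(head -n "$nservers" "$HOSTS" | tail -n 4); do
    ssh "$h" 'pkill -f peer'                       # kill off the last 4 servers (process name assumed)
  done
  sleep $((M - 250))                               # the rest keep running for M-250 seconds
else
  sleep "$M"
fi

# Finally kill all client (driver) processes.
for c in "${client_nodes[@]}"; do
  ssh "$c" 'pkill -f driver'                       # driver process name assumed
done
```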
- `stop-all.sh`: kills all server and client (driver) processes on all hosts listed in `$HOSTS` and `$CLIENTS`.

When the experiment exits cleanly (the normal case), the client processes are already killed in
`start-multi-clients.sh` and the server processes in `run-bench.sh`. But if the experiment is interrupted
(Ctrl-C), both server and client processes should be killed explicitly with `stop-all.sh`; a sketch of
such a cleanup is shown below.
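A minimal cleanup in the spirit of `stop-all.sh` might look like this; the process names (`peer` for servers, `driver` for clients) are assumptions.

```bash
#!/bin/bash
# Sketch of an explicit cleanup over all hosts (process names are assumptions).
source env.sh
for h in $(cat "$HOSTS"); do
  ssh "$h" 'pkill -f peer' &        # kill the validating peer on each server host
done
for c in $(cat "$CLIENTS"); do
  ssh "$c" 'pkill -f driver' &      # kill the workload drivers on each client host
done
wait
```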
- Running with the same number of clients and servers:
    - Change `LOG_DIR` in `env.sh` to the correct location, say `result_same_s_same_c`.
    - Start it (e.g. X=8): `. run-bench.sh 8 32 8 10`

      This will start 8 peers (on the first 8 nodes in `$HOSTS`) and 8 clients (on the first 4 nodes in
      `$CLIENTS`). Each client runs a `driver` process with 32 threads, each thread issuing 10 transactions
      per second. The clients output logs to the `result_same_s_same_c/exp_8_servers` directory, with the
      file format `client_<peer_host>_16`.
- Running with a fixed number of clients and a varying number of servers:
    - Change `LOG_DIR` in `env.sh` to the correct location, say `result_fixed_c`.
    - Start it (X=16): `. run-bench.sh 16 32 8 10`

      This will start 16 peers (on the first 16 nodes in `$HOSTS`) and 8 clients (on the first 4 nodes in
      `$CLIENTS`). Each client runs a `driver` process with 32 threads. The clients output logs to the
      `result_fixed_c/exp_16_servers` directory, with the file format `client_<peer_host>_16`.
- Dropping off nodes:
    - Change `LOG_DIR` in `env.sh` to the correct location, say `result_fixed_c_drop_4`.
    - Start it (X=16): `. run-bench.sh 16 32 8 10 -drop`

      This will start 16 peers (on the first 16 nodes in `$HOSTS`) and 8 clients (on the first 4 nodes in
      `$CLIENTS`). Each client runs a `driver` process with 32 threads. At the 250th second, 4 peers are
      killed off, while the rest continue to run. The clients output logs to the
      `result_fixed_c_drop_4/exp_16_servers` directory, with the file format `client_<peer_host>_16`.
- Running with different workloads:
    - Simply change the `$EXE_HOME` and `$BENCHMARK` variables in `env.sh`, then repeat the steps as in the
      other examples; for instance, the edit sketched below switches from YCSB to Smallbank.
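A sketch of that edit (the driver path follows the `$EXE_HOME` values listed in the variables section):

```bash
# In env.sh: switch the workload from ycsb to smallbank
export BENCHMARK=smallbank
export EXE_HOME=$ETH_HOME/../src/smallbank/ycsb   # Smallbank driver path listed above
```

After that, rerun an experiment exactly as in the examples above, e.g. `. run-bench.sh 8 32 8 10`.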