Benchmarks
This page describes our benchmarking results from our Technical Report (TR) and from our conference submission, and provides all the tools needed to reproduce them.
We keep all raw result files and scripts in a benchmark repository.
To produce dynamic call graphs, use our JVMTI agent. To produce static call graphs, use Soot with Probe, using the main class probe.soot.CallGraph. To compare the two call graphs, use the main class probe.CallGraphDiff.
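These steps can be invoked roughly as follows. This is a minimal sketch: the agent library name, jar names, and benchmark arguments are placeholders, and the exact options accepted by the Probe main classes may differ.

```
# Dynamic call graph: run the benchmark with the JVMTI agent attached
# (the agent library name is a placeholder)
java -agentpath:./libcgagent.so -jar benchmark.jar

# Static call graph: run Soot through Probe's driver
# (jar names are placeholders; check the Probe documentation for options)
java -cp probe.jar:sootclasses.jar probe.soot.CallGraph

# Compare the two resulting call graphs
java -cp probe.jar probe.CallGraphDiff dynamic.gxl static.gxl
```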
We provide the log files of all 10 iterations here. You can reproduce them using the Play-Out Agent (see Downloads), for example as sketched below.
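In this sketch, the agent library name, the benchmark jar, and the benchmark name are placeholders for your setup.

```
# Re-run a DaCapo benchmark with the Play-Out Agent attached
# to regenerate its log files (all names are placeholders)
java -agentpath:./libplayout.so -jar dacapo.jar avrora
```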
The computed intersections of call graphs are available here (in Probe's call-graph format). This is the Java program (an extension to Probe) that we used to create these intersections.
You can see the impact of the input size on the log files by inspecting the different log files here. For the TR: the number of phantom classes can be validated through the soot.log files in this directory.
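To spot-check the phantom-class counts, a pattern search over the logs suffices; the exact wording of Soot's phantom-class messages is an assumption here, so adjust the pattern to what appears in your logs.

```
# Count log lines mentioning phantom classes (pattern is an assumption)
grep -ci "phantom" soot.log
```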
The log files for the interactive programs can be found here.
To measure the performance overhead of the agents, run the DaCapo benchmarks with both agents.
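For example, comparing wall-clock times with and without an agent attached gives a rough overhead estimate; jar and library names are again placeholders.

```
# Baseline run without any agent
time java -jar dacapo.jar avrora

# Same benchmark with an agent attached; compare the elapsed times
time java -agentpath:./libplayout.so -jar dacapo.jar avrora
```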
To measure the runtime of Soot, run Soot on the directory produced by the Play-Out Agent.
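A minimal Soot invocation over such a directory might look like this; the directory name playout-output is a placeholder, while -w, -allow-phantom-refs, -soot-class-path, and -process-dir are standard Soot options.

```
# Run (and time) Soot in whole-program mode over the classes
# dumped by the Play-Out Agent (directory name is a placeholder)
time java -cp sootclasses.jar soot.Main -w -allow-phantom-refs \
     -soot-class-path playout-output -process-dir playout-output
```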
We provide scripts that automate these tasks.
You can find our runtime logs in the same place as the call-graph results (the soot.log files).