Quite a number of these tests have a high garbage collection component. It's well known that allocation-heavy benchmarks will run faster with larger heap sizes, and different implementations may be tuned to different heap sizes relative to live data. For consistency, it would be good to tune all implementations to have the same heap size for each benchmark -- i.e. for each benchmark, determine the minimum heap size at which the benchmark runs on any implementation, and then run all implementations at, say, 2.5x that heap size.
For Guile you can do this by setting the GC_INITIAL_HEAP_SIZE and GC_MAXIMUM_HEAP_SIZE environment variables. Like, let's say you want to determine the minimum heap size for chudnovsky; then you do GC_INITIAL_HEAP_SIZE=3m GC_MAXIMUM_HEAP_SIZE=3m ./bench guile chudnovsky to try at 3 megabytes, and you vary the 3m until you find the smallest heap size at which the benchmark still runs. You record that size for chudnovsky, then do the same for all the others. For chudnovsky, for example, I find it to be 2700k or so. So let's say we run at 2.5x the minimum heap size; then when running the tests you do GC_INITIAL_HEAP_SIZE=6750k GC_MAXIMUM_HEAP_SIZE=6750k ./bench guile chudnovsky. But, better to set GC_INITIAL_HEAP_SIZE only when running the compiled artifact and not the compiler!
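If it helps, here's a minimal shell sketch of that search loop. It assumes a too-small heap makes `./bench guile <name>` exit with a nonzero status; the `find_min_heap` helper name, the starting size, and the step size are just made up for illustration, not anything in the harness.

```sh
# Rough sketch (not part of the repo): linear step-up search for the minimum
# heap size of one Guile benchmark. Assumes a run that fails for lack of heap
# exits nonzero. Start point (1 MB) and step (256 kB) are arbitrary choices.
find_min_heap() {
  bench="$1"
  kb=1024
  while ! GC_INITIAL_HEAP_SIZE="${kb}k" GC_MAXIMUM_HEAP_SIZE="${kb}k" \
          ./bench guile "$bench" >/dev/null 2>&1; do
    kb=$((kb + 256))
  done
  # Report the minimum found and the 2.5x size to use for the real runs.
  echo "$bench: minimum heap ~${kb}k, run benchmarks at $((kb * 5 / 2))k"
}

find_min_heap chudnovsky   # should land near the ~2700k mentioned above
```

A bisection between a known-failing and a known-working size would converge faster, but for heaps this small a linear scan is plenty.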
Anyway, a thought, just if you were interested :) I will probably do this for Guile at some point for our internal benchmarks.