Virtual machines (VMs) such as the Java Virtual Machine (JVM) are used as the execution environment of choice for many modern programming languages.
VMs interpret a suitable intermediate language (e.g. Java bytecode for the JVM) and provide the runtime system for application programs. VMs usually include a garbage collector, a thread scheduler, and interfaces to the host operating system.
As interpretation of intermediate code is time-consuming, most VMs contain a \textit{Just-in-time} (JIT) compiler that translates frequently executed functions or methods to \textit{native} machine code.
\\\\
The JIT compiler executes in parallel with the interpreted program and, as a result, compilation speed is a critical issue in the design of a JIT compiler.
Unfortunately, it is difficult to design a compiler that produces good or excellent code while limiting its resource demands. The compiler requires storage and CPU cycles, and even on a multi-core processor, compilation may slow down the execution of the application program.
\\\\
Consequently, VMs usually adopt a multi-tier compilation system.
%The first tier usually is the interpretation of a method. If this method is executed frequently, the Tier-1 compiler translates this method into native code.
%The Tier-1 compiler implements only a small set of the know optimization techniques and as result, it had good compilation speed but the generated code is far from the output of an optimizing compiler. Such a compiler is usually the Tier-2 compiler, which takes a longer amount of time and produces optimized native code. To determine which methods should be compiled by the Tier-1 (or Tier-2) compiler, the VM profiles the execution of all application programs to identify “hot” methods.
At program startup, all methods are interpreted by the virtual machine (execution at Tier 0). The interpreter gathers execution statistics called \textit{profiles}, and if a method is determined to be executed frequently, it is compiled by the Tier 1 compiler. Methods compiled at Tier 1 are profiled further, and based on this profiling information, some methods are eventually compiled at higher tiers.
One drawback of this setup is that, for all programs, all methods start at Tier 0, with interpretation and profiling by the VM. However, for many programs the set of the most frequently used methods does not change from one execution to the next, so there is no reason to gather the same profiling information again.
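To illustrate the mechanism, the following simplified sketch shows how an invocation counter could drive tier promotion; the class, field names, and thresholds are purely hypothetical and do not reflect HotSpot internals.
\begin{verbatim}
// Illustrative sketch of counter-based tier promotion (hypothetical
// names and thresholds, not the actual HotSpot implementation).
class MethodProfile {
    static final int TIER1_THRESHOLD = 2000;   // assumed value
    static final int TIER2_THRESHOLD = 15000;  // assumed value

    int invocationCount;
    int currentTier;  // 0 = interpreted, higher tiers = compiled

    void onInvocation() {
        invocationCount++;
        if (currentTier == 0 && invocationCount >= TIER1_THRESHOLD) {
            currentTier = 1;  // hand the method to the Tier 1 compiler
        } else if (currentTier == 1 && invocationCount >= TIER2_THRESHOLD) {
            currentTier = 2;  // recompile with the optimizing compiler
        }
    }
}
\end{verbatim}
A real VM bases this decision on more than a single counter (loop back-edge counts, for example, also contribute), but the sketch conveys the general idea of profile-driven tier promotion.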
\\\\
The main idea of this thesis is to cache profiles from a prior execution and reuse them in subsequent runs of the same program.
Having these \textit{cached profiles} available spares the VM from gathering the same profiling information again. It also allows the compiler to use more sophisticated profiles early in program execution and avoids recompilations that would otherwise be triggered once more information about a method becomes available. While this in general should not significantly influence the peak performance of the program, the hope is to decrease the time the JVM needs to reach it, the so-called \textit{warmup}.
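To make this idea concrete, the following minimal sketch shows one way such a cache could be organized; the \texttt{ProfileCache} class and its methods are hypothetical and merely illustrate persisting and reloading profiling data, not the mechanism actually used inside HotSpot.
\begin{verbatim}
// Minimal sketch of a persisted profile cache (hypothetical API,
// not the actual HotSpot implementation).
import java.io.*;
import java.util.HashMap;
import java.util.Map;

class ProfileCache {
    // method signature -> invocation count observed in a previous run
    private final Map<String, Integer> profiles = new HashMap<>();

    void record(String methodSignature, int invocations) {
        profiles.put(methodSignature, invocations);
    }

    // Write the collected profiles to disk, e.g. at VM shutdown.
    void save(File file) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new HashMap<>(profiles));
        }
    }

    // Reload the profiles of a previous run, e.g. at VM startup.
    @SuppressWarnings("unchecked")
    void load(File file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(file))) {
            profiles.putAll((Map<String, Integer>) in.readObject());
        }
    }

    // A cached entry lets the VM compile a known-hot method right away
    // instead of re-profiling it from scratch.
    boolean isKnownHot(String methodSignature, int threshold) {
        return profiles.getOrDefault(methodSignature, 0) >= threshold;
    }
}
\end{verbatim}
A full implementation would have to persist richer profiling data than invocation counts (branch and receiver-type profiles, for instance), but the save/load cycle above is the essence of reusing profiles across runs.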
\\\\
This thesis proposes a design and an implementation of a profile-caching feature for \textit{HotSpot}, an open-source Java virtual machine maintained and distributed by Oracle Corporation, as well as a thorough performance analysis using state-of-the-art benchmarks.