The Mission Control 3.0 Latency Analyzer (migrated from the old BEA blog)

The JRockit JVM incorporates technology (the JRockit Deterministic Garbage Collector) that can guarantee an upper bound on worst-case garbage collection pause times, which greatly helps in reducing unexpected latencies. When writing Java applications, however, it is still possible to introduce latencies through non-optimal use of Java synchronization, I/O and so on. These latencies can cause poor performance and poor response times in your application and can be quite hard to find. The latency analyzer provides a simple way to analyze where such latencies originate.
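To make this concrete, here is a minimal, hypothetical sketch (the class and method names are made up for illustration) of how a seemingly harmless design can introduce latency: every worker thread serializes on a single shared monitor, so each call can stall waiting for the lock, and it is exactly this kind of stall that shows up as a Java Synchronization latency event in a recording.

    // A minimal, hypothetical sketch (made-up class names) of how lock contention
    // introduces latency: every worker thread serializes on one shared monitor.
    public class ContendedCounter {
        private final Object lock = new Object();
        private long count;

        // Each call may block while another thread holds the lock; such stalls
        // are what the latency analyzer reports as Java Synchronization events.
        public void increment() {
            synchronized (lock) {
                count++;
                simulateWork(); // work done while holding the lock
            }
        }

        private void simulateWork() {
            try {
                Thread.sleep(1); // stand-in for I/O or computation
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public static void main(String[] args) {
            final ContendedCounter counter = new ContendedCounter();
            for (int i = 0; i < 8; i++) {
                new Thread(new Runnable() {
                    public void run() {
                        for (int j = 0; j < 1000; j++) {
                            counter.increment();
                        }
                    }
                }).start();
            }
        }
    }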

The JRockit Latency Analyzer is built into JRA (the JRockit Runtime Analyzer) in Mission Control 3.0. When starting a JRA recording, it is now possible to select a Latency Recording Profile.

[Screenshot: start_recording]

When such a profile is selected, or when Enable Latency Recording is selected from the advanced options, the resulting recording will contain latency information. When a recording contains such information, three additional tabs (Latency Log, Latency Graph, Latency Traces), two views (Event Types, Properties) and a new Latency Perspective are at your disposal.

The Latency Log tab contains a table that simply lists all the latency events and lets you filter them.

The Latency Graph tab contains a graph showing, for each thread, which latency events occurred over the duration of the recording.

The Latency Traces tab contains a tree table showing the stack traces involved in causing these latencies, and how much latency is attributable to the traces leading to a certain node.

In the example below we have found the response times of our latency-sensitive application somewhat disappointing. The CPU isn't saturated, so we've decided to make a JRA recording with latency information in it. We're now viewing the recording in the latency graph (below). At the top of each view is the range selection tool, where you can zoom and pan to filter the events over time. I've zoomed in on an arbitrary part of the recording to show how the range selector looks when zoomed in. This particular recording shows the same problem throughout, so panning to another part would yield a very similar picture.

[Screenshot: latency_graph]

From the graph we can see that we seem to do quite a lot of Java synchronization. In the log view (screenshot below) we can look at each of the individual events. In the screenshot I've filtered on the Java Synchronization events and added the first bunch of them to the Operative Set. They are the ones highlighted in the range selector, and a good portion of the events there are indeed Java Synchronization events. Another way to select only the Java Synchronization events would be to deselect the other event types in the Event Types view.

[Screenshot: operative_set]

If we switch to the trace view, we can see that almost all the latency is caused by calls to a java.util.logging Logger.

[Screenshot: casus_latencium]
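To illustrate the kind of pattern the trace view points to, here is a hedged sketch (the class and method names are made up, not taken from the application in the recording): a hot path that logs through a shared java.util.logging Logger on every call. The handlers that actually write the records synchronize internally, so when many threads log concurrently they queue up on those locks, which is what surfaces as Java Synchronization latency.

    import java.util.logging.Logger;

    // Hypothetical hot path (made-up names) that logs on every request through a
    // shared Logger. The handlers writing the records synchronize internally, so
    // concurrent callers can end up waiting on each other.
    public class RequestProcessor {
        private static final Logger LOG =
                Logger.getLogger(RequestProcessor.class.getName());

        public String process(String request) {
            LOG.info("Processing request: " + request); // executed for every request
            String response = request.toUpperCase();    // stand-in for the real work
            LOG.info("Produced response: " + response);
            return response;
        }
    }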

Now, if we remove those logging calls, almost no Java Synchronization events remain, and thus the latency caused by those events has been eliminated. See below for a picture of a recording with the logging calls removed.

[Screenshot: logging_removed]
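The recording above was made with the logging calls removed outright. If the messages are still occasionally useful, one possible alternative (again with made-up class names, and assuming the records do not need to be written in normal operation) is to demote the hot-path calls to a finer level and guard them with isLoggable, so that with the level turned down no record is created and the synchronized handlers are never reached:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Hypothetical variant (made-up names) where the per-request log calls are
    // demoted to FINE and guarded; with FINE disabled, no record is created and
    // the synchronized handlers are never reached from the hot path.
    public class QuietRequestProcessor {
        private static final Logger LOG =
                Logger.getLogger(QuietRequestProcessor.class.getName());

        public String process(String request) {
            if (LOG.isLoggable(Level.FINE)) {
                LOG.fine("Processing request: " + request);
            }
            String response = request.toUpperCase();    // stand-in for the real work
            if (LOG.isLoggable(Level.FINE)) {
                LOG.fine("Produced response: " + response);
            }
            return response;
        }
    }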

JRockit Mission Control 3.0 was recently made generally available. For more information about the features in Mission Control 3.0, please see the R27.3 release notes and the Mission Control Documentation.

For information about what was introduced with Mission Control 2.0, please see my previous blog entry, Mission Control 2.0 Sneak Peek.
