Monitoring Web Applications in GlassFish

In two of my previous entries [1, 2], I discussed how to monitor and tune http-service in GlassFish V2. In this entry, I discuss how to monitor the service times of web applications. In one customer engagement a couple of months ago, this feature came in very handy in isolating their performance problem so that I could tell them, "This is not a GlassFish problem". Since then, I have been using this as a first-level debugging tool to determine whether a reported performance problem is caused by the application, the container, or the database.

Performance statistics for individual applications can be obtained by setting the monitoring level of the Web Container to LOW or HIGH:

# asadmin set server.monitoring-service.module-monitoring-levels.web-container=LOW

Note: Unlike the EJB container, there is no difference in the displayed output between the LOW and HIGH values.

The monitoring framework provides a variety of application-level statistics, including the response times of individual Servlets as well as details regarding HTTP sessions. The number and size of the sessions affect the performance of the server and thus are important parameters to monitor. I'll cover sessions in more detail in a later blog. Here, I'll stick to monitoring service times.

The asadmin list -m command can be used to obtain a list of all available Servlets as shown in the example below.
# asadmin list -m "server.applications.TestWebapp\*"

By default, the time statistics for all the JSPs are listed under the JspServlet (identified as jsp) and the delivery of static content under the 'default' Servlet. Combining the service times for all the JSPs under a single node may not be useful for many applications. Unfortunately, this is a limitation of the current implementation; the only available workaround is to redeploy the application with a modified web.xml file that maps the JSP to a servlet with an appropriate URL pattern, as shown in the example web.xml snippet below.
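A minimal snippet along these lines is shown below. The servlet-name ElTesterJsp is an assumption chosen to match the monitoring node name used later in this entry; adjust the name and URL pattern for your own JSP.

```xml
<!-- Hypothetical mapping: expose /elTester.jsp as its own servlet so it
     gets a dedicated monitoring node. The servlet-name "ElTesterJsp" is
     an assumed name; pick one that suits your application. -->
<servlet>
    <servlet-name>ElTesterJsp</servlet-name>
    <jsp-file>/elTester.jsp</jsp-file>
</servlet>
<servlet-mapping>
    <servlet-name>ElTesterJsp</servlet-name>
    <url-pattern>/elTester.jsp</url-pattern>
</servlet-mapping>
```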

Once the application is redeployed, the JSP of interest (/elTester.jsp) can be monitored as a Servlet (server.applications.TestWebapp.server.ElTesterJsp). The following command shows how to get a few interesting request-processing statistics for a Servlet in the web application named 'TestWebapp'.

# asadmin get -m server.applications.TestWebapp.server.ControlServlet.maxtime-count server.applications.TestWebapp.server.ControlServlet.processingtime-count server.applications.TestWebapp.server.ControlServlet.requestcount-count

server.applications.TestWebapp.server.ControlServlet.maxtime-count = 112
server.applications.TestWebapp.server.ControlServlet.processingtime-count = 1173395
server.applications.TestWebapp.server.ControlServlet.requestcount-count = 3746651

The data of interest include the number of serviced requests, the maximum time taken for a request, and the cumulative processing time (in milliseconds). The average response time for the servicing of a request can be obtained by dividing the cumulative processing time by the number of serviced requests. Since all values are cumulative from the time monitoring was enabled, some number crunching (storing the baseline values for both the number of requests and the service time, and subtracting them from the observed values) is required to evaluate the response time characteristics for a specific period. (I have an AMX client that I use for this purpose; I'm thinking about cleaning it up and putting it out.)
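The baseline arithmetic described above can be sketched as follows. The counter values are hypothetical; in practice they would come from two successive asadmin get -m calls (or AMX reads) against the processingtime-count and requestcount-count attributes.

```python
# Sketch of the baseline subtraction described above. The sample values are
# hypothetical; in practice they come from "asadmin get -m" or an AMX client.

def interval_avg_ms(base_requests, base_processing_ms,
                    now_requests, now_processing_ms):
    """Average response time (ms) over the interval between two samples of
    the cumulative requestcount-count and processingtime-count values."""
    requests = now_requests - base_requests
    if requests == 0:
        return 0.0
    return (now_processing_ms - base_processing_ms) / requests

# First sample (the baseline) and a later sample of the cumulative counters:
avg = interval_avg_ms(3_000_000, 900_000, 3_746_651, 1_173_395)
print(round(avg, 2))
```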

Even though application service time monitoring is useful in ensuring that the response times are within the required limits, it is more useful as a debugging tool in investigating performance problems. In the customer scenario that I mentioned earlier, I was called in to investigate why they were seeing large spikes in response times every so often. They were convinced that these were full GC pauses and wanted me to tune the GC. A quick look at the jstat data showed that GC was not the problem (in fact, full GC did not even occur during our tests). So the next step was to check whether the spike in response time was caused by the web application (either in the app server or in the database). This is where application monitoring came in very handy.

I monitored the max service time of the Servlet of interest and found the value to be reasonable and much less than the response time spikes seen at the client. Now I could confidently tell the customer that this was not an application or application server problem. The most likely cause was some sort of network latency issue, and further investigation did prove this to be the case. The point of the story: if you are experiencing high-latency problems, application-level monitoring is a quick way to check whether the problem is indeed in the application, the application server, or the database.

In my next blog, I'll talk about how to monitor the application server so that we can identify database latency problems.
