Ensuring Performance During the Quality Assurance Phase
The first step in identifying performance issues is to establish monitoring in your integration and production staging environments and to record the application's behavior while it is under load. The resulting record lists service requests that can be sorted by execution count, average execution time, and total execution time. These service requests are then traced back to use cases and validated against predefined SLAs; any service request whose response time exceeds its SLA must be analyzed to determine the cause. Figure 1 shows a breakdown of service requests running inside the MedRec application. In this 30-second time slice, two service requests spent an extensive amount of time executing: GET /admin/viewrequests.do was executed 12 times, accounting for 561 seconds, and POST /patient/register.do was executed 10 times, accounting for 357 seconds.
Figure 1. Breakdown of service requests running inside the MedRec application
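The aggregation behind such a record is straightforward. The following is a minimal sketch, not the monitoring tool used in the chapter; the class and method names (SlaReport, record, violations) are illustrative, and the SLA threshold is an assumed value.

```java
import java.util.*;

// Minimal sketch: aggregate recorded response times per service request
// and flag any request whose average exceeds its SLA. All names here are
// illustrative, not part of any monitoring product.
public class SlaReport {

    static class Stats {
        int count;
        double totalSeconds;
        double average() { return count == 0 ? 0.0 : totalSeconds / count; }
    }

    private final Map<String, Stats> byRequest = new HashMap<>();

    // Record one observed execution of a service request.
    public void record(String request, double seconds) {
        Stats s = byRequest.computeIfAbsent(request, k -> new Stats());
        s.count++;
        s.totalSeconds += seconds;
    }

    // Return the requests whose average response time exceeds their SLA.
    public List<String> violations(Map<String, Double> slaSeconds) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Stats> e : byRequest.entrySet()) {
            Double sla = slaSeconds.get(e.getKey());
            if (sla != null && e.getValue().average() > sla) {
                out.add(e.getKey());
            }
        }
        return out;
    }

    public Stats stats(String request) { return byRequest.get(request); }

    public static void main(String[] args) {
        SlaReport report = new SlaReport();
        // Figures from the recorded session: 10 executions, 357 s total.
        for (int i = 0; i < 10; i++) report.record("POST /patient/register.do", 35.7);
        System.out.printf("avg = %.1f s%n", report.stats("POST /patient/register.do").average());
        // Assumed SLA of 5 seconds for illustration only.
        System.out.println(report.violations(Map.of("POST /patient/register.do", 5.0)));
    }
}
```

Sorting the resulting statistics by total execution time surfaces the heaviest requests first, which is exactly the view shown in Figure 1.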
As Figure 2 shows, looking at the average exclusive time for each method that satisfies the POST /patient/register.do service request, the HTTP POST at the WebLogic cluster consumed, on average, 35.477 seconds of the 35.754-second total request average. This is significant: the request passed quickly from the Web server to the application server, but then waited at the application server for a thread to process it. The remainder of the request processed relatively quickly.
Figure 2. Breakdown of response time for the POST /patient/register.do service request for each method in a hierarchical request tree
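The exclusive-time figures in such a request tree follow from a simple rule: a node's exclusive time is its inclusive time minus the inclusive time of its children. A minimal sketch, with illustrative names (CallNode, exclusiveSeconds) and a two-node tree that mirrors the shape of the register.do breakdown:

```java
import java.util.*;

// Minimal sketch of the exclusive-time calculation behind a hierarchical
// request tree: a node's exclusive time is its inclusive time minus the
// inclusive time of its children. Names are illustrative.
public class CallNode {
    final String name;
    final double inclusiveSeconds;          // time spent in this node and below
    final List<CallNode> children = new ArrayList<>();

    CallNode(String name, double inclusiveSeconds) {
        this.name = name;
        this.inclusiveSeconds = inclusiveSeconds;
    }

    CallNode child(String name, double inclusiveSeconds) {
        CallNode c = new CallNode(name, inclusiveSeconds);
        children.add(c);
        return c;
    }

    // Time spent in this node itself, excluding its children.
    double exclusiveSeconds() {
        double childTotal = 0.0;
        for (CallNode c : children) childTotal += c.inclusiveSeconds;
        return inclusiveSeconds - childTotal;
    }

    public static void main(String[] args) {
        // Shape of the register.do breakdown: almost all of the 35.754 s
        // average is spent at the HTTP POST step waiting for a thread.
        CallNode post = new CallNode("POST /patient/register.do", 35.754);
        post.child("downstream processing", 0.277);
        System.out.printf("exclusive at HTTP POST: %.3f s%n", post.exclusiveSeconds());
    }
}
```

A large exclusive time at a dispatch node such as the HTTP POST, with little time attributed to the methods beneath it, is the signature of a request waiting for a resource rather than doing work.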
Figure 3 shows a view of the performance metrics for the application server during this recorded session. This screen is broken into three regions: the top region shows the heap behavior, the middle shows the thread pool information, and the bottom shows the database connection pool information.
Figure 3 confirms our suspicions: the number of idle threads during the session hit zero, and the number of pending requests grew as high as 38. Furthermore, toward the end of the session, the database connection usage peaked at 100 percent and the heap was experiencing significant garbage collection.
Figure 3. Performance metrics for the application server during this recorded session
This level of diagnosis requires insight into the behavior of the application, the application server, and its external dependencies. With this information, you can determine exactly where and why your application is slowing down.
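The starvation pattern observed above can be reproduced in miniature with a fixed-size thread pool. The sketch below uses java.util.concurrent rather than WebLogic's execute queues, and the pool size and request count are illustrative, not taken from the MedRec configuration; it simply shows how requests queue up once every worker thread is busy.

```java
import java.util.concurrent.*;

// Minimal sketch reproducing the starvation pattern from the recorded
// session: when every worker thread is busy, new requests queue up and
// wait, inflating response time before any application code runs.
public class StarvationDemo {

    // Returns the number of requests left waiting in the queue once all
    // worker threads are occupied.
    static int pendingRequests(int poolSize, int requests) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < requests; i++) {
            pool.submit(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        // Give the pool a moment to hand tasks to its workers.
        while (pool.getActiveCount() < Math.min(poolSize, requests)) Thread.sleep(10);
        int pending = pool.getQueue().size();   // requests with no thread to serve them
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return pending;
    }

    public static void main(String[] args) throws InterruptedException {
        // 2 worker threads, 10 concurrent requests: 8 wait in the queue,
        // just as pending requests climbed while idle threads sat at zero.
        System.out.println("pending: " + pendingRequests(2, 10));
    }
}
```

In the recorded session, the same dynamic showed up as zero idle threads and up to 38 pending requests; the fix lies in tuning the pool and the work it serializes, not in the application code that eventually runs.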
As components are integrated and tested in a production staging environment, application bottlenecks are identified and resolved.
About the Author
Steven Haines is the author of three Java books: The Java Reference Guide (InformIT/Pearson, 2005), Java 2 Primer Plus (SAMS, 2002), and Java 2 From Scratch (QUE, 1999). In addition to contributing chapters and coauthoring other books, as well as technical editing countless software publications, he is also the Java Host on InformIT.com. As an educator, he has taught all aspects of Java at Learning Tree University as well as at the University of California, Irvine. By day he works as a Java EE 5 Performance Architect at Quest Software, defining performance tuning and monitoring software as well as managing and performing Java EE 5 performance tuning engagements for large-scale Java EE 5 deployments, including those of several Fortune 500 companies.
Source of this material: Pro Java EE 5 Performance Management and Optimization
By Steven Haines