
Building in J2EE Performance during the Development Phase

  • July 7, 2006
  • By Steven Haines

Code Profiling

The purpose of code profiling is to identify sections of your code that are running slowly and then determine why. The perfect example I have to demonstrate the effectiveness of code profiling is a project that I gave to my Data Structures and Algorithm Analysis class—compare and quantify the differences among the following sorting algorithms for various values of n (where n represents the sample size of the data being sorted):

  • Bubble sort
  • Selection sort
  • Insertion sort
  • Shell sort
  • Heap sort
  • Merge sort
  • Quick sort

As a quick primer on sorting algorithms, each of the aforementioned algorithms has its strengths and weaknesses. The first four run in O(N²) time, meaning that the run time grows quadratically with the number of items to sort, N: as N increases, the time required for the sort to complete grows in proportion to N². The last three run in O(N log N) time: as N increases, the time required grows in proportion to N log N, which is only slightly faster than linear growth and dramatically better than quadratic growth once N is large. Achieving O(N log N) performance requires additional overhead, however, which may cause the last three algorithms to actually run slower than the first four for a small number of items. My recommendation is to always examine both the nature of the data you need to sort today and the projected nature of that data throughout the life cycle of the product before selecting a sorting algorithm.
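
To put rough numbers on that difference (a back-of-the-envelope calculation of mine, not part of the original assignment), the following snippet prints the approximate work factors for the sample sizes discussed in this section:

public class GrowthComparison {
    public static void main(String[] args) {
        // Approximate work factors for the sample sizes discussed in this section
        int[] sizes = {10, 5_000, 10_000, 1_000_000};
        for (int n : sizes) {
            double quadratic = (double) n * n;                       // O(N^2) algorithms
            double linearithmic = n * (Math.log(n) / Math.log(2));   // O(N log N) algorithms
            System.out.printf("N=%,10d   N^2=%,18.0f   N*log2(N)=%,15.0f%n",
                    n, quadratic, linearithmic);
        }
    }
}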

With that foundation in place, I provided my students with a class that implements the aforementioned sorting algorithms. I really wanted to drive home the dramatic difference between executing these sorting algorithms on 10 items as opposed to 10,000 items, or even 1,000,000 items. For this exercise, I think it would be useful to profile this application against 5,000 randomly generated integers, which is enough to show the differences between the algorithms, but not so excessive that I have to leave my computer running overnight.
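
The sorting class itself is not reproduced here, but the experiment is easy to picture. The following harness is a minimal sketch of how you might drive it: it generates the 5,000 random integers once and times a sort against a copy of that data. The class and variable names are illustrative, not taken from the original code, and java.util.Arrays.sort stands in for the seven hand-written algorithms.

import java.util.Random;

public class SortTimer {
    public static void main(String[] args) {
        // Generate 5,000 random integers once so that every algorithm sorts identical data
        Random random = new Random();
        int[] original = new int[5000];
        for (int i = 0; i < original.length; i++) {
            original[i] = random.nextInt();
        }

        // Time one algorithm against its own copy of the unsorted data;
        // the other six algorithms would be timed the same way.
        int[] copy = original.clone();
        long start = System.currentTimeMillis();
        java.util.Arrays.sort(copy);   // stand-in for bubbleSort(copy), quickSort(copy), and so on
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("sort completed in " + elapsed + " ms");
    }
}

Stopwatch timing like this only tells you which algorithm is slow; a code profiler attributes that time to individual methods and lines, as the following figures show.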

Figure 6 shows the results of this execution, sorting each method by its cumulative run time.

Figure 6. The profiled methods used to sort 5,000 random integers using the seven sorting algorithms

We view the method response times sorted by cumulative time, because some of the algorithms make repeated calls to other methods to perform their sorting (for example, the quickSort() method makes 5,000 calls to q_sort()). We have to ignore the main() method, because it calls all seven sorting methods. (Its cumulative time is almost 169 seconds, but its exclusive method time is only 90 milliseconds, demonstrating that most of its time is spent in other method calls—namely, all of the sorting method calls.) The slowest method by far is the bubbleSort() method, accounting for 80 seconds in total time and 47.7 percent of total run time for the program.

The next question is, why did it take so long? Two pieces of information can give us insight into the length of time: the number of external calls the method makes and the amount of time spent on each line of code. Figure 7 shows the number of external calls that the bubbleSort() method makes.

Figure 7. The number of external calls that the bubbleSort() method makes

This observation is significant: in order to sort 5,000 items, the bubble sort algorithm required almost 12.5 million comparisons. That alone tells us that when we have a considerable number of items to sort, bubble sort is not the best algorithm to use. Taking this example a step further, Figure 8 shows a line-by-line breakdown of call counts and time spent inside the bubbleSort() method.

Figure 8. Profiling the bubbleSort() method

By profiling the bubbleSort() method, we see that 45 percent of its time is spent comparing items, and 25 percent is spent managing a for loop; these two lines account for 56 cumulative seconds. Figure 8 clearly illustrates the core issue of the bubble sort algorithm: on line 15 it executes the for loop 12,502,500 times, which resolves to 12,479,500 comparisons.
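
The students' implementation is not listed in the chapter, but a textbook bubble sort makes it clear where these numbers come from: two nested loops execute the comparison roughly N²/2 times, which for N = 5,000 is about 12.5 million. The sketch below is an illustrative reimplementation with a counter added in place of a profiler; its exact count differs slightly from the profiler's figures above, which also reflect loop-condition evaluations and the details of the original code.

import java.util.Random;

public class BubbleSortDemo {
    static long comparisons = 0;   // instrumented so we can see the count without a profiler

    static void bubbleSort(int[] a) {
        for (int i = a.length - 1; i > 0; i--) {   // each pass bubbles the largest remaining item into place
            for (int j = 0; j < i; j++) {          // the hot inner loop (compare line 15 in Figure 8)
                comparisons++;
                if (a[j] > a[j + 1]) {             // compare adjacent items...
                    int tmp = a[j];                // ...and swap them if they are out of order
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        Random random = new Random();
        int[] data = new int[5000];
        for (int i = 0; i < data.length; i++) {
            data[i] = random.nextInt();
        }
        bubbleSort(data);
        // For N = 5,000 this prints 12,497,500 comparisons: N * (N - 1) / 2
        System.out.println("comparisons: " + comparisons);
    }
}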

To be successful in deploying high-performance components and applications, you need to apply this level of profiling to your code.

Coverage Profiling

Identifying and rectifying memory issues and slow-running algorithms gives you confidence in the quality of your components, but that confidence is meaningful only if you are exercising all, or at least most, of your code. That is where coverage profiling comes in: it reveals the percentage of classes, methods, and lines of code that a test executes. Coverage profiling can provide strong validation that your unit and integration tests are effectively exercising your components.
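
As a toy illustration of the idea (the class, the method, and the JUnit test below are hypothetical examples of mine, not code from the chapter), consider a method with two branches and a unit test that exercises only one of them; a coverage profiler run against this test would immediately flag the untested line:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountCalculatorTest {

    // Hypothetical class under test: one threshold, two branches
    static class DiscountCalculator {
        double discountFor(double orderTotal) {
            if (orderTotal >= 100.0) {
                return orderTotal * 0.10;   // bulk-discount branch
            }
            return 0.0;                     // no-discount branch
        }
    }

    @Test
    public void bulkOrdersGetTenPercentOff() {
        // This single test executes only the bulk-discount branch; a coverage
        // profiler would report the no-discount return line as never executed.
        assertEquals(15.0, new DiscountCalculator().discountFor(150.0), 0.0001);
    }
}

Coverage profilers report the same kind of gap across an entire application, as the following example shows.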

In this section, I’ll show a test of a graphical application that I built to manage my digital pictures, running inside a coverage profiler that is filtered to show only my classes. I purposely chose not to test the application extensively, in order to present a more interesting example. Figure 9 shows a class summary of the code that I tested: six profiled classes in three packages are displayed in the browser window, and the methods of the JThumbnailPalette class, along with their missed lines, appear in the pane below.

Figure 9. Coverage profile of a graphical application

The test exercised all six classes but missed a host of methods and lines of code. For example, in the JThumbnailPalette class, the test completely failed to call getBackgroundColor(), setBackgroundColor(), setTopRow(), and several other methods. Furthermore, even though the paint() method was called, the test missed 16.7 percent of its lines. Figure 10 shows the specific lines of code within the paint() method that the test did not execute.

Figure 10. A look inside the JThumbnailPalette’s paint() method

Figure 10 reveals that most lines of code were executed 17 times, but that the code handling the painting of a scrolled set of thumbnails was never run. With this information in hand, the tester needs to move the scroll bar manually, or configure an automated test script to move it, to ensure that this piece of code is exercised.
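
The chapter does not list the JThumbnailPalette source, but the pattern behind a missed branch like this is easy to sketch. In the hypothetical paint() method below (all of the names and values are illustrative, not the real code), the branch that handles a non-zero scroll offset never executes unless a test moves the scroll bar, so a coverage profiler reports those lines as missed:

import java.awt.Graphics;
import javax.swing.JComponent;

// Hypothetical sketch, not the actual JThumbnailPalette source.
public class ThumbnailPane extends JComponent {
    private static final int THUMBNAIL_HEIGHT = 120;   // illustrative value
    private int scrollOffset = 0;                       // updated when the palette is scrolled

    // Never called by the test, so a coverage profiler reports it as missed
    public void setTopRow(int row) {
        this.scrollOffset = row * THUMBNAIL_HEIGHT;
        repaint();
    }

    @Override
    public void paint(Graphics g) {
        super.paint(g);
        if (scrollOffset == 0) {
            // Covered: every test paints the palette in its initial, unscrolled position
            paintThumbnails(g, 0);
        } else {
            // Missed: no test scrolls the palette, so these lines never execute
            paintThumbnails(g, scrollOffset);
        }
    }

    private void paintThumbnails(Graphics g, int offset) {
        // drawing code omitted for brevity
    }
}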

Coverage is a powerful profiling tool, because without it, you may miss code that your users will encounter when they use your application in a way that you do not expect (and rest assured, they definitely will).

As components are built, performance unit tests should be performed alongside functional unit tests. These performance tests include checks for memory issues and for slow-running code, as well as validation of test coverage to ensure that the majority of the component's code is actually being exercised.

About the Author

Steven Haines is the author of three Java books: The Java Reference Guide (InformIT/Pearson, 2005), Java 2 Primer Plus (SAMS, 2002), and Java 2 From Scratch (QUE, 1999). In addition to contributing chapters and coauthoring other books, as well as technical editing countless software publications, he is also the Java Host on InformIT.com. As an educator, he has taught all aspects of Java at Learning Tree University as well as at the University of California, Irvine. By day he works as a Java EE 5 Performance Architect at Quest Software, defining performance tuning and monitoring software as well as managing and performing Java EE 5 performance tuning engagements for large-scale Java EE 5 deployments, including those of several Fortune 500 companies.

Source of this material

Pro Java EE 5 Performance Management and Optimization
By Steven Haines

Published: May 2006, Paperback: 424 pages
Published by Apress
ISBN: 1590596102
Retail price: $49.99
This material is from Chapter 5 of the book.
