Portal Performance Planning

  • September 11, 2008
  • By Scott Nelson

Almost every time I think "This doesn't need to be brought up; it is obvious to everyone" during the planning stages of a portal project, I end up bringing it up during the QA or production stage, shortly after mentally kicking myself for making the same mistake again. I bring this up now because there are (hopefully) going to be many points in this article where you will be saying to yourself (or out loud, if you have my silly habit) "I already knew that". When this happens, you may shake your head in amazement at the obvious, but please read on. If you learn even one new performance tip from this article, it will be worth it not to have to work some weekend trying to figure out what you missed. That is no guarantee that you won't be fixing some pitfall not covered here, but at least then you can be mad at me instead of yourself. What else are consultants for?

The ideas here are geared mainly to J2EE, though many cross over well to C# and PHP. And, while the majority of these approaches were developed while working with WebLogic Portal, almost all are applicable to any web application. So, if you are using (or evaluating) something other than WLP for your project, the WLP-specific tips have been grouped into a single section for you to blatantly skip without hurting my feelings or your application's performance.

Three Common Performance Mistakes

Performance issues are almost inevitable in new portal applications. This is because portals are usually developed to aggregate access to systems that don't already co-exist, which means those systems probably weren't built with the intention of working with a portal. The difference between a well-planned portal project and one that people try to forget as soon as it is over isn't determined by how many performance issues come up. Portal performance planning is rated on how quickly problems are solved when they present themselves. Every portal project where the mere mention of the application elicits a groan and a shudder from those involved suffered from at least one of the following mistakes.

Over-Planning for Performance

There are multiple reasons that over-planning for performance is the first mistake brought up here. One is that if it were last, you might never get to it (something that happens on projects, too). Another is that it is totally counter-intuitive, so it is often missed as a mistake. Further (and there are other reasons that I won't bore you with), it is one of those mistakes I have lived through because I didn't think it was worth mentioning.

Over-planning has three serious drawbacks. The first is that, when too much focus is put on performance in the planning and design phases, a great deal of effort will be spent on tasks that have an infinitesimal impact on performance. Although the idea that every millisecond saved is an improvement is valid, it should also be prioritized accordingly. That is, if you have some time after all the development is complete, go back and do those little tweaks.

The second drawback is that over-planning frequently leads to over-confidence. If you are positive you have plugged every performance hole in your design phase, you are going to have a hard time figuring out where to start when a performance issue occurs.

The last drawback that you will examine here (there are others, but they are not as common, and specifying when they apply would take more exposition than most are willing to read) is a direct result of the second: no matter how well you plan, something will be missed. It is often a case of missing the forest while focusing on the trees.

A perfect example of how over-planning and over-confidence work together to undermine a project was an extremely large and complicated portal effort managed with a big-bang waterfall approach. Detailed designs were integrated with an excellent model-driven-architecture tool and were considered bullet-proof by everyone from the architects and project managers to the developers and QA teams, all of whom reviewed them before development began. The first integration release, with minimal functionality deployed, ran at a crawl. Debuggers didn't pinpoint anything obvious, and the logs simply showed a slowdown occurring on every call to the back end. Days were spent looking for a network issue because the slowdown always occurred during a very simple call for user information that couldn't possibly be (in the opinion of the designers and developers) the problem.

To make a long story short (something usually said just a little too late), the problem was that the more efficient StringBuffer, rather than String concatenation, was used to assemble the necessary parameters for the request. Because the design was meticulously detailed prior to development, and development began with the fully documented stubs generated by the MDA tool, the incredibly inefficient process of reallocating the StringBuffer's internal buffer roughly 20 times per call, because it had been created with the default constructor, was the bottleneck that no one found until a junior developer who had just read about how StringBuffer behaves in such situations pointed it out to the much more senior team.
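
To make the fix concrete, here is a minimal sketch of the pattern (the class, method names, and buffer size are invented for illustration). A StringBuffer created with the default constructor starts with a capacity of just 16 characters, so every append that overflows the current capacity forces the internal array to be reallocated and copied:

    // Hypothetical request-building code illustrating the bottleneck.
    public class RequestBuilder {

        // Slow: the default constructor allocates only 16 characters,
        // so appending long parameter strings reallocates and copies
        // the internal array over and over.
        public static String buildRequestSlow(String[] params) {
            StringBuffer buf = new StringBuffer();
            for (int i = 0; i < params.length; i++) {
                buf.append("param").append(i).append('=')
                   .append(params[i]).append('&');
            }
            return buf.toString();
        }

        // Better: sizing the buffer up front (1024 is an illustrative
        // guess) means the appends never trigger a reallocation.
        public static String buildRequestFast(String[] params) {
            StringBuffer buf = new StringBuffer(1024);
            for (int i = 0; i < params.length; i++) {
                buf.append("param").append(i).append('=')
                   .append(params[i]).append('&');
            }
            return buf.toString();
        }
    }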

Under-Planning for Performance

Okay, so now that you all know not to put too much into performance planning up front, it is time for the first round of "well, everyone knows that" as you look at under-planning. I'm hopefully going to make this worthwhile for those who already know that under-planning is a bad idea by throwing in a few specific approaches that are often not included in performance planning.

The first piece (and probably the most obvious) is to use a logging API to help pinpoint performance issues while making sure that it doesn't become one. Most logging APIs include a check for logging levels. This allows the developer to include logging statements generously during development, and preface those that will not often be needed with a check for the debug level. This leads to a so-obvious-it-doesn't-need-to-be-mentioned (a term we will use often, abbreviated as SOIDNTBM) mention that logging should be set at error level in production.
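
As a minimal sketch, assuming Log4j (any logging API with level checks works the same way; the class and method names are invented), the guard ensures the message string is never even concatenated when production runs at error level:

    import org.apache.log4j.Logger;

    public class UserInfoService {

        private static final Logger log = Logger.getLogger(UserInfoService.class);

        public void loadUserInfo(String userId) {
            // Guarded: the message string is never built unless the
            // logger is actually running at debug level.
            if (log.isDebugEnabled()) {
                log.debug("loadUserInfo starting for user " + userId);
            }
            // ... fetch and assemble the user's data here ...
        }
    }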

If your infrastructure doesn't include a pre-production environment identical to production where debugging can be turned on for troubleshooting, you can still take the short-term performance hit of putting production into debug mode while you chase a performance issue. This raises the question of when to log an event. Any call to an external system should have a debug log statement. Any internal algorithm that can take longer than a few milliseconds should log at the beginning, at the end, and around any heavy calls (all guarded by the debug check).
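
In practice, the bracketing looks something like this sketch (the service and helper types are invented for illustration):

    import org.apache.log4j.Logger;

    // Hypothetical back-end types, stubbed so the example compiles.
    interface AccountService { AccountData getAccount(String accountId); }
    class AccountData { }

    public class AccountPortletHelper {

        private static final Logger log = Logger.getLogger(AccountPortletHelper.class);
        private final AccountService accountService;

        public AccountPortletHelper(AccountService accountService) {
            this.accountService = accountService;
        }

        public AccountData fetchAccount(String accountId) {
            // Entries before and after the external call show exactly
            // where the time goes once debug is turned on.
            if (log.isDebugEnabled()) {
                log.debug("calling account service for " + accountId);
            }
            AccountData data = accountService.getAccount(accountId);
            if (log.isDebugEnabled()) {
                log.debug("account service returned for " + accountId);
            }
            return data;
        }
    }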

Most logging APIs can be configured so that each log entry includes the class and a timestamp. This means that writing your own timers to measure the length of a call adds nothing but a performance hit, because the timestamps of the entries surrounding the call can be subtracted to determine its duration.
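
For instance, here is a sketch of a Log4j properties configuration (the appender name and log file are illustrative) that stamps every entry with an ISO-8601 timestamp, the thread, the level, and the class:

    log4j.rootLogger=ERROR, FILE
    log4j.appender.FILE=org.apache.log4j.FileAppender
    log4j.appender.FILE.File=portal.log
    log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n

Subtracting the timestamp of the entry logged before a call from the one logged after it gives the call's duration with no timer code at all.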

All exceptions should be logged, and they should be logged all the time, without the debug flag check. This leads to another SOIDNTBM: only use exceptions for exceptions. Every time I see a catch block whose logic makes it obvious that the developer expects the exception to be thrown by their own code and is trying to use it as a return value for a case they can anticipate, I know I will be spending a lot of time tracking down performance issues.
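
Here is a sketch of the anti-pattern and a correction, with every type and method name invented for illustration:

    // Hypothetical types, stubbed so the example compiles.
    class User { static final User GUEST = new User(); }
    class UserNotFoundException extends Exception { }
    interface UserDao {
        User load(String id) throws UserNotFoundException; // throws when absent
        User find(String id);                              // returns null when absent
    }

    public class UserLookup {

        private UserDao userDao;

        // Anti-pattern: the "not found" case is fully anticipated, yet
        // it is handled by catching an exception; filling in a stack
        // trace on every miss is expensive.
        public User findUserBad(String id) {
            try {
                return userDao.load(id);
            } catch (UserNotFoundException e) {
                return User.GUEST;
            }
        }

        // Better: test for the expected case directly and reserve
        // exceptions for genuine failures.
        public User findUserGood(String id) {
            User user = userDao.find(id);
            return (user == null) ? User.GUEST : user;
        }
    }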

The fact that calling new on an object is a performance hit is definitely SOIDNTBM. Yet, time and time again, I see the same string literals repeated throughout applications. What is even more dismaying is when the first attempt at reducing this overhead is to declare them as static finals at the class level even though the same string is used by multiple classes. I once audited an application where the first JSP I picked at random contained several static final String declarations. It occurred to me that these strings were probably used elsewhere. I picked one (again at random) and found the same static final declaration in 78 objects. I realize that many virtual machines are optimized to catch this sort of thing, but that optimization generally occurs only between objects in the same process.

A trick I was taught on my very first Java project was to have a singleton (or interface) that contains all such strings. Another performance benefit of this approach is that in most code blocks you can use the higher-performance "==" comparison rather than the much higher overhead of .equals(). Granted, a single class may be hard to maintain for very large applications, at which point such static constant objects can be split up by application or package, though as much care as practical should be given to not repeating the same declarations.
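
A minimal sketch of that trick, with hypothetical names throughout. Note that the reference comparison is only valid when both sides are guaranteed to be the same constant instance; whenever a string may have been built at runtime, .equals() remains the safe choice:

    // A single home for strings shared across the application, so each
    // literal is declared exactly once.
    public interface PortalConstants {
        String STATUS_ACTIVE = "ACTIVE";
        String STATUS_SUSPENDED = "SUSPENDED";
        String SESSION_USER_KEY = "portal.user";
    }

    class AccountStatusChecker {
        // Safe only when callers pass the shared constant itself;
        // otherwise fall back to .equals().
        public boolean isActive(String status) {
            return status == PortalConstants.STATUS_ACTIVE;
        }
    }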




