Sample Chapter: Efficient PL/SQL, Page 3
What is Efficient PL/SQL?
If (or hopefully, after you have read this book, when) you are convinced of the benefits of PL/SQL, the next logical task is to ensure that any PL/SQL code you develop runs efficiently.
There are a myriad of books on the market that espouse best practices for efficient PL/SQL. However, many of these texts propose that efficiency comes from good standards and consistent coding styles, for example, a set of naming standards for such constructs as variables, cursor definitions, and so on. It is certainly true that adhering to such standards will improve consistency and readability, and thus reduce the maintenance effort required during the coding life-cycle. However, it takes a leap of faith to assume that this will magically result in PL/SQL code that is efficient. Having well-structured methodologies and standards gives you an efficient environment for coding PL/SQL, but this by no means guarantees that you will create efficient code. So let's state in advance that this chapter (indeed, this book) is not about having good naming standards or consistent coding practices within your development team. These are important components of a successful development project (for any language, not just PL/SQL), but building efficient PL/SQL is a related but different topic. What, then, do we mean by efficient PL/SQL? We contend that code is efficient only if it satisfies three criteria:
- Performance: It runs within an acceptable elapsed time.
- Impact: It does not break (or degrade in an unacceptable fashion) any other components in your system.
- Demonstrability: You can prove (or demonstrate) that under all reasonable conditions, the code will not violate the Performance or Impact conditions. Put simply, you tried to break it and you couldn't.
The order here is important. You must prove that your code can meet the elapsed time requirement before you move on to the impact analysis. Let's take a look at these broad guidelines in a bit more detail.
Performance

Ask a PL/SQL developer what efficient code is, and the standard response will be "it runs fast." After all, this seems like common sense: if your code runs as fast as possible, what more could be asked of you? But how fast is fast enough?
The problem with this philosophy is that it is impossible to define an end goal. How do you know that the program can't run just a little bit faster? Given the vast number of features that appear in each new release of Oracle, for any given PL/SQL program there is probably always some adjustment you could make to squeeze a little more performance out of it. A perfect example of this is the native compilation feature introduced in Oracle 9i. It is probably true that any PL/SQL program that contains some procedural logic (as opposed to being just a wrapper around some simple SQL) would run faster if it were natively compiled rather than interpreted (the default). But is it worth it? Natively compiled PL/SQL means that you have an external file system directory that needs to be backed up as part of your standard backup procedures. There are also overheads when mixing interpreted and natively compiled PL/SQL units, so you may need to look at natively compiling every PL/SQL unit in the database, including those delivered in the standard database installation scripts (catproc.sql and so on). And after the effort of converting to a natively compiled database, did the gains justify the additional license costs of that C compiler you had to install? Can you quantify the gains you achieved anyway?
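As a sketch of what that effort involves (assuming an Oracle 9i database with a C compiler already configured for the server; the procedure name busy_loop is purely illustrative), you switch the session to native compilation before creating the unit, and you can check afterward how each unit was compiled:

```sql
-- 9i session parameter; later releases use plsql_code_type instead
ALTER SESSION SET plsql_compiler_flags = 'NATIVE';

-- Illustrative unit with purely procedural logic
CREATE OR REPLACE PROCEDURE busy_loop IS
  l_total NUMBER := 0;
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    l_total := l_total + i;
  END LOOP;
END;
/

-- Confirm the compilation mode recorded for the unit
SELECT param_value
FROM   user_stored_settings
WHERE  object_name = 'BUSY_LOOP'
AND    param_name  = 'plsql_compiler_flags';
```

Note that the setting applies only to units compiled after it is changed, which is precisely why a wholesale conversion means recompiling everything, including the units created by catproc.sql.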
Trying to get PL/SQL to run as fast as it can is a waste of resources, namely your own! The question you have to ask yourself is not "Is it running as fast as possible?" but "Does it run within an acceptable elapsed time?" There is a subtle difference between the two. You will notice that we have not quantified what "acceptable" is because only you or your users can define the appropriate limits for each PL/SQL unit. If the code is designed for processing the nightly set of batch interfaces, it may not need to run sub-second; it simply needs to finish within the boundaries defined by the nightly batch window. However, if the code needs to (for example) return a description for a product item to a field on the user's data entry screen, a quick turnaround time is vital. Even more vital is that these limits are defined before the code is written.
Once the elapsed time criteria have been defined and accepted, you cannot consider your code to be performance efficient until that target has been reached. This is true even if the reason the target wasn't reached is not "your fault." Oracle professionals (both DBAs and developers) sometimes have a tendency to "pass the performance buck." For example, consider a PL/SQL program that has to complete in three minutes, but part of its function is to query data from a remote Oracle database that can only be accessed via a very slow network. As a result, the PL/SQL program takes seven minutes to run. The typical response to this sort of performance problem is "The program is fine, it's a network issue." We argue that because this program fails the elapsed time requirement, it is therefore inefficient. It's unfortunate for the developer that there is a network problem, but that's just plain tough luck! Maybe the importance of the functionality of the program will yield a network upgrade, or alternatively, it might take a whole suite of other mechanisms (snapshots, queuing, materialized views, and so on) to yield a solution in which the program can run in its required three minutes. The fact remains that until it can be demonstrated that the proposed solution runs in three minutes, the solution is inefficient and should not be implemented. The definition of performance is not how much CPU a process uses or how many disk operations are required; it is a measure of acceptable response time.
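One way to make an elapsed-time target demonstrable rather than anecdotal is to wrap the program in a harness that fails loudly when the agreed limit is exceeded. The following is a minimal sketch, assuming a hypothetical procedure named nightly_load and a three-minute target; DBMS_UTILITY.GET_TIME reports elapsed time in hundredths of a second:

```sql
DECLARE
  l_start PLS_INTEGER;
  -- Agreed target: 3 minutes, expressed in centiseconds
  l_limit CONSTANT PLS_INTEGER := 180 * 100;
BEGIN
  l_start := DBMS_UTILITY.GET_TIME;
  nightly_load;  -- hypothetical program under test
  IF DBMS_UTILITY.GET_TIME - l_start > l_limit THEN
    RAISE_APPLICATION_ERROR(-20000,
      'Elapsed time exceeded the agreed target');
  END IF;
END;
/
```

Running such a harness against realistic data volumes is also a first step toward the demonstrability criterion: it records an attempt to break the code, not just a hope that it will behave.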