Sample Chapter: Efficient PL/SQL, Page 5
Voilà! We can insert 100,000 customers in 30 seconds, and because it's unlikely that this many people are all going to try to register at once, we can be pretty confident that the program is fine from a performance perspective. However, the linchpin of demonstrability is not trying to get code to work, but trying to break it. Of course, in this example, it doesn't take a rocket scientist to see where the problems are. Assuming that pin numbers are always positive, the column definition of pin_number tells us a maximum of 999,999 customers is expected. Table 1-1 shows what happens when we reach our maximum customer mark.
Table 1-1. Benchmark for Customer Registration
| ROW INSERTION TEST      | TIME TAKEN TO GENERATE |
|-------------------------|------------------------|
| Rows 100,000 to 200,000 | 38 seconds             |
| Rows 300,000 to 400,000 | 53 seconds             |
| Rows 600,000 to 700,000 | 123 seconds            |
This makes sense: as more and more customer rows are created, the likelihood of a randomly generated pin number already being in use becomes higher, so we need to iterate more and more times through our random number generator before finding a unique pin number. The process therefore consumes more CPU per registration and gets progressively slower. Consider the poor customer who happens to be the one-millionth person to register (or at least attempt to): they will wait forever trying to obtain a pin number, because every possible pin is already taken. Randomness and uniqueness are mutually exclusive; if the occurrence of something (in this case a pin number) is genuinely random, we can never be sure of uniqueness. We did not even have to extend the test to cover lots of concurrent customers: we demonstrated that the system exhibits performance problems with just one customer trying to register, provided many others have already done so.
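To see where the time goes, recall the shape of the original registration routine: generate a random pin, attempt the insert, and retry on a duplicate. A sketch of that pattern (the exact code appears earlier in the chapter; this is just the general shape):

```sql
begin
  loop
    begin
      insert into customers
      values (round(dbms_random.value(1,999999)));
      -- the unique index on pin_number rejects duplicates
      exit;
    exception
      when dup_val_on_index then
        null;  -- pin already taken; loop and try another
    end;
  end loop;
end;
```

Each pass through the loop succeeds with probability (999,999 - k)/999,999 when k pins are already taken, so the expected number of attempts for the next customer is 999,999/(999,999 - k): barely more than one attempt early on, roughly two and a half attempts once 600,000 customers exist, and unbounded as we approach the millionth.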
This is what I like to call the "Let's hope it's not a success" approach to coding. That is, it's quite possible that the solution just implemented will work well as long as you don't attract too many customers. But of course, we implemented this system in the hope of doing precisely the opposite. It's a no-win situation. The more customers you get, the worse the system will treat them.
The solution was to compose a longer pin number, the first six digits generated by a sequence (an object inherently designed to provide uniqueness), and the last six digits generated randomly.
First, we need a sequence that generates six-digit values, so we will make it start from 100,000. Our table will also need to be able to house a larger pin number.
SQL> alter table CUSTOMERS modify PIN_NUMBER number(12);

Table altered.

SQL> create sequence cust_seq cache 1000
  2  start with 100000;

Sequence created.
Then we amend our procedure to insert a pin number composed of the next value from our sequence, followed by a random six-digit number. The code is smaller as well: because the sequence guarantees uniqueness, the dup_val_on_index checking is no longer needed.
SQL> create or replace
  2  procedure gen_customer is
  3    v_new_cid customers.pin_number%type;
  4  begin
  5    insert into customers
  6    values (cust_seq.nextval*1000000+
  7            round(dbms_random.value(100000,999999)));
  8  end;
  9  /

Procedure created.
Repeating our tests shows that we sacrificed a little of the initial performance (due to the larger amount of data being inserted) for the sake of consistent and predictable results right up to the maximum number of expected customers, as shown in Table 1-2.
Table 1-2. Repeated Benchmark for Customer Registration
| ROW INSERTION TEST      | TIME TAKEN TO GENERATE |
|-------------------------|------------------------|
| Rows 0 to 100,000       | 55 seconds             |
| Rows 100,000 to 200,000 | 55 seconds             |
| Rows 300,000 to 400,000 | 59 seconds             |
| Rows 600,000 to 700,000 | 59 seconds             |
| Rows 900,000 to 999,999 | 61 seconds             |
With this simple change, we have definitively proved that one million customers can be handled, and that the millionth customer will be able to register just as efficiently as any other. We have thus demonstrated that the program works under the expected customer volume conditions. The next test ensures that if we have many customers concurrently trying to register, performance does not suffer. We can simulate concurrent users with the dbms_job scheduling facility. For example, consider the following anonymous PL/SQL block (which creates 10,000 customers):
begin
  for i in 1 .. 10000 loop
    gen_customer;
  end loop;
  commit;
end;
If we want to execute this simultaneously across 20 concurrent sessions, we simply submit it 20 times using dbms_job. First, we ensure that at least 20 concurrent job processes are allowed.
SQL> alter system set job_queue_processes = 20;

System altered.
Because we cannot directly view the output from database jobs using dbms_output, we will create a custlog table to hold the elapsed time for each of the job executions.
SQL> create table CUSTLOG (elapsed_centiseconds number);

Table created.
We then submit our PL/SQL job within the dbms_job framework. We have altered the anonymous block that is submitted to capture the elapsed time into the custlog table.
SQL> declare
  2    j number;
  3    job_string varchar2(1000) :=
  4      'declare
  5         s number := dbms_utility.get_time;
  6       begin
  7         for i in 1 .. 10000 loop
  8           gen_customer;
  9         end loop;
 10         insert into custlog values (dbms_utility.get_time-s);
 11         commit;
 12       end;';
 13  begin
 14    for i in 1 .. 20 loop
 15      dbms_job.submit(j, job_string);
 16    end loop;
 17  end;
 18  /

PL/SQL procedure successfully completed.
The jobs will not commence until a commit is issued. Once the jobs have completed, we can view the custlog table to check the elapsed time of each submitted job.
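For example, a summary query along the following lines would do (the column stores centiseconds, so we divide by 100 to get seconds; the actual figures will of course vary from system to system):

```sql
SQL> select count(*) jobs,
  2         min(elapsed_centiseconds)/100 min_secs,
  3         max(elapsed_centiseconds)/100 max_secs
  4    from custlog;
```

If all 20 jobs report broadly similar elapsed times, we have evidence that concurrent registrations do not degrade one another.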