Faster Fetching
May 20, 2009

Hey, Ted:
When using embedded SQL to read data, I have the option of retrieving one row or multiple rows with one fetch. Does fetching more than one row at a time improve program performance?

–Neil

I'd like to think it does, Neil. It seems to me it should. Here are the comments of John, a reader who claims that a multiple-row fetch is appreciably faster:

I have used fetch with multiple-occurrence data structures in order to read multiple records at once and minimize the use of FETCH in the program. The execution of a FETCH is quite resource-intensive and slow. I have found 50 to 100 rows quite effective. Trust me. It makes a massive difference in execution time. In programming terms, I wrote a read subroutine that managed the data structure occurrence and executed FETCH only when I had run out of occurrences. It hid the mechanics of end-of-file and data handling from the main routines.

I ran a little test, just to see what I might discover. First, here's a simple single-fetch program.

H option(*srcstmt:*nodebugio)

Fqsysprt   o    f  132        printer

D Transaction     ds                  qualified inz
D  Batch                         3p 0
D  Code                          2a
D  Order                         7a
D  Item                         15a
D  Warehouse                     3a
D  Quantity                     11p 3

D SqlEof          c                   const('02000')

D DetailLine      s            132a

D Print           pr
D  Data                        132a   value

D/copy prototypes,assert

 /free
     *inlr = *on;

     exec sql
        declare Input cursor for
           select batch, tcode, ordno, itnbr, house, trqty
             from imhist
            order by prqoh desc;

     exec sql open Input;
     assert (sqlstt = *zeros: 'Open error');

     dow '1';
        exec sql fetch next from Input into :Transaction;
        assert (sqlstt <= SqlEof: 'Fetch error');
        if sqlstt >= SqlEof;
           leave;
        endif;
        Print (%editc(Transaction.Batch:'3') + ' ' +
               Transaction.Code      + ' ' +
               Transaction.Order     + ' ' +
               Transaction.Item      + ' ' +
               Transaction.Warehouse + ' ' +
               %editc(Transaction.Quantity:'J'));
     enddo;

     return;
 /end-free

Oqsysprt   e            PrintLine      1
O                       DetailLine

 * ==========================================================
P Print           b
D                 pi
D  Data                        132a   value
 /free
     DetailLine = Data;
     except PrintLine;
 /end-free
P                 e

Here's the same program, with a multiple-row fetch into an array data structure.

H option(*srcstmt:*nodebugio)

Fqsysprt   o    f  132        printer

D ArraySize       s             10i 0 inz(100)

D Transaction     ds                  qualified inz dim(100)
D  Batch                         3p 0
D  Code                          2a
D  Order                         7a
D  Item                         15a
D  Warehouse                     3a
D  Quantity                     11p 3

D Ndx             s             10i 0

D SqlEof          c                   const('02000')

D DetailLine      s            132a

D Print           pr
D  Data                        132a   value

D/copy prototypes,assert

 /free
     *inlr = *on;

     assert (%elem(Transaction) = ArraySize:
             'Invalid data structure definition');

     exec sql
        declare Input cursor for
           select batch, tcode, ordno, itnbr, house, trqty
             from imhist
            order by prqoh desc;

     exec sql open Input;
     assert (sqlstt = *zeros: 'Open error');

     dou sqlstt >= SqlEof;
        exec sql fetch next from Input
                 for :ArraySize rows
                 into :Transaction;
        assert (sqlstt <= SqlEof: 'Fetch error');
        for Ndx = 1 to SqlEr3;
           Print (%editc(Transaction(Ndx).Batch:'3') + ' ' +
                  Transaction(Ndx).Code      + ' ' +
                  Transaction(Ndx).Order     + ' ' +
                  Transaction(Ndx).Item      + ' ' +
                  Transaction(Ndx).Warehouse + ' ' +
                  %editc(Transaction(Ndx).Quantity:'J'));
        endfor;
     enddo;

     return;
 /end-free

Oqsysprt   e            PrintLine      1
O                       DetailLine

 * ==========================================================
P Print           b
D                 pi
D  Data                        132a   value
 /free
     DetailLine = Data;
     except PrintLine;
 /end-free
P                 e

Last, I tested a multiple-row fetch into a multiple-occurrence data structure.
H option(*srcstmt:*nodebugio)

Fqsysprt   o    f  132        printer

D ArraySize       s             10i 0 inz(100)

D Transaction     ds                  qualified inz occurs(100)
D  Batch                         3p 0
D  Code                          2a
D  Order                         7a
D  Item                         15a
D  Warehouse                     3a
D  Quantity                     11p 3

D Ndx             s             10i 0

D SqlEof          c                   const('02000')

D DetailLine      s            132a

D Print           pr
D  Data                        132a   value

D/copy prototypes,assert

 /free
     *inlr = *on;

     assert (%elem(Transaction) = ArraySize:
             'Invalid data structure definition');

     exec sql
        declare Input cursor for
           select batch, tcode, ordno, itnbr, house, trqty
             from imhist
            order by prqoh desc;

     exec sql open Input;
     assert (sqlstt = *zeros: 'Open error');

     dou sqlstt >= SqlEof;
        exec sql fetch next from Input
                 for :ArraySize rows
                 into :Transaction;
        assert (sqlstt <= SqlEof: 'Fetch error');
        for Ndx = 1 to SqlEr3;
           %occur(Transaction) = Ndx;
           Print (%editc(Transaction.Batch:'3') + ' ' +
                  Transaction.Code      + ' ' +
                  Transaction.Order     + ' ' +
                  Transaction.Item      + ' ' +
                  Transaction.Warehouse + ' ' +
                  %editc(Transaction.Quantity:'J'));
        endfor;
     enddo;

     return;
 /end-free

Oqsysprt   e            PrintLine      1
O                       DetailLine

 * ==========================================================
P Print           b
D                 pi
D  Data                        132a   value
 /free
     DetailLine = Data;
     except PrintLine;
 /end-free
P                 e

The file with which I tested had 1.3 million records. The first two programs used 11 seconds of CPU time. However, the last one used only seven seconds of CPU time. Using an array data structure was no faster than the single-row fetch. The multiple-occurrence data structure was faster than the array data structure.

–Ted
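John's read-subroutine idea is worth a quick sketch. The code below is only an illustration, not one of the tested programs: it assumes the Transaction array, ArraySize, SqlEof, Print, and Input cursor from the second program, and GetTransaction is a name invented for this sketch. The subprocedure hands the caller one row per call and fetches a new block only when the buffered rows have all been used, so the main routine never sees the blocking or the end-of-file mechanics.

D GetTransaction  pr              n
D  Row                                likeds(Transaction)

D Row             ds                  likeds(Transaction)

 /free
     // Main routine: looks like a simple row-at-a-time read
     dow GetTransaction(Row);
        Print (%editc(Row.Batch:'3') + ' ' +
               Row.Code      + ' ' +
               Row.Order     + ' ' +
               Row.Item      + ' ' +
               Row.Warehouse + ' ' +
               %editc(Row.Quantity:'J'));
     enddo;
 /end-free

P GetTransaction  b
D                 pi              n
D  Row                                likeds(Transaction)
D Loaded          s             10i 0 static inz(0)
D Used            s             10i 0 static inz(0)
 /free
     // Fetch another block only when every buffered row has been handed out
     if Used >= Loaded;
        exec sql fetch next from Input
                 for :ArraySize rows
                 into :Transaction;
        assert (sqlstt <= SqlEof: 'Fetch error');
        Loaded = SqlEr3;        // rows actually placed in the buffer
        Used = 0;
        if Loaded = *zero;      // nothing fetched: end of data
           return *off;
        endif;
     endif;

     Used += 1;
     Row = Transaction(Used);   // hand back the next buffered row
     return *on;
 /end-free
P GetTransaction  e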
Hi, thanks for this, but I'm not sure SqlEr3 is the correct variable to use for the number of rows returned from the block fetch. On a successful fetch this value is always zero. Should we not be using "exec sql GET DIAGNOSTICS :Rows = ROW_COUNT"?
Actually, in my testing, I couldn't figure out why it was duplicating some of the data until I changed it to SqlEr3.
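For anyone who wants to try the GET DIAGNOSTICS alternative raised in the first comment, here is a minimal sketch. It assumes the array-fetch program from the article, with RowsFetched as an additional host variable (a name invented for this example); whether SQLER3 or ROW_COUNT is the right source of the row count is exactly what the two comments above discuss, so treat this as an illustration rather than a recommendation.

D RowsFetched     s             10i 0

 /free
     dou sqlstt >= SqlEof;
        exec sql fetch next from Input
                 for :ArraySize rows
                 into :Transaction;
        assert (sqlstt <= SqlEof: 'Fetch error');

        // Ask the database how many rows the fetch just returned
        exec sql get diagnostics :RowsFetched = ROW_COUNT;

        for Ndx = 1 to RowsFetched;
           Print (%editc(Transaction(Ndx).Batch:'3') + ' ' +
                  Transaction(Ndx).Code      + ' ' +
                  Transaction(Ndx).Order     + ' ' +
                  Transaction(Ndx).Item      + ' ' +
                  Transaction(Ndx).Warehouse + ' ' +
                  %editc(Transaction(Ndx).Quantity:'J'));
        endfor;
     enddo;
 /end-free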