Admin Alert: Reorganizing IBM i Files To Improve Disk Performance, Part 1
June 11, 2014 Joe Hertvik
One of the easiest ways to improve IBM i performance is to make sure all your critical files are reorganized on a regular basis. Files with excessive deleted records take longer to read because each returned record block contains deleted as well as active records. Over the next two columns, I'll present a template for detecting and reorganizing your most active files to improve disk processing efficiency.

Why We Reorg

There are several good reasons for reorganizing files with large numbers of deleted records, including:

1. Conserving disk space–If your disk is filling up, you can retrieve a fair amount of unproductive disk space by squeezing the deleted records out of large files. Using regular file reorganizations, I've seen shops recover between 5 and 10 percent of their usable disk space.

But don't count on getting much of that 5 to 10 percent of deleted record space back on a permanent basis. File reorganization tends to be a temporary disk recovery technique. Depending on how your applications work, you may recover 5 to 10 percent of usable space only to have your applications fill that space up again with more deleted records the next day. Regular file reorganizations can help you reclaim deleted record space, but in my experience, after mounting an aggressive reorganization campaign you'll establish a new, lower baseline for storage. You'll only be able to increase usable disk space by so much using file reorganization, though. At some point you'll hit a limit on how much disk space you can recapture, and you'll have to use other techniques to retrieve more disk space.
Reorganization is just one piece of the puzzle in retrieving disk space. For more information on retrieving IBM i disk space, see the Related Stories section at the bottom of this article.

2. Improving performance–Files with large numbers of deleted records take longer to process. Reading every record in an active file that contains a large number of deleted records (several million, say), or whose deleted-to-active record ratio is 1:1 or greater, can slow down processing every time you read that file. The reason is record blocking: when an application requests a record, IBM i returns a block of records that may contain the next records to be read, avoiding the need for a separate disk read every time a record is requested. If a file contains a large number of deleted records, IBM i can't return as many active records on each disk read. For example, if a block holds 100 records and half of them are deleted, each read returns only 50 active records, roughly doubling the number of disk reads needed to process the file. This means it takes longer to read a file with a high number of deleted records than it does to read a file that is reorganized on a regular basis. Regular reorganizations can help disk read performance and batch job performance for files that are constantly used.

Steps To Effectively Reorganize Files

To benefit from regular file reorganizations, you need to follow these four steps for an effective disk reorganization strategy.
This issue, I'll cover step 1. Next time, I'll discuss steps 2 through 4, including several ways you can reorganize files to increase application efficiency.

Step 1: Set up a procedure for identifying files and members that need reorganization.

Because IBM i is a mature operating system, IBM has given us a number of tools for quickly identifying which files contain a large number of deleted records. These tools have been available for so long that way back in 2003 I wrote a utility called REORGMAP that collates all the files on an IBM i system and lists the number of active and deleted records in each file. It outputs this information to a work file called qgpl/reorgmap that can be mined for deleted record information. A few weeks after I first wrote the utility, I modified REORGMAP to show active and deleted record information for each member in a physical file. The techniques described in those two articles are still valid for i 7.1/6.1 in 2014, and I still use this utility. Taken together, the two articles form a template for creating a file information file that you can use to determine which of your IBM i files need reorganization.

To identify which files need reorganization to get rid of excessive deleted records, do the following.

1. Set up a scheduled job to automatically create the qgpl/reorgmap file information file. Check the links in the Related Stories section below to find my articles that show you how to create this file. Set up the job to run on whatever schedule you'd like.

2. After your qgpl/reorgmap file is created, use the following SQL statement to identify which files have the largest number of deleted records:

SELECT * FROM qgpl/reorgmap ORDER BY mlndtr DESC

Because you're using SQL, you can run this statement with the Start SQL (STRSQL) command, or execute it from an outside application such as Microsoft Access or Microsoft Excel, or from within a custom-written application on another server.
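If you'd rather not download the utility, here is a minimal sketch of the kind of command a REORGMAP-style job runs. This assumes the standard Display File Description (DSPFD) member-level outfile, whose QAFDMBR record format includes the MLNRCD (current records) and MLNDTR (deleted records) fields that the SQL statement above sorts on; my actual utility is more elaborate, and the *ALLUSR library qualifier shown here scans only user libraries.

```
/* Collect active and deleted record counts for every physical */
/* file member in user libraries into the QGPL/REORGMAP outfile */
DSPFD FILE(*ALLUSR/*ALL) TYPE(*MBR) FILEATR(*PF) +
      OUTPUT(*OUTFILE) OUTFILE(QGPL/REORGMAP)
```

Run it under a profile with authority to the libraries being scanned; by default, the outfile member is replaced each time the command runs, so every execution gives you a fresh snapshot.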
The returned SQL list will display which of your system files have the largest number of deleted records, showing all files with deleted records in descending order: the file with the largest number of deleted records is listed first, the file with the second largest number is second, and so on. From here, you can identify which files need to be reorganized.

Coming Soon

Next issue, I'll look at the various ways you can reorganize IBM i physical files; there are at least four.
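As a preview (the methods themselves, including ways to minimize downtime, are next issue's topic), the workhorse command behind most reorganizations is Reorganize Physical File Member (RGZPFM). A minimal invocation, with placeholder library, file, and member names, looks like this:

```
/* Compress the deleted records out of one member.          */
/* A traditional reorganization needs exclusive use of the  */
/* member while it runs, so schedule it for quiet periods.  */
RGZPFM FILE(MYLIB/MYFILE) MBR(MYMBR)
```

If you omit the MBR parameter, the command defaults to the first member of the file.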
Joe Hertvik is an IBM i subject matter expert (SME) and the owner of Hertvik Business Services, a content strategy company that provides white papers, case studies, blogging, and social media services for B2B software companies. Joe also runs a data center and help desk for several companies. Joe has written the Admin Alert column for IT Jungle since 2002.

RELATED STORIES

Admin Alert: Corralling i/OS Storage Hogs, Part 2
Admin Alert: Corralling i/OS Storage Hogs, Part 1
Admin Alert: Determining Which OS/400 Files Need Reorganizing, Part 2
Admin Alert: Determining Which OS/400 Files Need Reorganizing