Reader Feedback on IBM Adds New SSD and Fat SFF Disk to Power Systems
September 20, 2010

Hey, TPM
We are in the process of configuring a Power 720 to replace an aging Power 520, and we need to increase both CPW and I/O. It turns out that two mirrored 2054/1996 features will give us about 20,000 IOPS, versus about 3,000 IOPS for a traditional disk setup. And both end up costing about the same amount.

The disk setup requires a feature code 5615 GX++ card, 30 feature 3677 139 GB hard disk drives, one feature 5796 12X expansion drawer with a feature 6446 interface, three feature 5886 12S disk drawers, and a feature 5908 1.5 GB caching RAID controller (not including cable features). The total I/O drawer stack is $47,000 list, stands 10U high, and requires 220-volt power for the expansion drawers. By using two feature 2054 SAS flash controllers and eight feature 1996 flash drives, we get more than six times the I/O performance for just about the same cost, and the system is contained entirely within the 4U, 120-volt CEC. The minimum monthly maintenance savings come in at about $200 per month.

There are two things to watch out for here. First is capacity: 770 GB versus 3,753 GB. The second is that there is no write cache on the feature 2054 controllers. I am still researching performance before we make a final decision, but the capacity gap will be partially addressed by using feature 1888 small form factor hard disk drives in the CEC for low-use data.

Also, these features do work in a Power 780, according to TPC-C configuration information published April 13, 2010, and revised July 19, 2010. The server was a 9179-MHB base MTM, and it contained three feature 4367s, which is IBM's part number for five PCI-Express adaptors and 20 SSD memory modules. These are the same CCINs used in the 2054/1996 features we are considering. (By the way, it set the TPC-C record for performance!)

I just found a document, the IBM i 7.1 Performance Capabilities Reference, that I expect to be useful as I continue to research the SSD versus HDD-with-write-cache question. Hope this info helps, and keep up the great work you do!
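The comparison above can be sketched as a quick back-of-the-envelope calculation. A minimal sketch using the letter's figures; the SSD option's list price is an assumption (the letter says only that the two setups cost "about the same"), with $47,000 given explicitly for the HDD drawer stack:

```python
# Figures taken from the letter; SSD cost is assumed equal to the HDD
# stack's $47,000 list price, per "about the same amount."
hdd_iops, hdd_cost, hdd_capacity_gb = 3_000, 47_000, 3_753
ssd_iops, ssd_cost, ssd_capacity_gb = 20_000, 47_000, 770

print(f"IOPS ratio (SSD vs. HDD): {ssd_iops / hdd_iops:.1f}x")
print(f"HDD cost per IOPS: ${hdd_cost / hdd_iops:.2f}")
print(f"SSD cost per IOPS: ${ssd_cost / ssd_iops:.2f}")
print(f"Capacity ratio (HDD vs. SSD): {hdd_capacity_gb / ssd_capacity_gb:.1f}x")
```

At these numbers the SSDs deliver roughly 6.7 times the IOPS per dollar, while the disks deliver nearly five times the capacity, which is the trade-off the letter weighs.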
–Dave

Thanks for the insight, Dave. You just stole my next story! I am ginning up some comparisons for various sized systems to show how much it might cost for a given I/O rate or storage capacity. As with past SSDs, I think that for most customers, having a few SSDs and a few fat disks is probably the best course, for both economic and performance reasons. I did mention the TPC-C test in the story I wrote last week, but you must have been so excited by the hardware (as I get) that you missed it. While the SSD/controller combo features can be used with Power 770 and Power 780 servers, they do not go into the CECs, but into the expansion drawers, as far as I know.

–TPM

RELATED STORIES

IBM Adds New SSD and Fat SFF Disk to Power Systems
SandForce SSDs Help Push TPC-C Performance for Power 780
IBM Makes the Case for Power Systems SSDs
Sundry Spring Power Systems Storage Enhancements
Power Systems Finally Get Solid State Disks
New Power6+ Iron: The Feeds and Speeds
IBM Launches Power6+ Servers–Again
IBM Adds New SAS, SSD Disks to Servers
Sundry October Power Systems Announcements
IBM Doubles the Cores on Midrange Power Systems
Various System i and Power Systems i Nips and Tucks
Sundry July Power Systems Announcements