Direct Attached Storage Gets Massive NVM-Express Expansion
April 12, 2023 Timothy Prickett Morgan
The storage area network – a giant wonking storage server with lots of lossless Fibre Channel switching between servers that share access to virtual storage partitions – has been around for several decades now. And despite all of the talk of its ubiquity and its usefulness in driving up storage utilization across a bunch of servers – and therefore helping drive down the cost of that storage while also enhancing its manageability and shareability – many IBM i shops still want to use direct attached storage.
There are reasons for this, of course. In many cases, the IBM i system is a silo unto itself, a database server or a hybrid database and application server that stands on its own and is so precious that it is purposefully isolated from all of the other servers that might otherwise share SAN storage with it. Sometimes, especially with SMBs, the IBM i system represents the totality of the compute that gets done, and a SAN is overkill given the simplicity of the IT infrastructure. (Which is an obvious benefit.)
The world started adopting flash in the datacenter first among the hyperscalers and cloud builders seeking to accelerate their database workloads – Facebook was a pioneer here, as was Google – but because of the high cost of enterprise-grade flash, it took quite a while for flash to become even a thinkable addition to an IBM i system, often complementing local disk arrays. With the advent of the NVM-Express protocol, which treats flash as its own kind of device with its own drivers rather than running it as an emulated disk drive, as was the case with early enterprise flash, the latency of flash dropped bigtime and I/O operations per second soared.
And so, given the performance of the Power10 processor, which debuted in the high-end Power E1080 in September 2021 and which rolled out across entry and midrange machines in July 2022, it was perfectly understandable that IBM made NVM-Express flash the only option for internal storage – what is often called direct attached storage – on these systems. There was still legacy support for external EXP24 drawers loaded up with disk drives or older non-NVM-Express flash (which link to the system over a gussied up InfiniBand bus) and of course customers can also use Fibre Channel links out to SAN arrays and Ethernet links out to NAS arrays.
Now, thanks to a new NVM-Express expansion drawer, which links to the system using point-to-point PCI-Express adapter cards and which was revealed in announcement letter 123-031, customers deploying storage can do so top to bottom with NVM-Express flash.
The new NVM-Express Expansion Drawer, which of course goes by the name NED24 and which is known as feature #ESRO, is what we presume is a 1U expansion drawer (there are no pictures available as yet as far as we can tell) that can support up to two dozen 15 millimeter NVM-Express flash devices. The 15 millimeter Gen3 flash carriers can hold either 15 millimeter or 7 millimeter U.2 flash devices. The expansion drawer is connected to a Power machine through dual CXP converter adapters (features #EJ24 or #EJ2A), which in turn require one of the following cable pair features:
- #ECLR – 2.0 M Active Optical Cable x16 Pair for PCIe4 Expansion Drawer
- #ECLS – 3.0 M CXP x16 Copper Cable Pair for PCIe4 Expansion Drawer
- #ECLX – 3.0 M Active Optical Cable x16 Pair for PCIe4 Expansion Drawer
- #ECLY – 10 M Active Optical Cable x16 Pair for PCIe4 Expansion Drawer
- #ECLZ – 20 M Active Optical Cable x16 Pair for PCIe4 Expansion Drawer
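As a back-of-the-envelope check on what those dual CXP x16 links can carry, here is a minimal sketch. The PCI-Express generation of the host links and the encoding overhead are our assumptions for illustration, not figures IBM has published for the NED24:

```python
# Rough bandwidth math for the dual CXP x16 links to a NED24 drawer.
# ASSUMPTIONS: PCIe 4.0 signaling on the host links and standard
# 128b/130b line encoding -- neither is confirmed in the announcement.

PCIE4_GTS_PER_LANE = 16          # PCIe 4.0 raw signaling rate, GT/s per lane
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding overhead
LANES_PER_LINK = 16              # x16 per CXP converter adapter
LINKS_PER_DRAWER = 2             # dual adapters per NED24 drawer

# Usable bandwidth per direction (1 GT/s is roughly 1 Gb/s per lane)
gbps_per_link = PCIE4_GTS_PER_LANE * ENCODING_EFFICIENCY * LANES_PER_LINK
gbs_per_link = gbps_per_link / 8               # convert Gb/s to GB/s
gbs_per_drawer = gbs_per_link * LINKS_PER_DRAWER

print(f"~{gbs_per_link:.1f} GB/s per x16 link")
print(f"~{gbs_per_drawer:.1f} GB/s per drawer")
print(f"~{gbs_per_drawer / 24:.1f} GB/s per drive if all 24 stream at once")
```

Under those assumptions, each drive gets a healthy share of a fat pipe even with the whole drawer busy, which is the point of the point-to-point topology.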
The resulting PCI-Express setup allows each NVM-Express drive in the NED24 enclosure to be directly linked to the server it is attached to. We don’t know how many NED24 enclosures can be attached to each machine, but obviously the number will be capped by PCI-Express connectivity at most, and IBM may feel compelled to crimp it on smaller machines, as it has done in the past. (We will look into it.) Steve Sibley, vice president and global offering manager for Power Systems at IBM, says that the Power E1080 will scale to 1.2 petabytes of direct attached NVM-Express flash storage, which is a lot of capacity for a database engine.
“Most people will not need quite that much scale,” Sibley tells The Four Hundred. “But this NVM-Express expansion does give them more flexibility. We are also addressing some of the gaps we had with other direct-attached storage, and tape specifically, which is an IBM i issue. So we have two new options, a new SAS adapter, and a new Fibre Channel adapter that allows both high speed adapters to tape but also lower speed Fibre Channel attached to tape if clients still have old tape drives. This fills important backup and recovery gaps for clients.”
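That 1.2 petabyte figure for the Power E1080 pencils out plausibly. The per-device capacity and drawer count below are purely illustrative assumptions on our part – IBM did not break the number down this way:

```python
# Sanity check on the 1.2 PB direct attached flash figure for the E1080.
# ASSUMPTIONS: 6.4 TB U.2 NVMe devices and eight NED24 drawers -- both
# are hypothetical capacity points chosen to illustrate the math.

DEVICES_PER_NED24 = 24
TB_PER_DEVICE = 6.4       # assumed per-device capacity, not an IBM figure
DRAWERS = 8               # assumed drawer count on an E1080, not an IBM figure

total_tb = DEVICES_PER_NED24 * TB_PER_DEVICE * DRAWERS
print(f"{total_tb:.1f} TB ~= {total_tb / 1000:.2f} PB")   # 1228.8 TB ~= 1.23 PB
```

Other combinations of drive size and drawer count land in the same neighborhood; the takeaway is that you get to petabyte scale with a modest stack of drawers.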
The NED24 expansion drawer is supported on the Power L1022 and Power L1024 Linux-only servers as well as the plain vanilla Power S1022, Power S1022s, Power S1024, Power E1050, and Power E1080 servers. And as you will note, the Power S1014, which is a P05-class IBM i machine and which will sell in high volumes, does not support the NED24 drawer, so I am immediately annoyed on your behalf.
The NED24 drawer has a pair of AC power supplies and is supported on the IBM i, AIX, and Linux operating systems as well as on the Virtual I/O Server (VIOS) that is commonly used to virtualize I/O on Power Systems iron. But it is also supported natively – on IBM i 7.4 with the new Technology Refresh 8 and on IBM i 7.5 with the new Technology Refresh 2, to be precise – so you don’t have to use VIOS if you don’t want to. And many, many IBM i shops, particularly the small ones, certainly do not want to.
Pricing for the NED24 enclosure and its cables was not announced, which is also annoying.
Let’s talk about those tape features for a second, which are also outlined in the same announcement letter. Feature #EJ2B is the high profile version and feature #EJ2C is the low profile version of the PCI-Express 3.0 x8 12 Gb/sec SAS tape adapter card for the Power10 servers. This adapter can drive up to four tape drives, and it supports external LTO-7, LTO-8, and LTO-9 tape drives, which are available in IBM’s 7226-1U3 multimedia drawers or as standalone tape units such as the TS2270 and TS2280, as well as in tape autoloaders and libraries such as the TS2900, TS3100, TS3200, or TS4300.
IBM has also announced four-port 32 Gb/sec Fibre Channel optical adapters (features #EN2L and #EN2M) that plug into a PCI-Express 4.0 x16 slot and two-port 64 Gb/sec Fibre Channel optical adapters (features #EN2N and #EN2P) that also plug into a PCI-Express 4.0 x16 slot.
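It is worth noticing that the two new Fibre Channel adapter families land at the same aggregate line rate, so the choice between them comes down to port count and fan-out rather than raw throughput. A quick sketch of the arithmetic:

```python
# Line-rate comparison of the two new Fibre Channel adapter families.
# Four ports at 32 Gb/sec and two ports at 64 Gb/sec both total 128 Gb/sec.

four_port_aggregate = 4 * 32   # Gb/sec, features #EN2L / #EN2M
two_port_aggregate = 2 * 64    # Gb/sec, features #EN2N / #EN2P

assert four_port_aggregate == two_port_aggregate == 128
print(f"Aggregate per adapter: {four_port_aggregate} Gb/sec "
      f"(~{four_port_aggregate / 8} GB/sec)")
```

The four-port card spreads that bandwidth across more links – handy for attaching several tape drives or SAN fabrics – while the two-port card gives each link more headroom.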
The new NVM-Express drawer and the various adapter cards will all be available on May 19.
RELATED STORIES
IBM Tweaks Some Power Systems Prices Down, Others Up
Some Power Systems Hardware Tweaks
How Much Does NVM-Express Flash Really Boost IBM i Performance?
Tweaks To Power System Iron Complement TR Updates
IBM Revamps Entry Power Servers With Expanded I/O, Utility Pricing
IBM Doubles Up Memory And I/O On Power Iron To Bend The Downturn
The Skinny On NVM-Express Flash And IBM i
Power Systems Refreshes Flash Drives, Promises NVM-Express For IBM i
Personally, I run native IBM i (no VIOS) and internal NVMe for production/dev/test data, and I also have Fibre Channel adapters for an external tape library and for utility IASPs on external storage that can be used for copies, day-by-day stuff, scratchpads, and so on.
IMHO it is best to have both internal and external storage access for maximum flexibility.
Production ASPs are on internal NVMe storage: the native access is pretty fast, it reduces external dependencies, and it is controlled by the same people who run the i.
For people who need to buy a new IBM i without an existing external SAN to leverage (Fibre Channel works well, but it is indeed more complex and costly), just buy a server with lots of internal NVMe and stop there. It works great and fast, and it is all under the control of the i.
Many times I see solutions proposed to small customers by VARs made up of a single Power server with a single i partition, but with a couple of SAN switches plus an external storage array (!!!).
Just, as an automotive engineer once said, please “add lightness” 😉 … just propose a good single Power server with a lot of NVMe drives.
Want to sell more stuff to the customer? Just propose a second Power with PowerHA to mirror the primary – a less expensive solution that still provides a good deal of redundancy.