Take A Peek Inside PurePower Converged Systems
June 29, 2015 Timothy Prickett Morgan
When IBM sold off the System x division to Lenovo Group, one of the things that went with it was the modular Flex System chassis. Yes, IBM said it would source the machines from Lenovo and, yes, IBM said that it would continue to make and sell Power-based nodes for the Flex Systems to build its PureApplication converged systems for those customers who want them. But the fact that there are, a year after the Power8 chips first came to market, no Flex System nodes based on the Power8 speaks volumes about how IBM is really thinking about converged infrastructure since the divestiture. And the fact that it quietly launched the PurePower System as part of the May 11 announcements shouts it through a megaphone: going forward, IBM intends for its Power-based converged systems to be built from clusters of more traditional rack-based, scale-out machines.

The PurePower clusters were detailed a bit in announcement letter 115-068, and after some digging around, you will discover in announcement letter 215-222 a statement of direction from Big Blue that the future PureApplication V2.1 systems coming out in the second half of the year will be based on Power8 machinery and specifically on the PurePower architecture. That is another way of confirming, beyond a shadow of a doubt, that the PurePower System is IBM's future for converged systems and the Flex System is not.

So what is this PurePower System, and what does it have going for it? The initial PurePower Systems are made up of Power S822 nodes (which support AIX or Linux) or Power S822L nodes (which only support Linux). The initial setup is available with Red Hat Enterprise Linux, but back in May, Steve Sibley, director of worldwide product management for IBM's Power Systems line, told us that SUSE Linux Enterprise Server and Canonical Ubuntu Server will eventually be supported on these pre-configured clusters.

On the storage front, IBM is using its Storwize V7000 disk arrays in the PurePower System clusters for the moment, but it would not be surprising to see variants with low-end Storwize V5000 or V3500 arrays, or with FlashSystem all-flash arrays, in future setups.

AIX 7.1 will be an option for customers who want to build on IBM's Unix, or run a portion of their workloads on AIX. Both AIX and RHEL were available when the PurePower Systems started shipping on June 19, and the other Linuxes and–lo and behold–IBM i will be coming sometime in the future. IBM has not made any decisions yet, but the IBM i version could be based on Power S824 nodes and Storwize V3500 arrays.

This is not a particularly dense configuration, and it is clearly aimed initially at Linux customers who want to deploy automated clusters to create private clouds. IBM has woven together a management stack based on the OpenStack cloud controller, the PowerVM hypervisor, the PowerVC virtualization management software (itself built on OpenStack), and the open source Nagios monitoring tool. IBM's Hardware Management Console (HMC) is also part of the stack. Big Blue is not using the Flex System Manager, the tool from its former Flex System modular machines that is now owned by Lenovo. Because PowerVC speaks the standard OpenStack APIs, these clusters can be driven with stock OpenStack tooling, as the sketch below illustrates.

IBM has plans to migrate customers from PureFlex and PureApplication setups to the PurePower System clusters, which we will discuss in a future story. The PurePower System setup has up to a dozen server nodes per rack, plus storage and all of the switching to link it all together.
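Since PowerVC presents OpenStack-compatible APIs, standing up a guest on a cluster like this should look much like provisioning on any other OpenStack cloud. Here is a minimal sketch using the Python openstacksdk library; the endpoint, credentials, image, flavor, and network names are all hypothetical placeholders, not values from IBM's announcement letters:

```python
# Minimal sketch: provisioning a guest through PowerVC's
# OpenStack-compatible APIs with the Python openstacksdk library.
# The endpoint, credential, image, flavor, and network names are
# hypothetical placeholders.
import openstack

# Connect to the (hypothetical) PowerVC/OpenStack identity endpoint.
conn = openstack.connect(
    auth_url="https://powervc.example.com:5000/v3",
    project_name="demo",
    username="admin",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Look up a RHEL image, a flavor, and a network by name.
image = conn.compute.find_image("rhel-7.1-ppc64")
flavor = conn.compute.find_flavor("medium")
network = conn.network.find_network("private")

# Boot a guest on one of the Power S822/S822L compute nodes.
server = conn.compute.create_server(
    name="purepower-guest-1",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the guest reaches ACTIVE state, then report.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

PowerVC layers its own placement policies and management interface on top of these APIs, but the basic provisioning flow is plain OpenStack, which is presumably the point of building the stack this way.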
IBM is adamant that it is not going to run Linux on X86 nodes in this system, but the funny bit is that the PurePower System Management Node, outlined in announcement letter 115-067, which manages the cluster, is actually a pair of redundant nodes based on Intel Xeon processors. At some point, all of this management software should be ported to Power so the system can truly be PurePower. (Perhaps the Storwize controller, too.) But I digress.

In a base configuration, the PurePower System has two compute nodes, a Storwize V7000 array, a Brocade SAN48B-5 Fibre Channel switch to link servers to storage, a pair of Lenovo RackSwitch G8052R 48-port Ethernet switches for managing servers and storage, and a pair of Mellanox Technologies SX1710 Ethernet switches, which have 36 ports that can run at 10 Gb/sec, 40 Gb/sec, or 56 Gb/sec and which are used to link the server nodes to each other. The base configuration also includes a rack and the pair of management nodes.

IBM has ginned up full configurations that lean toward compute or storage, as well as a mixed configuration. The maximum compute configuration has a dozen Power S822 or Power S822L nodes and one Storwize V7000 array, while the maximum storage configuration has two compute nodes, one Storwize V7000, and ten V7000 expansion enclosures. The mixed configuration has nine compute nodes, one V7000, and three V7000 expansion boxes.

A few observations. First, as I said, this is not a particularly dense configuration. The Power S822 and S822L are 2U machines, so a dozen of them eat 24U of rack space before the storage, switching, and management nodes are added. For the true scale-out customers, like the enterprises and service providers that IBM is targeting, IBM will need two-socket Power8 systems that do not take up quite so much room in the rack. That will be a bit of a challenge, given how much heat the Power8 chips throw off, but that is the way this game is played. IBM will also have to add FlashSystem all-flash storage to the clusters, and the wonder is why it did not offer this as an option from the start. There are many workloads that run best on all-flash storage, or at least a hybrid of flash and disk.

And when it comes to IBM i customers, this box is clearly aimed more at service providers than at the customers themselves. Only a minority of the IBM i base needs more than one Power S824 machine, much less a dozen of them. It would be interesting to know if IBM will aggressively price the IBM i hardware for service providers, much as it has recently done with IBM i licenses. Keeping the IBM i version within spitting distance, in terms of performance and pricing, is key to the stability of the IBM i ecosystem–and surely Big Blue knows this and will think long term instead of short term.

In the longer run, I expect to see much denser Power8+ and Power9 servers with the PurePower System brand on them, and these machines may in fact be made by IBM's OpenPower partners rather than Big Blue itself. Whether or not such machines will support IBM i remains to be seen, but they should if IBM wants partners to be able to build IBM i clouds.

RELATED STORIES

New Power8 Midrange, PurePower Kicker To PureSystems
IBM Upgrades High-End And Low-End Power8 Machines
The Remaining Power8 Systems Loom
Entry Power8 Systems Get Express Pricing, Fat Memory
OpenPower Could Take IBM i To Hyperscale And Beyond
What's Up In The IBM i Marketplace?
OpenPower Builds Momentum With New Members, Summit
IBM Reorganizes To Reflect Its New Business Machine
Aiming High, And Low, With Power Chips
Power Chips To Get A GPU Boost Through Nvidia Partnership
IBM Will Fill The Hole In The Power8 Line
IBM Rolls Out The Big Power8 Iron
Plotting Out A Power Systems Resurgence
Partners Need To Get Certified–For Power8 And IBM i
Power8 Packs More Punch Than Expected
IBM Readies More Power8 Iron For Launch
Counting The Cost Of Power8 Systems
Four-Core Power8 Box For Entry IBM i Shops Ships Early
Thanks For The Cheaper, Faster Memories
Threading The Needle Of Power8 Performance
Lining Up Power7+ Versus Power8 Machines With IBM i
IBM i Shops Pay The Power8 Hardware Premium
As The World Turns: Investments In IBM i
Doing The Two-Step To Get To Power8
IBM i Runs On Two Of Five New Power8 Machines