The Data Center Is the Computer
March 16, 2009 Timothy Prickett Morgan
This week, networking giant Cisco Systems is jumping into the server business, and everybody is all kerfluffle and kerflooey about what this does and does not mean. If you draw boxes around pieces of electronic gadgetry called servers, storage, and networking, then you can make a big deal about what Cisco becoming a server vendor may or may not mean. And it is fun; I must admit I have been amusing myself as I cover this story in my second life over at The Register. But if you instead draw the box around the data center, and look only at what is coming out of the Ethernet cables, then perhaps you can see the world the way the end user does.

End users have other things on their minds, as do presidents, chief executive officers, bean counters, and small business owners. While these people care a great deal about their applications and the quality of service their IT provides as it runs those applications, they really don’t give a hoot how it all happens, as long as it costs less than last year. For most of us not engaged in the day-to-day battle that is IT, you see, the data center is the computer. As the Web front end for the Internet has so beautifully proven, and will prove even more so as online applications and cloud computing take off in earnest in the coming years, what people really want is for the data center to go back to being where computing happens, and for it to appear as magic, as it did before the minicomputer, client/server, and PC revolutions gave us some autonomy but a hell of a lot more responsibility and aggravation. As Windows Update, Adobe’s updater, and my antivirus software argue over who is in charge of my machine while I try to write this, I know that I am, like many end users, pretty sick of the current state of computing.

A vendor that comes into the data center offering an integrated, complete computing stack, one that embraces virtualization of servers, storage, and networks, that is easy to use, and that runs legacy applications while allowing new kinds of applications to be created and supported, can do well. It can change the nature of computing as we know it, in fact, and that is what Cisco appears set to try starting today with its Unified Computing products. Let’s look at two examples, because I like to make the point that big statements, easily ignored at the time, can make profound changes in the IT landscape.

The database is the computer.

IBM changed everything with the “Pacific” architecture, which eventually became the System/38, launched in August 1979. This box had so many neat and smart features that it is hard to know where to begin.
Let’s start with the 48-bit memory addressing, which was just light years ahead of anything else at the time; a distributed, asymmetrical multiprocessing design that offloaded jobs to cheap co-processors, freeing the central processor to do the hard work; single-level storage, which let programmers stop worrying about shuttling data between disk and main memory, thereby simplifying programming; an integrated relational database management system that was the file system of the box, tightly woven into the RPG and COBOL compilers; and a hardware abstraction layer that virtualized all of the underlying hardware, called the Machine Interface in the System/38 and the Technology Independent Machine Interface in the System/38’s kicker, the AS/400, born in June 1988. That new name stressed the fact that this virtualization layer allowed IBM to change the underlying hardware while the RPG and COBOL applications would compile automagically for the new iron.

To put it plainly: the System/38 was the virtual machine of its time. It virtualized everything that could be virtualized in a computing architecture of that era. And if IBM had had some real forward-thinking people in its vaunted Research Division, they would have long since extended this virtualization to Ethernet networks and Web interfaces. But what do I know?

When the System/38 was announced, it was wickedly more expensive than mainframes of the time in terms of dollars per MIPS; I am talking about orders of magnitude. That made it a niche product. But by the time the AS/400 was launched nearly a decade later, the resulting minis were much less expensive than mainframes and much easier to program, use, and administer. It is no surprise that the AS/400 basically killed off the minis from Hewlett-Packard, Digital Equipment, and myriad others, or that the installed base grew to 275,000 customers by the market’s peak in 1998. These days, a Power Systems i machine is many, many orders of magnitude cheaper, on a per-MIPS basis, than a mainframe, and while the base has shrunk as Windows has taken a big piece of the data center, those AS/400, iSeries, and System i shops that remain usually wish the rest of the machines in the data center behaved as well as these boxes still do.

The network is the computer.

That prescient phrase was coined by John Gage, the fifth employee of Sun Microsystems, who came in right behind the three founders of the company in 1982, Vinod Khosla, Andy Bechtolsheim, and Scott McNealy, and behind Bill Joy, who did a lot of the work on the Berkeley Unix distribution at the university of the same name. Sun may have been started as a workstation vendor, and it may have been profitable from day one selling workstations to people who needed more oomph and graphics than the PCs of the time could deliver, but the world really changed when people started pairing Sun workstations, equipped with Unix and its TCP/IP networking stack (much of which Joy wrote for BSD Unix), with a funky new thing launched in 1985 called the Network File System, created by Sun and open sourced for the world to use. TCP/IP plus NFS made any machine on the network look like your own local storage. From that moment, the ARPAnet stopped having nodes and started being the Internet, and systems started being servers on that network. And Sun made a fortune as the Internet went commercial in the mid-1990s.
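To make that transparency concrete, here is a minimal sketch in Python; the file paths and the /mnt/nfs_home mount point are invented for illustration. The application reads a remote file with exactly the same calls it uses for a local one, and the operating system's NFS client quietly turns those reads into remote procedure calls to the file server.

    # A minimal sketch of NFS transparency. The paths are invented for
    # illustration; assume /mnt/nfs_home is a directory mounted from a
    # remote file server over NFS.

    def read_notes(path):
        # Plain local-file I/O: open() and read(), nothing network-aware.
        with open(path) as handle:
            return handle.read()

    local_copy = read_notes("/home/tpm/notes.txt")            # local disk
    remote_copy = read_notes("/mnt/nfs_home/tpm/notes.txt")   # NFS mount

    # The application code is identical in both cases; the kernel routes
    # the second call to its NFS client, which turns each read into an
    # RPC to the server, so the remote disk simply looks local.

That is the whole trick, and it is why NFS spread so fast: no application had to change a line of code to start using the network as its disk.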
Of course, Sun could not lock down the idea of Internet servers and keep it all to itself any more than IBM could keep the ideas of an integrated database management system and ease of use to itself. And Cisco is by no means going to be able to create, all by itself, a hardware and software stack that takes over the data centers of the world. But the entry of Cisco into the server space is no doubt going to shake things up and make all of the server and storage makers really focus. I expect to see a lot of reactionary partnerships in the wake of the Unified Computing announcements today. The smart vendors will not do exclusive partnerships; they will play as far and wide as they can afford to, and thereby face the largest addressable market. But there will be combinations of products that work best together, and it will take a keen eye to sort this out, because companies are not going to want to give away any competitive edge in this down economy.

So what is Unified Computing? Well, Cisco has kept a pretty tight hold on the details, and not much is really known. Cisco is expected to launch its own blade servers, code-named “California,” with beefier memory capacity than competing products, making them suitable for heavy server virtualization. These are expected to be X64 boxes, and they are expected to run VMware‘s ESX Server hypervisor and related management tools. The Cisco stack will also include the Nexus 1000V virtual switch, which takes the Cisco IOS switch and router operating system and drops it into a virtual machine, presumably running on the blades. This switch is called virtual not because it runs in a virtual machine, but because it is a master software switch that all of the VMs on the servers talk to, rather than talking directly to the physical Ethernet switches linked to the physical servers. The VMs link to the virtual switch, and the virtual switch links to the physical switches, tying the physical and virtual servers together. (A sketch of what such a switch policy might look like appears at the end of this story.) It is yet another abstraction layer, and one that is needed, because when you live migrate a virtual server from one physical machine to another today, it breaks the network link unless you buy some I/O virtualization appliance. That’s dumb. The Cisco stack will also include its high-end Nexus 5000 switches and, reportedly, some management software from BMC Software.

Now, notice what this doesn’t include? No Power Systems and PowerVM logical partitions. No IBM mainframes and LPARs. No HP Integrity or Sun Sparc servers. Whatever the Unified Computing strategy is, it will not truly be unified until it can take control of these assets and integrate them as well. Otherwise, the California server is going to end up being a niche product, even if the strategy is useful. Cisco surely knows all of this, but it may be trying to just go for the volume X64 market. Among shops that use nothing but X64 iron these days, Cisco and its partners will probably get their foot in the door, at the very least. But the opportunity is still there to do true unified computing, spanning servers, storage, and networking across all architectures, all switch products, and all storage. That is what we really need today. We’ll see if someone has the courage and the skills to pull such a feat off. Maybe that’s Cisco’s long-term plan. Maybe not. We’ll see.
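A postscript for the networking-minded: Cisco has not published the Unified Computing particulars, but the Nexus 1000V, which has been public since late 2008, expresses network policy as port profiles that follow a VM from host to host, which is exactly what keeps a live migration from breaking the network link. The sketch below is in that NX-OS style; the profile name and VLAN number are invented for illustration, so treat it as a sketch of the idea rather than a description of the California setup.

    ! A sketch of a Nexus 1000V port profile. The policy travels with
    ! the VM rather than being pinned to a physical switch port; the
    ! profile name and VLAN number are invented for illustration.
    port-profile type vethernet WebTier
      vmware port-group             ! shown to VMware vCenter as a port group
      switchport mode access
      switchport access vlan 100
      no shutdown
      state enabled

The point is that the network team defines the profile once, in a familiar IOS-like syntax, and every VM attached to it keeps its policy no matter which physical blade it lands on.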
RELATED STORIES

Dell, HP, Sun, IBM unmoved by Cisco blades (The Register)
VMware takes EMC ‘beyond virtual servers’–The 21st century software mainframe (The Register)
IBM not worried about Cisco blades (The Register)
Getting Dizzy from Dynamic Infrastructure
IBM’s Dynamic Infrastructure Announcement Blitz
Cisco ‘California’ blade server launch imminent? (The Register)
Cisco To Get Into Blade Servers?