InfiniBand Finds Its Place in the Data Center
May 29, 2007 Timothy Prickett Morgan
It is tough for any new networking or peripheral interconnect to break into the computer market, and it is particularly tough for a new technology that seeks to be all kinds of connectivity for all kinds of devices. So it has been for InfiniBand. But InfiniBand has found its niches in the data centers of the world and has become an established technology for workloads that demand the highest bandwidth and the lowest latency.

When the Future I/O spec from IBM, Hewlett-Packard, and the then-independent Compaq was merged with the Next Generation I/O project from Intel, Sun Microsystems, and Microsoft to create what became InfiniBand, it certainly did look like InfiniBand would become the default technology for linking servers to each other, to storage arrays, and maybe even to end users working at their PCs. It has been nearly a decade since the foundations of InfiniBand were laid, and while InfiniBand did not take over the world, the InfiniBand Trade Association says that the proponents of the technology are committed to keeping InfiniBand ahead of the competition in terms of bandwidth and low latency.

But the members of the organization are also keenly aware that other technologies evolve and persist, too. “Despite the fact that you can do storage connectivity in many different ways, Fibre Channel persists,” explains Jim Pappas, who co-chairs the IBTA with IBM and who is director of initiative marketing at Intel’s Enterprise Platforms Group during his day job. “There will be co-existence of different technologies. But when it comes to latency, I don’t think any technology will touch what InfiniBand does.”

InfiniBand adapters and switches have been able to provide 10 Gb/sec links for years, and have been providing 20 Gb/sec bandwidth for about a year. The InfiniBand roadmap calls for 40 Gb/sec single links, the so-called Quad Data Rate (QDR) InfiniBand, to be available from vendors by the second half of 2008; some vendors are already demonstrating these products, in fact. At around the same time, InfiniBand vendors will be able to gang up multiple links to provide 120 Gb/sec connectivity. This high-speed connection is expected to be used mostly for linking InfiniBand switches to each other, with 40 Gb/sec links being used to link into servers.

Pappas says that InfiniBand is being used predominantly in high performance parallel server clusters these days, running the usual suspects of supercomputing workloads for modeling physical, chemical, and mechanical phenomena. But increasingly, companies are deploying clusters in their data centers for financial modeling, data warehousing, and other parallel applications relating to the business, and they are looking for high-bandwidth, low-latency links to improve the performance of these clusters.

Because of this expanded use of InfiniBand beyond academic and government supercomputing centers, the analysts at IDC reckon that sales of InfiniBand host channel adapters will rise from $62.3 million in 2006 (about 124,000 HCAs) to $224.7 million by 2011 (935,600 HCAs), which works out to a compound annual growth rate of 29.3 percent over those five years. Sales of InfiniBand switches are expected to be larger and grow faster, too, according to IDC. The analysts say that the relatively few InfiniBand switch providers sold $94.9 million in gear in 2006, but switch demand will grow so much that by 2011 sales will hit $612 million. That’s a 45.2 percent compound growth rate.
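Those compound growth rates are easy to sanity-check. The short Python sketch below recomputes both figures from the IDC revenue numbers quoted above; the formula and dollar amounts come from the article, while the function name and the script itself are purely illustrative.

```python
# Sanity-check the IDC compound annual growth rates quoted above.
# CAGR = (end_value / start_value) ** (1 / years) - 1

def cagr(start_value, end_value, years):
    """Compound annual growth rate between two revenue points."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# IDC figures from the article, in millions of dollars, 2006 -> 2011 (five years)
hca_growth = cagr(62.3, 224.7, 5)      # InfiniBand host channel adapters
switch_growth = cagr(94.9, 612.0, 5)   # InfiniBand switches

print(f"HCA revenue CAGR:    {hca_growth:.1%}")     # about 29.2%, in line with the 29.3 percent quoted
print(f"Switch revenue CAGR: {switch_growth:.1%}")  # 45.2%, matching the article
```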
Of course, people being essentially stingy and all, Ethernet is still going to be the volume networking medium for system-to-PC and system-to-system links, and Ethernet and Fibre Channel are going to fight rather aggressively over storage. But for those companies that need the most oomph and do not mind paying the premium, InfiniBand is going to be the connection of choice.

RELATED STORIES

InfiniBand Gets iSCSI Tweaks to Support Storage

OpenIB Alliance Expands Beyond InfiniBand Protocol to RDMA

Cisco Buys InfiniBand/Virtualization Specialist Topspin for $250 Million

Linux Gets Native InfiniBand I/O Support

InfiniBand Backers Take Protocol Stack Open Source