Tape Backup: Obviously, a Whole Lot Greener than Disk Backup
June 2, 2008 Timothy Prickett Morgan
Some technologies are very hard to kill. Long before there was disk, there was tape, and the ridiculously cheap disk capacity available today has allowed many companies to do their archiving on disk-based systems that look like tape to servers even if they are not. But disk-based archiving comes at a very high operational cost. Disks need to spin to be useful, while a tape with data archived on it pretty much stops using power until it is specifically required to serve up a bit of data for an application or to archive another data set. The death of tape has been predicted many times in the past, probably as often as that of the mainframe it was initially married to. But tape still has its uses, and if history is any guide, it will continue to.

The Ultrium consortium, the group of tape drive and array makers backing the high-density Linear Tape-Open (LTO) tape technology that pretty much owns the data center these days, is touting a new report put out by The Clipper Group that compares the cost of disk-based archiving against tape archiving, including not only the cost of the media itself but also the energy costs associated with using it. (You can read the report for yourself at the Ultrium consortium site.) The bottom line, according to analysts David Reine and Mike Kahn, is that the cost of long-term storage on an array of SATA disks is 24 times as high as it is on LTO-4 tape, and the energy costs for the disk backup are 290 times higher.

You might think that with disk drives getting fatter and arrays getting less expensive, a disk-to-disk (D2D) backup setup would be more appealing than a tape library. “We began this study to see if the decreasing costs of disk subsystems and the increasing capacity of disk drives, especially second-tier SATA storage, might have made the TCO for disk more attractive versus tape in the long-term storage of data,” the two analysts explained in the report. “It did not. We thought that the cost of energy would be a noticeable factor in favor of tape. And it is.”

Clipper Group started with an initial data set of 50 TB and assumed that the capacity requirements for the midrange shop being simulated would grow by 50 percent per year. The data center did daily incremental backups to disk and tape, retaining them on disk for a quarter and then moving them out to disk array archives or a tape library. The cost comparison that Clipper Group did covered only the cost of these archives, not the initial disk arrays, over a five-year term. While electricity in major metro centers now runs around 20 cents per kilowatt-hour, Clipper Group chose 12 cents per kilowatt-hour, which is closer to the rate corporations pay in rural areas. (But certainly not the residential rate we pay in our homes, which is higher.) To simplify things, the cost of electricity was held constant over the five years, though it seems unlikely that this will happen out here in reality.

The price differences between the two solutions, D2D or tape library, are jaw dropping. The five-year cost for the D2D approach is $51.7 million, with $49.9 million of that coming from the hardware and software, $1.2 million from electricity, and just under $600,000 from space. (Machines have to pay rent, too.) The tape library to do the same backups costs $1.79 million, with only $344,250 coming from space costs and a ridiculously small $3,416 in energy costs over five years.
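For those who want to poke at the math, here is a minimal Python sketch of the comparison as quoted above. The year-by-year growth calculation and the hardware line item for the tape library (backed out of the $1.79 million total) are my own illustrative assumptions, and because the published totals are rounded (and the report's headline ratios were computed from its finer-grained figures), the ratios printed here come out somewhat higher than the 24X and 290X cited above.

```python
# Back-of-the-envelope model of the Clipper Group comparison, using the
# figures quoted in this article. The growth model and the tape hardware
# line (inferred as the remainder of the $1.79 million total) are
# assumptions for illustration only.

INITIAL_TB = 50.0      # starting data set for the simulated midrange shop
ANNUAL_GROWTH = 0.50   # capacity requirements grow 50 percent per year
YEARS = 5              # term of the study; 12 cents per kWh held flat

# Capacity the archive tier has to absorb in each year of the study
capacity_tb = [INITIAL_TB * (1 + ANNUAL_GROWTH) ** year for year in range(YEARS)]
print("Capacity by year (TB):", [round(tb, 1) for tb in capacity_tb])

# Published five-year totals for the two archive options, in dollars
d2d = {"hardware_and_software": 49_900_000, "energy": 1_200_000, "space": 600_000}
tape = {"hardware_and_software": 1_442_334,  # remainder of the $1.79M total
        "energy": 3_416,
        "space": 344_250}

d2d_total = sum(d2d.values())    # roughly $51.7 million
tape_total = sum(tape.values())  # roughly $1.79 million

print("D2D five-year cost:  $%.1f million" % (d2d_total / 1e6))
print("Tape five-year cost: $%.2f million" % (tape_total / 1e6))
print("Cost ratio, disk over tape:   %.0fX" % (d2d_total / tape_total))
print("Energy ratio, disk over tape: %.0fX" % (d2d["energy"] / tape["energy"]))
```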
What disk-based archiving solution providers have figured out, and what disk array makers are thinking about too, is how to design a disk array that can quiesce disk arms and platters when they are not needed. Of course, with random access to data, this is something of a challenge, and probably not the right engineering solution. That is why I suspect that flash memory arrays will be making waves as a third option for nearline archiving as soon as reasonably fat flash memory chips are available. Last week, for instance, Intel announced a 32 gigabit NAND flash memory chip that uses the company's most advanced 34 nanometer chip making processes. These chips, as well as similar 32 gigabit flash chips from Toshiba and Samsung, are going to allow drive makers to offer solid state disks in a 1.8-inch form factor with a 256 GB capacity. Let that sink in for a while.

And now consider that flash uses a lot less power, is more reliable than disk thanks to the lack of moving parts, and delivers anywhere from one to two orders of magnitude more I/O operations per second (IOPS) than regular disk drives. So on I/O bound workloads, like transaction processing, a mix of relatively expensive flash drives backed up by tape for archiving could be very energy efficient and cost effective compared to a disk array setup with disk-based archiving. Flash provides the performance and tape provides the price/performance. Flash-based drives have to ship in volume and prices have to come down before this can happen, but with the world moving to laptops, the transition from disks to flash drives could happen pretty quickly, bringing costs way down as volumes go way up.

RELATED STORIES

IT Shops Consume 2 Million LTO Tape Drives
IBM Introduces Half-Height LTO 3 Tape Drive
IBM Rolls Out LTO 4 Tape Drives and Libraries
Asigra Debuts Remote, Agent-Less Backup for iSeries
HP Buys Clustering Software Maker, Launches D2D Backup Solution
LTO Tape Drives a Smashing Success
Idealstor Adds CDP to Backup Repertoire That Includes ‘Ejectable’ Disks
Unitrends Adds OS/400 Support to D2D Backup Appliances