Sandy Shows State Of IT Resilience Has Room For Improvement
December 10, 2012 Alex Woodie
Superstorm Sandy rearranged parts of the Eastern Seaboard a month and a half ago, but she is still remaking the business landscape. Some companies were ruined by Sandy, while some industries in the area, such as construction and car sales, are now flourishing. The National Hurricane Center changed how it categorizes storms as a result of Sandy, which many see as the model for a new type of storm powered by global warming. Sandy also poked holes in the disaster recovery (DR) plans of many IT organizations, and highlighted where improvements can be made.

Some would say it was inevitable that Superstorm Sandy would cause huge disruptions in the lives and businesses of those on the East Coast. It is true that any people or property in the coastal strip that became Sandy's floodplain would be inundated with water, or consumed by fire in the case of Breezy Point. The near-total destruction that Sandy wrought on the coast is a compelling reminder that while humans can build extravagantly for a time, Mother Nature is firmly in charge of the physical world.

Software, on the other hand, does not have to play by the same rules. Thanks to networks, data replication, and virtualization technology, data and applications can be moved anywhere in the world. A disaster in one region doesn't have to spell disaster for an organization that has sufficiently virtualized, cloned, and replicated its IT resources to multiple locations. Don't think for a moment that the data owned by the Atlantic City casinos was tethered to Power Systems servers.

While Sandy destroyed some businesses, she also rewarded those companies that had planned for disasters and protected their IT assets. Businesses in the Northeast were generally more prepared for Sandy than businesses along the Gulf Coast were for Hurricane Katrina in 2005.
The Northeast typically sees bad weather over more of the year than the Gulf Coast does, and Hurricane Irene's impact on the Northeast in August 2011 seemed to have spurred businesses to be better prepared.
But Superstorm Sandy was not your "typical" hurricane, and she brought new dangers that will shape DR planning. For starters, the sheer size of Sandy, with hurricane-force winds extending out 1,100 miles from her center, was unprecedented in modern meteorological history. At the same time that lower Manhattan was flooded and the West Virginia mountains were covered in snow, Sandy was kicking up huge waves on Lake Michigan. Companies with a secondary server or data center 500 to 1,000 miles away from their primary site may want to rethink that distance, and increase the geographic separation, if they want to survive another super-regional event like Sandy (see the graphic above, courtesy of uptime monitoring firm Pingdom, for other places you may not want to put your server).

The power outages and the subsequent run on generators and fuel were another "Sandy surprise." Nobody expected the electricity to stay on throughout the storm, but many areas were without power for several weeks. And when the power went out, businesses expected their Internet service providers (ISPs) and managed service providers (MSPs) to turn on the generators and keep the data centers running. But when the ISPs and MSPs ran out of fuel, or worse, when the generators themselves were flooded due to poor siting decisions, the services crashed. For companies that were using an MSP as their secondary site for DR purposes, the MSP's lack of planning became their own problem.

Sandy will spur more organizations to adopt technologies such as high availability to cope with the next disaster, predicts Simon O'Sullivan, a vice president with IBM i high availability software company Maxava. "It's been a huge wakeup call for a lot of businesses up that way," he told IT Jungle recently. "I think a lot of businesses are looking at themselves and saying, 'I might be out for two or three or four days.
But in a situation like this, can I be out for two weeks, and lose my systems and my data?'"

State of Resilience

The impact of Sandy is also being studied by Vision Solutions, which assisted with more than 70 customer failovers in the immediate aftermath of the storm. The company tomorrow will unveil its fifth annual State of Resilience report, which highlights how big disasters like Superstorm Sandy are spurring a migration away from older technologies like tape toward newer, more resilient technologies, such as logical replication and hardware-based mirroring.

Tape remains the most popular way to protect data, with a use rate of 74 percent in Vision's 2012 survey, which was conducted online in October and November and involved 513 participants. The affordability and mobility of tape have made it the de facto DR standard for decades. However, use of tape has declined by 6 percent since 2010, according to Vision's surveys. The surveys suggest that tape's decline is due to more widespread use of newer technologies that offer faster recovery times, simpler administration, and better protection against regional disasters.

The survey points to software-based replication as the big beneficiary of tape's decline over the past two years. Use of software replication plus failover (what's generally referred to as high availability software, and what powers Vision's MIMIX, ODS, and iTera brands) jumped from about 35 percent in 2011 to just under 50 percent in 2012. Similarly, software replication without failover (what's often referred to as continuous data protection, or CDP) increased too, from just under 30 percent to about 35 percent, Vision's survey shows.

Doug Piper, vice president of product strategy for Vision Solutions, said that one conclusion to draw from the report is that in 2012 more companies recognized the value of high availability.
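The distinction the survey draws, between replication with failover (high availability) and replication without it (CDP), can be sketched in miniature. The following toy Python example is not modeled on any vendor's product; all class and field names are hypothetical. Both modes apply the same stream of journal entries to a replica, but only the failover-capable mode can promote that replica to production when the source dies:

```python
# Toy illustration of the HA-vs-CDP distinction; names are invented,
# not taken from MIMIX, iTera, or any other real product.

class Replica:
    """A standby copy of production data."""
    def __init__(self):
        self.data = {}
        self.role = "backup"

    def apply(self, entry):
        key, value = entry
        self.data[key] = value


class ReplicationTarget:
    """CDP-style: continuously replicate, but recovery is a manual process."""
    def __init__(self, replica):
        self.replica = replica

    def on_journal_entry(self, entry):
        self.replica.apply(entry)


class FailoverTarget(ReplicationTarget):
    """HA-style: replicate, and also promote the replica on source failure."""
    def on_source_failure(self):
        # Users and applications are redirected to the promoted replica.
        self.replica.role = "production"


replica = Replica()
target = FailoverTarget(replica)
target.on_journal_entry(("ORDER-1001", "shipped"))  # normal operation
target.on_source_failure()                          # disaster strikes
print(replica.role, replica.data)  # production {'ORDER-1001': 'shipped'}
```

The point of the sketch: both modes keep the replica current, but only the failover step turns a current replica into a short recovery time. With a CDP-style target, the data survives, yet someone still has to perform the recovery by hand.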
"Survey results showed that IT professionals are being propelled to implement 24/7 access to data and applications," he says via email. "This directive puts additional pressure on IT to provide continuous access to servers, data, and applications, even in the event of disasters, which can take a whole data center offline for days or even weeks. The jump in the use of logical replication plus failover is consistent with this directive, since the failover component is essential to delivering fast RTO (recovery time objective)–and it's the RTO that enables them to deliver on the promise of 24/7 access, even in the face of crippling disasters that can result in unpredictable, lengthy outages."

Storage-based replication has increased too, although only by a couple of percentage points over the last three years. Clustering has increased by a greater amount, from just over 31 percent in 2010 to about 40 percent this year. Clustering is more widely used in Windows and Linux environments, although it is also used in IBM i environments, where it's incorporated into storage-based replication offerings such as IBM's PowerHA, which replicates iASP contents in IBM storage arrays using cross-site mirroring (XSM), Metro Mirror, Geo Mirror, and FlashCopy technology. Virtual tape library (VTL) use remained steady at about 20 percent.

The future of DR lies in hybrid setups, Vision says. Organizations increasingly are looking at a blended approach that combines physical, virtual, and cloud technologies. The big challenge is to get these three phases of IT to work together in an integrated, uncomplicated manner, according to the company.

Vision makes a note in its survey about the special role that virtualization technologies play in DR. As the gateway technology that allows organizations to move their IT assets to the cloud, virtualization also provides a way for organizations to create backup server environments in the cloud.
However, this capability to create an alternative DR configuration in an MSP's cloud data center is "underplayed," Vision says. "Disaster recovery in the cloud could be the next big push for IT organizations," the company states in its report.

That is a change from the report Vision issued a year ago, when it said that the data protection schemes organizations were putting in place to protect data residing in virtual servers and in the cloud were complicating and compromising overall DR preparedness. Perhaps this signals a change in strategy for Vision, which up to this point has not pushed strongly into cloud-based HA and DR, instead leaving it to business partners.

Vision will present its report, The State of Resilience: Navigating the New Landscape for IT Systems: Physical, Virtual, and Cloud, in a webcast tomorrow at 11 a.m. ET. For more information or to register, see the company's website at www.visionsolutions.com.

RELATED STORIES

Superstorm Sandy Puts DR Plans To The Ultimate Test

Cloud and Virtualization Hurting State of Resiliency, Vision Study Finds