Cloud and Virtualization Hurting State of Resiliency, Vision Study Finds
December 6, 2011 Alex Woodie
Organizations need to do a better job of understanding the potential downsides that cloud computing and virtualization can have on their ability to maintain the resilience and availability of their applications and data, Vision Solutions concluded in its State of Resilience 2011 report, which it unveils today in a Webcast at 10 a.m. CST. The overall state of resilience has suffered as a result of the increased complexity introduced by the different IT protection schemes put in place to cover virtualization and cloud.

For the fourth year in a row, Vision Solutions and its Information Availability Institute have published a State of Resilience report that showcases the findings of hundreds of surveys conducted with Vision customers and prospects over the last year. Almost 6,300 people participated in this year's survey, representing organizations from more than 100 countries. Vision sliced and diced its findings by company size, physical machines versus virtual machines, cloud versus on-premise, and by all the most popular platforms in the server rainbow, including IBM i, AIX, Linux, HP-UX, Solaris, and Windows.

Compared to past surveys, this year saw little change in the technologies IT shops are employing to protect their data and applications. Tape backup and offsite storage continues to be the predominant data protection strategy, with 35 percent of all respondents saying they use tape. Storage-based replication (such as IBM's PowerHA for IBM i and some of Vision's Double-Take products) is the second-most implemented data protection scheme, used by about 15 to 20 percent of respondents. Logical (software-based) replication plus failover, such as Vision's MIMIX and iTera HA offerings, was the next most popular data protection category, and is used by approximately 15 percent of respondents.
Logical replication without failover was used by about 10 percent of companies, and clustering saw a range of 8 to 18 percent (with larger companies using it more), while virtual tape libraries (VTLs) were reported to be used by about 5 percent of smaller companies, but about 12 percent of large enterprises.

The report covers a lot of ground in 35 pages, but what stands out the most is the slipping level of application and data resiliency across all customer segments. Compared to last year, about 3 to 5 percent fewer respondents reported having the highest level of confidence in their disaster recovery (DR) strategy (a 90 to 100 percent confidence level). More respondents reported having a 25 to 50 percent confidence level in their DR plans, which is to say they aren't confident at all. The same slippage occurred in last year's survey. Interestingly, more people reported having recovery time objectives (RTOs) of six hours or less in this year's study, while at the same time there was a slight decline in the most aggressive recovery point objective (RPO) of "no data loss," and more overall tolerance for losing minutes' and hours' worth of data.

Bad Cloud Rising

Two of the biggest trends hurting business resiliency today are virtualization and cloud computing, says Vision chief technology officer Alan Arnold. Many organizations are being heavily pushed to adopt cloud computing to cut costs and increase the company's adaptability, but in too many cases they're failing to understand the negative repercussions that cloud computing and virtualization can have on their systems. "Everybody's talking about cloud and virtualization, and how do you implement that as part of a resiliency strategy," he tells IT Jungle. "They get the calls from all the vendors telling them it's the best thing you could ever get, it's the cheapest thing you could ever plug in, so why wouldn't you do this?
But they need to separate the facts from the FUD."

The fact is that cloud computing and virtualization, which Arnold perceives as a continuation of the decade-long server consolidation trend, often make IT systems more complicated. Consider the case of a typical company that has a mix of interconnected platforms and adds a cloud provider. "Most people have a server at their company that they're doing certain stuff on, and then they have a cloud provider, and the reality is information is flying back and forth all the time and needs to be managed."

But when one of the pieces breaks, the interdependency can complicate the recovery effort. "How do you know about that particular system, that instance, and the interdependencies with all the other machines that are out there?" he asks. "Has the data transformed along the way, and what was the snapshot at the point of failure? How do you recover that? Add virtualization and cloud on top of that, and it makes it even that much more complicated."

Arnold emphasizes the critical importance of having hardware resources at the ready to step in and run workloads when the primary system goes down. If a customer has consolidated its major apps onto a handful of virtualized Wintel servers that are running at mainframe-like utilization rates of 70 to 90 percent, it doesn't have the option of simply spinning up another VM on that machine when the primary partition hiccups. A second, similarly outfitted machine, ideally located at a geographic distance from the primary, is still the gold standard for true operational resiliency, and no amount of virtualization can change that.

Further complicating the matter is the way some cloud computing providers promote the resiliency of their solutions. When Amazon's cloud service went down earlier this year, two companies that relied on Amazon to run their apps went out of business after Amazon lost their data, Arnold says. "How did that happen?
Amazon is one of the biggest cloud providers in the world today. How did they not have a backup and not be able to recover?"

Organizations need to separate the wheat from the chaff when it comes to resiliency claims made by cloud providers. "You need to make sure you're balancing and understanding everything you're getting," Arnold says. "If you think you signed up for a cloud type of solution that's going to give you something, all you may have done is signed up for moving data to another location, but you may not be able to run from there. And how long will it take you to move data back? Will you expose your data on a public network, and are you allowed to do that? The cloud brings up a lot of questions that people need to start asking."

To register for today's Webcast on Vision's State of Resilience report, go to www.visionsolutions.com/WebForms/EventRegistration.aspx?ScheduleId=a036000000FeFNA.

RELATED STORIES

Virtualization is Hurting DR Preparedness, Vision Says

Companies Take a Step Back in DR Readiness, Symantec Report Finds

Vision Sees Positive Trends for HA/DR in Second 'State of Resilience' Report

Midrange Shops Not As Protected from Disaster As They Think, Vision Finds