Software-defined storage does not have to be a rollercoaster ride


Thanks to VMware’s vision of the software-defined data center (SDDC), software-defined anything is one of the leading buzzwords today, and software-defined storage (SDS) is no exception. SDS feels like a big shift in the storage world, but the good news is that the transition is much smoother than it sounds. Let us take a closer look. What is SDS? There are many vendor-specific definitions and interpretations, and industry analysts have their own versions too. So let us focus on what matters most: the characteristics and benefits generally expected from SDS. There are four pillars.

  1. Abstraction: The data plane and control plane are separated. In other words, storage management is decoupled from the storage itself. Customer benefit: flexibility to solve storage and management needs independently.
  2. Backend heterogeneity: Storage is served by any kind of storage from any vendor, including commodity storage. Customer benefits: freedom of choice for storage platforms and avoidance of lock-in.
  3. Frontend heterogeneity: Storage is served to any kind of consumer (operating systems, hypervisors, file services etc.). Customer benefits: freedom of choice for computing platforms and avoidance of lock-in.
  4. Broker for storage services: SDS brokers storage services, no matter where data is placed and how it is stored, through software that translates those capabilities into storage services that meet a defined policy or SLA. Customer benefits: simplified management, storage virtualization, and value-added data services through vendor or customer innovations.

Three of the four pillars are needed to qualify as a software-defined storage solution: pillars 1 and 4 are must-haves, and once you have those two you need either 2 or 3. The reality is that the SDS movement started a long time ago. Let us use some examples to understand SDS implementations.

Oracle Automatic Storage Management (ASM): Although Oracle seldom markets ASM as an SDS solution, it happens to be a great example of SDS. It is purpose-built for Oracle databases. It features all four pillars, and storage is entirely managed by the application owner (the Oracle DBA). Pillar 3 is questionable here: the solution does run on multiple OS platforms, but it serves just one type of workload, so that pillar is not truly delivering frontend heterogeneity.

Veritas InfoScale: Formerly known as Veritas Storage Foundation, Veritas InfoScale is perhaps the most successful, heterogeneous, and general-purpose SDS solution. While it is still widely in use, it is a host-based SDS solution (the pillars are built on top of the operating system) and hence not a good fit for the virtualized world.

VMware Virtual Volumes (VVOLs): VMware VVOLs are purpose-built for VMware vSphere and hence lack pillar 3. VVOLs shine with the other three pillars, and a virtual infrastructure admin can manage everything from a single console.

Now that we have covered the characteristics of SDS, let us look at the bigger picture as an IT architect. The great thing about SDS solutions is the interoperability to build the right solution for each workload and solve constantly changing storage needs. You can be quite creative (and, of course, even go crazy!) with the things you can build with SDS Lego blocks.

You can deploy Oracle ASM on top of Veritas InfoScale so that DBAs benefit from both: ASM enables Oracle DBAs to manage their own storage, while InfoScale brings centralized management for storage administrators.

How about the virtual server environment where Veritas InfoScale falls short? Bring storage LUNs directly into vSphere hosts for a VMFS experience so that administrators enjoy the benefits provided by VMware. Are virtual machine infrastructure administrators getting ready to manage storage on their own? Give them the array plugin for the VMware vSphere Web Client, or prepare them for the storage vendor’s VASA provider to get ready for VVOLs!

The main takeaway is simply this: SDS is a blessing for IT architects who want to solve storage puzzles elegantly. It has been here for a long time, and it is constantly evolving with market-inspired innovations. Transitioning to SDS is relatively smooth.

Disclaimer: The opinions here are my own. They do not reflect those of my current or previous employers.

Did Rubrik make Veeam’s Modern Data Protection a bit antiquated?

Veeam Antiquated?

Veeam actually trademarked the term Modern Data Protection™. No, I am not joking. It is true! Veeam started with a focused strategy: it would do nothing but VMware VM backups. Thankfully, VMware had done most of the heavy lifting with the vStorage APIs for Data Protection (VADP), so developing a VM-only backup solution was as simple as writing a software plugin for those APIs and building a storage platform to keep the VM copies. With a good marketing engine, Veeam won the hearts of virtual machine administrators, and it paid off.

As the opportunity to reap the benefits of being a niche VM-only backup vendor started to erode (intense competition, and a low barrier to entry on account of VADP), Veeam is attempting to reinvent its image by exploring broader use cases like physical systems protection, availability and so on. Some of these efforts make it look like its investors are hoping for Microsoft to buy Veeam. The earlier wish to sell itself to VMware shattered when VMware adopted EMC Avamar’s storage to build its own data protection solution.

Now Rubrik is coming to market, attacking the very heart of Veeam’s little playground and making Veeam’s modern data protection a thing of the past. Rubrik’s market entry is also through VMware backups using the vStorage APIs, but with a better storage backend that can scale out.

Both Veeam and Rubrik have two high-level tiers. The frontend tier connects to vSphere through the VMware APIs; it discovers and streams virtual machine data. Then there is a backend storage tier where the virtual machine data is stored.

For Veeam, the frontend is a standalone backup server plus optional backup proxies. The proxies (thanks to VMware hot-add) enable a limited level of scale-out for the frontend, but this approach leeches resources from production and increases complexity. The backend is one or more backup repositories. There is nothing special about the repository; it is a plain file system. Although Veeam claims to have deduplication built in, it is perhaps the most primitive in the industry and works only across virtual machines from the same backup job.

Rubrik is a scale-out solution where the frontend and backend are fused together from the user’s perspective. You buy Rubrik bricks, where each brick consists of four nodes. These are the compute and storage components that cater to both the frontend, streaming virtual machines from vSphere via NBD or SAN transport (kudos to Rubrik for ditching hot-add!), and the backend, a cluster file system that spans nodes and bricks. Rubrik claims to have global deduplication across its entire cluster file system namespace.
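To make the deduplication-scope difference concrete, here is a schematic Python sketch (my own illustration, not either vendor’s implementation): a job-scoped index stores an identical block once per job, while a global index stores it once for the entire namespace.

```python
import hashlib

def block_hash(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

# Job-scoped dedup: each backup job keeps its own index, so identical blocks
# arriving via different jobs are stored twice.
job_indexes = {}            # job name -> set of block hashes

def store_job_scoped(job, block):
    index = job_indexes.setdefault(job, set())
    h = block_hash(block)
    is_new = h not in index
    index.add(h)
    return is_new           # True means the block gets written to storage again

# Global dedup: one index spans the whole namespace, so a block is stored once
# no matter which job or VM it came from.
global_index = set()

def store_global(block):
    h = block_hash(block)
    is_new = h not in global_index
    global_index.add(h)
    return is_new

common = b"identical OS block shared by many VMs"
print(store_job_scoped("job-A", common), store_job_scoped("job-B", common))  # True True -> stored twice
print(store_global(common), store_global(common))                            # True False -> stored once
```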

Historically, the real innovation from Veeam was the commercial success of powering on virtual machines directly from backup storage. Veeam may list several other innovations under its belt (e.g. it may claim to have ‘invented’ agentless backups, but that was actually done by VMware in its APIs), yet exporting VMs directly from backup is something every other vendor followed afterwards, so kudos go to Veeam on that one. But this innovation may backfire and help Veeam customers transition to Rubrik seamlessly.

Veeam customers are easy targets for Rubrik for a few reasons.

  • One of the cornerstones of Veeam’s foundation is its dependency on VMware’s vStorage APIs; that is not a differentiator, because all VMware partners have access to those APIs. Unlike other backup vendors, Veeam didn’t focus on building application awareness and granular quiescence until late in the game.
  • Veeam is popular in smaller IT shops and in shadow projects within large IT environments. It is a handy backup tool, but it is not perceived as a critical piece in meeting regulatory and compliance needs. It has been marketed toward virtual machine administrators, so higher-level buying centers do not have much visibility into it. That adversely affects Veeam’s ‘stickiness’ in an account.
  • Switching from one backup application to another has historically been a major undertaking, but that is not the case if customers want to switch from Veeam to something else. In earlier days, IT shops needed to stand up both solutions until all the backup images from the old solution hit their expiration dates, or develop strategies to migrate old backups into the new system, a costly affair. When the source is Veeam, with 14 recovery points per VM by default, you could build workflows that spin up each VM backup in a sandbox and let the new solution back it up as if it were a production copy. (Rubrik may want to build a small migration tool for this; see the sketch after this list.)
  • Unlike Veeam, which started stitching in support for other hypervisors and physical systems afterwards, Rubrik has architected its platform to accommodate future needs. That design may appeal to customers as VMware shops look to diversify into other hypervisors and containers.
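The migration workflow mentioned in the third bullet could look roughly like the sketch below. Every object and method here (old_backup, new_backup, sandbox, instant_recover and so on) is a hypothetical placeholder; no such tool is known to exist today.

```python
# Hypothetical migration workflow: spin up each VM from the old backup store
# in an isolated sandbox, let the new platform back it up as if it were a
# production copy, then tear the sandbox copy down.
# All helper objects/methods are placeholders, not real product APIs.

def migrate_latest_recovery_points(vms, old_backup, new_backup, sandbox):
    for vm in vms:
        restore_point = old_backup.latest_recovery_point(vm)
        temp_vm = sandbox.instant_recover(restore_point)   # power on from backup storage
        try:
            new_backup.backup(temp_vm)                     # ingested as a fresh baseline
        finally:
            sandbox.tear_down(temp_vm)
```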

The fine print is that Rubrik is yet to be proven. If the actual product delivers on its promises, it may well have antiquated Veeam, and the latter may become a good case study for business schools on the risk of building a product that depends too much on someone else’s technology.

Thanks to #VFD5 TechFieldDay for sharing Rubrik’s story. You can watch it here: Rubrik Technology Deep Dive

Disclaimer: I work for Veritas/Symantec; the opinions here are my own.

Getting to know the Network Block Device transport in VMware vStorage APIs for Data Protection

When you back up a VMware vSphere virtual machine using the vStorage APIs for Data Protection (VADP), one of the common ways to transmit data from the VMware datastore to the backup server is the Network Block Device (NBD) transport. NBD is modeled on the Linux network block device: a module attached to the VMkernel makes the snapshot of the virtual machine visible to the backup server as if the snapshot were a block device on the network. While NBD is quite popular and easy to implement, it is also the least understood transport mechanism in VADP-based backups.

NBD is based on VMware’s Network File Copy (NFC) protocol. NFC uses a VMkernel port for its network traffic. As you already know, VMkernel ports may also be used by other services such as host management, vMotion, Fault Tolerance logging, vSphere Replication, NFS, iSCSI and so on. It is recommended to create dedicated VMkernel ports attached to dedicated network adapters if you are using a bandwidth-intensive service; for example, it is highly recommended to dedicate an adapter to Fault Tolerance logging.

Naturally, the first logical step toward higher throughput from NBD backups would be to dedicate a bigger pipe to the VADP NBD transport. Many vendors list this as a best practice, but that alone won’t give you performance and scale.

Let me explain this with an example. Assume that you have a backup server streaming six virtual machines from an ESXi host using NBD transport sessions. The host and the backup server are equipped with 10Gb adapters. In general, a single 10Gb pipe can deliver around 600 MB/sec, so you would expect each virtual machine to be backed up at around 100 MB/sec (600 MB/sec divided across 6 streams), right? In reality, however, each stream gets a much lower share of the bandwidth because the VMkernel automatically caps each session for stability. Let me show you the actual results from a benchmark we conducted, where we measured performance as we increased the number of streams.

NBD Transport and number of backup streams

As you can see, by the time the number of streams reaches 4 (in other words, four virtual machines being backed up simultaneously), each stream delivers just 55 MB/sec and the overall throughput is 220 MB/sec, nowhere near the available bandwidth of 600 MB/sec.
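To put numbers on that gap, here is the arithmetic from the benchmark as a tiny Python snippet (the 600 MB/sec and 55 MB/sec figures are the ones quoted in this post, not general constants):

```python
# Numbers from the benchmark discussed above (10Gb link, four concurrent
# NBD streams). Illustrative arithmetic only.

link_mb_per_sec = 600          # rough usable bandwidth of a 10Gb pipe
streams = 4                    # virtual machines backed up concurrently
observed_per_stream = 55       # MB/s per session observed at four streams

expected_per_stream = link_mb_per_sec / streams        # 150 MB/s if the pipe were shared evenly
observed_aggregate = streams * observed_per_stream     # 220 MB/s actually delivered

print(f"Expected per stream: {expected_per_stream:.0f} MB/s")
print(f"Observed per stream: {observed_per_stream} MB/s")
print(f"Observed aggregate:  {observed_aggregate} MB/s of {link_mb_per_sec} MB/s available")
```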

The reasoning behind this type of bandwidth throttling is straightforward: you don’t want the VMkernel to be strained serving copy operations when it has better things to do. The VMkernel’s primary function is to orchestrate VM processes. VMware engineering (VMware was also a partner in this benchmark; we submitted the full story as a paper for VMworld 2012) confirmed this behavior as normal.

This naturally makes NBD look like a second-class citizen in the backup transport world, doesn’t it? The good news is that there is a way to solve this problem: instead of backing up too many virtual machines from the same host, configure your backup policies/jobs to distribute the load over multiple hosts. Unfortunately, in environments with hundreds of hosts and thousands of virtual machines, that is difficult to do manually. Veritas NetBackup provides VMware Resource Limits as part of its intelligent policies for VMware backup, where you can limit the number of jobs at various VMware vSphere object levels, which is quite handy in this type of situation. For example, I ask customers to limit the number of jobs per ESXi host to 4 or fewer using such intelligent policies and resource limit settings. NetBackup can then scale out its throughput by tapping NBD connections from multiple hosts, keeping its available pipe fully utilized while limiting the impact of NBD backups on production ESXi hosts.
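In NetBackup this is simply a Resource Limit setting in the VMware intelligent policy; the sketch below is only a generic illustration of the underlying idea (hypothetical VM-to-host inventory and a per-host cap), not NetBackup code.

```python
from collections import defaultdict

# Hypothetical inventory: which ESXi host each VM runs on.
vm_to_host = {
    "vm01": "esx-01", "vm02": "esx-01", "vm03": "esx-01",
    "vm04": "esx-01", "vm05": "esx-01",
    "vm06": "esx-02", "vm07": "esx-02", "vm08": "esx-03",
}

MAX_JOBS_PER_HOST = 4   # analogous to a 'jobs per ESXi host' resource limit

def pick_jobs_to_start(vms, running=None):
    """Return the VMs that can start backing up now without pushing any
    ESXi host past the per-host NBD session limit."""
    running = defaultdict(int, running or {})   # host -> jobs already active
    to_start = []
    for vm in vms:
        host = vm_to_host[vm]
        if running[host] < MAX_JOBS_PER_HOST:
            running[host] += 1
            to_start.append(vm)
    return to_start

# vm05 waits because esx-01 already has four streams; the other hosts keep
# the backup server's pipe busy in the meantime.
print(pick_jobs_to_start(sorted(vm_to_host)))
```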

Thus Veritas NetBackup elevates NBD to first-class status for protecting large environments even when the backend storage isn’t on a Fibre Channel SAN. For example, NetBackup’s NBD has proven its scale on NetApp FlexPod, VCE Vblock, Nutanix and VMware EVO (VSAN). Customers can enjoy the simplicity of NBD and the scale-out performance of NetBackup on these converged platforms.

References:

Taking VMware vSphere Storage APIs for Data Protection to the Limit: Pushing the Backup Performance Envelope; Rasheed, Winter et al. VMworld 2012

Full presentation on Pushing the Backup Performance Envelope

VMware EVO: The KFC of SDDC

EVO is the KFC of SDDC

VMware EVO is bringing to software-defined data centers the same business model that Kentucky Fried Chicken brought to restaurants decades ago. VMware is hungry to grow and is expanding its business into new territories. Colonel Sanders’s revolutionary vision of selling his chicken recipe and brand through a franchise model is now coming to IT infrastructure as ready-to-eat value meals.

Most of the press reports and analyst blogs are focused on VMware’s arrival in the converged infrastructure market. Of course, vendors like Nutanix and SimpliVity will lose sleep now that the 800-pound gorilla has set its eyes on that market. However, VMware’s strategy goes much deeper than taking the converged infrastructure market from upstarts; it is a bold attempt to disrupt the business model of selling IT infrastructure stacks while keeping public cloud providers away from enterprise IT shops.

Bargaining power of the supplier: Have you noticed VMware’s commanding power in the EVO specifications? Partners like Dell and EMC are simply franchisees of VMware’s infrastructure recipe and brand. It is no secret that traditional servers and storage are on the brink of disruption, because buyers won’t pay a premium for brand names much longer. It is time for those vendors to let go of individuality and become the delivery model for a prescriptive architecture (the franchise model) from a stronger supplier in the value chain.

Software is now the king, no more OEM: In the old world, where hardware vendors owned brand power and distribution chains, software vendors had to strike OEM deals to get their solutions to market in those hardware vehicles. Now the power is shifting to software. The software vendor prescribes (a softened term that actually means ‘dictates’) how infrastructure stacks should be built.

Short-term strategy, milk the converged infrastructure market: This is the most obvious hint VMware has given; reporters, bloggers and analysts have picked up this message. As more and more CIOs look to reduce capital and operational costs, the demand for converged systems is growing rapidly. Even the primitive assembled-to-order solutions from VCE and NetApp-Cisco are milking the current demand for simplified IT infrastructure stacks. Nutanix leads the pack in the relatively newer and better hyper-convergence wave. VMware’s entry into this market validates that convergence is a key trend in modern IT.

Long-term strategy, own data center infrastructure end-to-end while competing with public clouds: Two of the three key pillars of VMware’s strategy are enabling software-defined data centers and delivering hybrid clouds. Although SDDC and hybrid cloud may look like two separate missions, the combination is what is needed to keep Amazon and other public cloud providers from taking workloads away from IT shops. The core of VMware’s business is selling infrastructure solutions for on-prem data centers. Although VMware positions itself as the enabler of service providers, it understands that the bargaining power of customers will continue to stay low only if organizations stick to on-prem solutions. This is where the SDDC strategy fits. By commoditizing infrastructure components (compute, storage and networking) and shifting the differentiation to infrastructure management and service delivery, VMware wants to become the commander in control of SDDCs (just as Intel processors dictated the direction of PCs over the last two decades). EVO happens to be the SDDC recipe it wants to franchise to partners so that customers can taste the same SDDC no matter who their preferred hardware vendor is. Thus EVO is the KFC of SDDC.

It is not there just as a Nutanix killer; VMware also wants to take share from Cisco (Cisco UCS is nearly #1 in the server market, and Cisco is #1 in networking infrastructure), EMC storage (let us keep the money in the family; the old man’s hardware identity is counting its days) and other traditional infrastructure players. At the same time, VMware wants to turn vCloud Air (the rebranded vCloud Hybrid Service) into the app store for EVO-based SDDCs to host data services in the cloud. It is a clever plan to keep selling to enterprises and hide them away from the likes of Amazon. Well played, VMware!

So what will be the competitive response from Amazon and other public cloud providers? Amazon has the resources to build a ready-to-eat private Fire Cloud for enterprises that acts as the gateway to AWS. Until now, Amazon has focused mainly on on-prem storage solutions that extend to AWS. We can certainly expect the king of public clouds to do something more. It is not a question of ‘if’; it is a question of ‘when’.

EMC’s Hardware Defined Control Center vs. VMware’s Software Defined Data Center

EMC trying to put the clock back from the software-defined storage movement

EMC’s storage division appears to be in Old Yeller mode. It knows that customers will eventually stop paying a premium for branded storage. The bullets to put branded storage out of its misery are coming from the software-defined storage movement led by its own stepchild, VMware. But the old man is still clever, pretending to hang out with the cool kids to stay relevant while trying to survive as long as there are CIOs willing to pay a premium for storage with a label.

Software-defined storage is all about building storage and data services on top of commodity hardware. No more vendor-locked storage platforms on proprietary hardware. This movement offers high performance at lower cost by bringing storage closer to compute. Capacity and performance are two independent vectors in software-defined storage.

TwinStrata follows that simplicity model and has helped customers extend the life of existing investments with true software solutions. Its data services layer offers storage tiering, where the last tier can be a public cloud. EMC wants the market to believe that its acquisition of TwinStrata is an attempt to embrace the software-defined storage movement, but the current execution plan is a little backward. EMC’s plan is a bolted-on integration of the TwinStrata IP on top of the legacy VMAX storage platform. In other words, EMC wants to keep the ‘software-defined’ IP close to its proprietary array. The goal, of course, is to prolong the life of VMAX in the software-defined world. While it defeats the rationale behind the software-defined storage movement, it may be a last-ditch attempt to pull the clock back a little.

Hopefully there is another project where EMC will seriously consider building a true software-defined storage solution from the acquired IP, without the deadweight of legacy platforms. Perhaps transform ViPR from vaporware into something that really rides the wave of the software-defined movement?

NetBackup Accelerator vs. Simpana DASH Full

I want to start this blog with a note.

I mean no disrespect to CommVault as a company or to the engineers innovating its products. Being an engineer by trade myself, I understand that innovations are triggered by market demands and there is always room for improvement in any product. The opinions in this blog are entirely my own.

As most of you reading this blog know, I also write for the official Symantec blogs. I recently got an opportunity to take readers of Symantec Connect on a deep dive into one of the major features in NetBackup 7.6 for VMware vSphere and vCloud environments. It is primarily targeted at users of NetBackup who know its nuts and bolts. A couple of employees from CommVault read the blog. It is natural in the competitive intelligence world to look for weak spots or things that can be selectively pointed out to show parity; it is part of their job and I respect it. However, it appeared that they wanted to claim parity between Simpana and NetBackup Accelerator for VMware based on two statements (tweets, to be precise!). When asked to elaborate, the discussion went down a rathole of statements made out of context and downright unprofessional remarks. Hence, here is my attempt to compare Simpana 10 with NetBackup 7.6 on the very topic discussed in the official blog.

Claims made to establish parity with NetBackup Accelerator for VMware

  1. (Not explicitly stated) Simpana supports CBT
  2. Simpana had ‘block detection’ for over a year
  3. Simpana does synthetics

The attempt here is to check all the boxes to claim parity, while at times missing the big picture and equating apples to oranges. Hence I am going to clarify this as much as possible, using Simpana language, for the benefit of those two employees.

Simpana supports CBT: Of course; every major vendor supports it. It is an innovation from VMware, and supporting a feature from the vStorage APIs is important for protecting VMware virtual machines.

What sets NetBackup 7.6 apart from Simpana 10 here is that Simpana’s implementation of CBT is limited to recovering an entire VM or individual files from the VM. If you have enterprise applications (e.g. Microsoft Exchange, Microsoft SQL Server etc.), you must stream data through an agent inside the guest to protect those applications and perform granular recovery. The value of CBT is to minimize the data processing and movement load on production VMs during backups. A virtual machine’s operating system binaries and related files are typically static, so CBT won’t add much value there; the real value comes from the daily changes that applications make to disk blocks. That means ZERO value in Simpana for protecting enterprise applications with its implementation of vSphere CBT.
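For reference, this is roughly how a VADP based backup consumes CBT through the vSphere API. Below is a minimal pyVmomi sketch, assuming you already hold a connected vim.VirtualMachine object with CBT enabled, a backup snapshot, the disk’s device key and its capacity; all names are placeholders.

```python
# Minimal pyVmomi sketch: enumerate the blocks changed since the last backup.
# Assumes 'vm' is a connected vim.VirtualMachine with CBT enabled,
# 'snapshot' is the snapshot just taken for this backup, 'device_key'
# identifies the virtual disk, 'capacity' is the disk size in bytes, and
# 'last_change_id' was saved at the end of the previous backup ('*' means
# "everything allocated", i.e. the first full backup).

def changed_areas(vm, snapshot, device_key, capacity, last_change_id="*"):
    offset = 0
    while offset < capacity:
        info = vm.QueryChangedDiskAreas(snapshot=snapshot,
                                        deviceKey=device_key,
                                        startOffset=offset,
                                        changeId=last_change_id)
        for area in info.changedArea:
            yield area.start, area.length      # byte ranges the backup must read
        offset = info.startOffset + info.length
```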

Simpana had block detection for over a year, Simpana does synthetics: The employee is trying to add a checkbox for Simpana next to NetBackup’s capability to use Symantec V-Ray to detect deleted blocks. Nice try!

First and foremost, the block optimization technique described in my blog has been present in NetBackup since 2007, with version 6.5.1, when Symantec announced support for VMware Virtual Infrastructure 3. Congratulations on claiming that Simpana gained this capability half a decade later! But wait… we are talking about apples and oranges here.

This technique has been available for both full and incremental backup schedules. It works no matter where backups are going: disk, deduplicated disk, tape or cloud. NetBackup’s block optimization happens close to the data source, so deleted blocks are detected at the backup host and never appear in the SAN or LAN traffic to the backup storage. That is optimization of processing power, interconnect bandwidth and storage!

The CommVault employee was in a hurry to equate this to something Simpana caught up with recently. This is what I believe he is referring to (I am asking him to tweet back if there is anything else), quoted from the Simpana 10 online documentation:

DASH Full is a read optimized Synthetic Full operation which does not require traditional full backups to be performed. Once the first full backup is completed, changed blocks are protected during incremental or differential backups. A DASH Full will run in place of traditional full or synthetic full. This operation does not require movement of data. It will simply update indexing information and the deduplication database signifying that a full backup has been completed. This will significantly reduce the time it takes to perform full backups.

There are so many things I want to say about this, but I am trying to be concise here with bullet points.

  • What Simpana has here is the equivalent of NetBackup OpenStorage Optimized Synthetics, which was introduced in NetBackup 6.5.4 (in 2009). While NetBackup still supports this capability, Symantec has taken it to the next level with NetBackup Accelerator. For the record, NetBackup Accelerator is also backed by Optimized Synthetics, so the so-called ‘block detection’ has been in NetBackup since 2009.
  • The optimization I was talking about is the capability to detect deleted blocks in the CBT data stream, while CommVault is touting data movement within the backup storage!
  • A DASH Full requires incremental backups plus separate schedules for synthetic backups. NetBackup Accelerator eliminates this operational inefficiency by synthesizing the full image inline, using only the resources needed for an incremental backup (see the sketch after this list).
  • If you are curious how NetBackup Accelerator in general differs from Optimized Synthetics (or DASH Full), this blog post will help.
  • Last but not least, did I mention that NetBackup Accelerator for VMware works with enterprise applications as well? Thus both CBT and deleted-block detection (both relevant to the applications that do the real work inside the VM) add real value in NetBackup Accelerator.
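To make the operational difference concrete, here is a schematic Python sketch of the two approaches. It is my own simplification, not either product’s actual code.

```python
# Schematic contrast (not either vendor's real code): how a full recovery
# point gets produced after the first full backup.

class BackupStore:
    def __init__(self):
        self.images = []                    # list of (kind, payload)

    def write(self, kind, payload):
        self.images.append((kind, payload))

# Optimized-synthetic / "DASH Full" style: the incremental job and the
# synthetic-full job are separate operations on separate schedules.
def optimized_synthetic_cycle(store, changed_blocks):
    store.write("incremental", changed_blocks)                     # job 1: move changed data
    store.write("synthetic_full", "stitched from prior images")    # job 2: separate schedule

# Accelerator style: one incremental-sized job sends the changed blocks and
# the full image is synthesized inline as part of that same job.
def accelerator_cycle(store, changed_blocks):
    store.write("full", f"prior full + {changed_blocks} (synthesized inline)")

store = BackupStore()
optimized_synthetic_cycle(store, "Mon changes")
accelerator_cycle(store, "Tue changes")
print(store.images)
```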

High Availability for Business Critical Applications on VMware vSphere

In the last blog we talked about VMware vSphere HA and FT. As we discussed, vSphere HA is quite impressive at protecting against infrastructure failures at a reasonable cost. vSphere FT, on the other hand, has very limited use cases. However, neither solution is sufficient to meet the high availability requirements of business critical applications with demanding service level agreements.

  1. Neither vSphere HA nor FT has application awareness. These technologies monitor only the container (the virtual machine). If an application, or a resource it depends on, goes down, these technologies cannot detect and remediate the issue.
  2. Neither technology can provide availability during planned downtime. If the application or operating system needs to be patched, the application will not be available to users.
  3. Remediation in vSphere HA requires restarting the guest operating system, which can be time consuming. That poor RTO may not be acceptable for enterprise applications.

This is where Symantec comes to the rescue of VMware vSphere administrators. Symantec has two products to fill these gaps so that organizations can confidently virtualize business critical applications. Thus you get to enjoy the agility and cost efficiency of VMware vSphere without compromising enterprise availability.

Symantec ApplicationHA: This solution solves problem 1 above. Symantec ApplicationHA monitors the designated application and its resources (e.g. disk, volume, file system, network and so on). If a failure is detected, Symantec ApplicationHA can restart the application and its resources in a pre-defined order. The application monitoring is quite efficient and foolproof; for example, if you are monitoring Microsoft SQL Server, you can configure the ApplicationHA agent to log in to and out of the database as if it were a regular database user. If the application restart fails (it can attempt restarts a configured number of times), Symantec ApplicationHA sends a trigger to vSphere HA (if available) to restart the VM on the same or a different host.
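Conceptually, the monitor-and-escalate workflow looks like the loop below. This is a generic illustration with placeholder checks and actions, not the actual ApplicationHA agent logic.

```python
import time

MAX_APP_RESTARTS = 3          # restart attempts before escalating (configurable)

def app_is_healthy() -> bool:
    """Placeholder health check. A SQL Server agent, for example, would
    attempt a real login/logout against the database here."""
    return True

def restart_application():
    """Placeholder: bring up resources in a pre-defined order
    (disk, volume, file system, network), then the application."""

def escalate_to_vsphere_ha():
    """Placeholder: trigger vSphere HA (if available) to restart the VM."""

def monitor_loop(poll_interval_sec=30):
    restarts = 0
    while True:
        if app_is_healthy():
            restarts = 0                       # healthy again, reset the counter
        elif restarts < MAX_APP_RESTARTS:
            restarts += 1
            restart_application()              # application-level remediation first
        else:
            escalate_to_vsphere_ha()           # hand the problem to vSphere HA
            return
        time.sleep(poll_interval_sec)
```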

Symantec ApplicationHA supports more than 21 business critical applications. Moreover, it provides a framework for creating custom agents for homegrown applications as well. Symantec has been in the HA and DR business for a long time, with a superb reputation for its HA agents.

Symantec Cluster Server, powered by Veritas: This product solves all three problems stated earlier and can work with or without vSphere HA. The application monitoring and remediation workflow is similar to that of Symantec ApplicationHA; in fact, the agents for both products are the same. What is different about Symantec Cluster Server is its ability to migrate just the application and its resources to a standby VM as part of remediation. There is no need to wait for a VM to restart, which significantly reduces downtime and improves RTO.

The ability to migrate the application to another VM also mitigates the downtime normally incurred for planned activities like applying maintenance updates: you can apply patches on the standby node and then migrate the application. Considering the long patching process for modern operating systems, you definitely don’t want to deploy business critical applications on vSphere without the availability that Symantec Cluster Server provides.

Symantec Cluster Server for VMware is purpose-built for virtual environments. Compared to traditional clusters like Windows Failover Clustering (formerly known as Microsoft Cluster Server), Symantec Cluster Server gives you high availability without compromising the perks of virtualization. For example, Windows Failover Clustering requires you to create VMs with physical RDM (raw device mapping) disks; if you use RDM, flagship vSphere capabilities like vMotion, DRS, vStorage API based backups and so on are lost. Symantec Cluster Server uses hot-plug APIs to work with VMFS and NFS based datastores.

Note: VMware has released a product called vSphere App HA with the vSphere 5.5 release. Lorenzo has written a great blog where Symantec ApplicationHA and vSphere App HA are compared in detail. Check it out here.

Run baby run! High Availability for business critical applications in virtualized environments

Most of you are on a journey to a software-defined data center. Some of you used virtualization to consolidate infrastructure and reduce capital expenses. Some of you may be virtualizing (or starting to think about virtualizing) business applications to take advantage of the agility and flexibility that virtualization brings. Naturally, if you have reached that part of the journey, one thing you may be worrying about a lot is system and application availability.

The good news is that VMware is no stranger to HA. VMware vSphere includes a feature named vSphere HA (formerly VMware HA) that protects VMs against hardware failures. Two or more ESXi hosts can form an HA cluster. vSphere HA provides the following value.

  1. Decent protection against hardware failures (ESXi host failures). When a host fails, the virtual machines on that host can be restarted on another ESXi host sharing the same datastore.
  2. Limited protection against guest OS failures. The VMware Tools running in the guest operating system send heartbeats to vSphere HA. If the heartbeat stops (e.g. the guest operating system is hung), vSphere HA can restart the VM on the same or a different ESXi host.

Whether vSphere HA is enough depends on the service level agreement (SLA) between the IT department and the business unit. For most development/test workloads, vSphere HA is good enough, as services can be resumed in less than 10 minutes. The main bottleneck is the time it takes to reboot the guest operating system.

Another solution is vSphere Fault Tolerance (vSphere FT). It creates and maintains an additional copy of the protected VM and provides continuous availability by ensuring that the states of the primary and secondary VMs are identical at any point in the instruction execution of the virtual machine. However, vSphere FT is not for everyone. Although its protection against hardware failures is impeccable, its protection against OS and application misbehavior is extremely limited. The cost of operating two virtual machines (and the related storage), along with other limitations such as lack of support for the vStorage APIs, makes vSphere FT suitable for only very limited use cases.

Both vSphere HA and vSphere FT lack something quite important when it comes to protecting business critical workloads: application awareness. Let us say you are running an instance of Oracle with a few databases inside a virtual machine. What happens if the Oracle instance fails? What happens if the instance loses access to the underlying storage? Neither vSphere HA nor FT detects it, and hence downtime is incurred. Downtime = lost revenue.

There is another weakness in the vSphere HA and vSphere FT solutions: they do not protect applications against planned downtime. When you need to patch, upgrade or perform any other maintenance task on components within the guest (operating system binaries, application binaries etc.), you must shut down the application, which can be costly for tier 1 business critical applications.

Scenario | vSphere HA | vSphere FT
Detect host failure | VMs are restarted on another host (recovery time = restart time) | The VM executing instructions in lockstep on the surviving host takes over (recovery time is near zero)
Detect VM failure (VM not sending heartbeats, OS hung) | VM is restarted | Likely no protection, as both VMs are in lockstep
Detect application failure | No protection | No protection
Compatibility with vMotion | Yes | Yes
Compatibility with vStorage APIs for Data Protection (VADP) | Yes | No (in-guest backup agent required)
Avoiding planned downtime (patching, upgrades etc.) | Planned downtime cannot be avoided | Planned downtime cannot be avoided

Symantec has solutions to tackle these scenarios. One was jointly developed with VMware; the other comes from a time-tested solution that was ported to support the vSphere platform. Let us look at each of them in another blog.

What’s up with VADP backups and VDDK on vSphere 5.1?

VMware vSphere 5.1 has been in the market for more than a few months now, and interest in the new capabilities is high. Because of this, many backup vendors rushed to announce support for vSphere 5.1 in their VADP (vStorage APIs for Data Protection) integrations. Everything looked clean, shiny and new.

On November 21, Symantec made an interesting announcement [1]. In a nutshell, the statement was that support for vSphere 5.1 would be delayed in its NetBackup and Backup Exec products, because issues were discovered while testing the VADP 5.1 API for integration. The API in its current form may introduce risk in performing consistent backups and ensuring reliable restores. All vendors receive the same API; not all vendors perform the same level of testing.

In order to explain the intricacies, we first need a quick look at how a backup product integrates with VMware vSphere. With each release of vSphere, VMware publishes a set of APIs known as the VMware APIs for Data Protection, or VADP. One of the key components of VADP is the Virtual Disk Development Kit, aka VDDK. This is the component through which third-party code receives authenticated access to vSphere datastores and virtual machine disk files. VMware makes this component available to its technology partners, and partners (backup product vendors in this case) ship it along with the product that calls the vStorage APIs.

With each version of vSphere, an equivalent version of VDDK is released. The VDDK is generally backward compatible with one or more earlier versions of vSphere. For example, VDDK 5.1 supports [2] vSphere 5.1, 5.0 and 4.1, while VDDK 5.0 supports [3] vSphere 5.0, 4.1, 4.0 and VI 3.5. Since the updated VDDK is required to understand the modified data structures in a new version of vSphere, lower versions of VDDK are in general not supported for accessing a higher version of vSphere. For example, VMware historically and currently (as of today) does not support the use of VDDK 5.0 to access datastores in vSphere 5.1. VMware documents the supported versions of vSphere for each of its VDDK versions in the release notes.
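The compatibility rule can be captured in a small lookup table. The versions below are taken from the VDDK release notes cited in the references; always check the current release notes rather than relying on a snippet like this.

```python
# Supported vSphere versions per VDDK release, per the release notes cited
# in the references below.
VDDK_SUPPORT = {
    "5.1": {"5.1", "5.0", "4.1"},
    "5.0": {"5.0", "4.1", "4.0", "3.5"},
}

def vddk_supports(vddk_version: str, vsphere_version: str) -> bool:
    """A lower VDDK accessing a higher vSphere (e.g. VDDK 5.0 against
    vSphere 5.1) is not a supported combination."""
    return vsphere_version in VDDK_SUPPORT.get(vddk_version, set())

print(vddk_supports("5.0", "5.1"))   # False: the unsupported combination in question
print(vddk_supports("5.1", "5.0"))   # True: backward compatible
```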

The key point to remember is the statement above: VMware does not support violating combinations because of the risks and uncertainties involved. Partners are expected to ship the correct version of VDDK when they announce support for a given vSphere release.

What Symantec announced, and VMware confirmed [4], is that VDDK 5.1 has issues, and hence support for vSphere 5.1 in Symantec’s products will be delayed. This makes sense, since VDDK 5.1 is the only version currently allowed to access vSphere 5.1. The face-saving reactions from other vendors to this announcement revealed some of the dirty games and ugly truths in the area of VADP/VDDK integration.


  1. Vendors were claiming support for vSphere 5.1 but still shipping VDDK 5.0 with their products. This is currently not supported by VMware because of the uncertainties. That may change, but at the time these vendors claimed support, they were taking risks that are typically not acceptable in the data protection business.
  2. Vendors were mucking with API calls and silently killing hung processes. That may work for an isolated or random hang, but it will not work when there are repeatable hang situations like those observed in VDDK 5.1. Plus, there are performance and reliability concerns with abruptly ending sessions with vSphere.
  3. Most vendors weren’t testing all the edge cases and never noticed the problems in VDDK 5.1, thus prematurely announcing support for 5.1.


If your backup vendor currently supports vSphere 5.1, be sure to ask what their situation is.

Sources and references:

1. Quality wins every time: vSphere 5.1 support update, Symantec official blog.

2. VDDK 5.1 Release Notes, VMware Support resources

3. VDDK 5.0 Release Notes, VMware Support resources

4. Third-party backup software using VDDK 5.1 may encounter backup/restore failures, VMware Support KB

Dear EMC Avamar, please stop leeching from enterprise vSphere environments

VMware introduced the vStorage APIs for Data Protection (VADP) so that backup products can perform centralized, efficient, off-host, LAN-free backups of vSphere virtual machines.

In the physical world, most systems have plenty of resources, often underutilized, so running a backup agent on such a system wasn’t a primary concern for most workloads. The era of virtualization changed things drastically. Server consolidation via virtualization allowed organizations to get the most out of their hardware investments. That means backup agents no longer have the luxury of simply taking resources away from production workloads, because the underlying ESXi infrastructure is optimized and right-sized to keep line-of-business applications running smoothly.

VMware solved the backup agent problem in the early days of ESX/ESXi hosts. The SAN transport method for virtual machine backup was born in the old VCB (VMware Consolidated Backup) days and was further enhanced in VADP (vStorage APIs for Data Protection). The idea is simple: present the snapshots of virtual machines to a workhorse backup host and let that system do the heavy lifting of processing and moving data to backup storage. The CPU, memory and I/O resources on the ESX/ESXi hosts are not used during backups, so production virtual machines are not starved for hypervisor resources.

For non-SAN environments such as NFS based datastores, the same dedicated host can use the Network Block Device (NBD) transport to stream data through the management network. Although it is not as efficient as SAN transport, it still offloads most of the backup processing to the dedicated physical host.

Dedicating one or more workhorse backup systems was not practical for small business environments and remote offices. To accommodate that business need, VMware allowed virtual machines to act as backup proxy hosts for smaller deployments. This is how the hotadd transport was introduced.

Thus the backup strategy is to use a dedicated physical workhorse backup system to offload all or part of the backup processing using the SAN or NBD transports. For really small environments, a virtual machine with NBD or hotadd transport will suffice.
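The strategy described above boils down to a simple decision rule. The sketch below is only my summary of that guidance (the inputs are hypothetical environment attributes), not any vendor’s configuration logic.

```python
def pick_vadp_transport(dedicated_physical_backup_host: bool,
                        datastores_on_fc_san: bool) -> str:
    """Summarizes the guidance in this post:
    - dedicated physical backup host + SAN datastores -> SAN transport
    - dedicated physical backup host + NFS/other datastores -> NBD
    - no dedicated host (small shop / remote office, VM proxy) -> NBD or hotadd
    """
    if dedicated_physical_backup_host:
        return "san" if datastores_on_fc_san else "nbd"
    return "nbd or hotadd"

print(pick_vadp_transport(True, True))    # san
print(pick_vadp_transport(True, False))   # nbd
print(pick_vadp_transport(False, False))  # nbd or hotadd
```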

Somehow EMC missed this memo. Ironically, EMC had long been a proponent of running the Avamar agent inside the guest instead of adopting VMware’s VADP. The argument was that source-side deduplication in the Avamar agent minimizes the amount of data to be moved across the wire. While that is indeed true, EMC conveniently forgot to mention that CPU-intensive deduplication inside the backup agent leeches ESXi resources away from production workloads!

Then EMC conceded and announced VADP support. But the saga continues: what EMC provided is hotadd support for VADP. That means you allocate multiple proxy virtual machines even in enterprise vSphere environments. Some of the best practice documents for Avamar suggest deploying a backup proxy host for every 20 virtual machines. A typical enterprise vSphere environment has 1,000 to 3,000 virtual machines; that translates to 50 to 150 proxy hosts! These systems are the leech worms of the vSphere environment, draining resources that belong to production applications.

The giant tower of energy-consuming nodes in the Avamar grid is not even lifting a finger to process backups! It is merely a storage system. The real workhorses are the ESXi hosts, giving up CPU, memory and I/O resources to the Avamar proxy hosts to generate and deduplicate the backup stream.

The story does not change even if you replace the Avamar Datastore with a Data Domain device. In that case, the DD Boost agent running on the Avamar proxy hosts drains resources from ESXi to reduce data at the source and send deduplicated data to the Data Domain system.

EMC BRS should seriously look at how Avamar proxy hosts, with or without DD Boost, are leeching resources from precious production workloads. The method used by Avamar is recommended only for SMB and remote office environments. Take the hint from VMware engineering as to why Avamar technology was borrowed to provide a solution for SMB customers in the VMware Data Protection (VDP) product. You can’t chop down a tree with a penknife!

The best example of effectively using VADP for enterprise vSphere is the NetBackup 5220. EMC BRS could learn a lesson or two from how Symantec integrates with VMware. This appliance is a complete backup system with intelligent deduplication and VADP support built right in for VMware backups. The appliance does the heavy lifting so that production workloads are unaffected by backups.

How about recovery? For thick provisioned disks, SAN transport is indeed the fastest; for thin provisioned disks, NBD performs much better. The good news with the Symantec NetBackup 5220 is that the user can control the transport method for restores as well: you might have done the backup using SAN transport, yet restore thin provisioned virtual machines using NBD. For Avamar, hotadd is the only approach; NBD on a virtual proxy isn’t useful, so that option is moot when the product offers only a virtual machine proxy for VADP.

The question is…

Dear EMC Avamar, when will you offer enterprise-grade VADP based backup for your customers? They deserve enterprise-grade protection for the investment they have made in large Avamar Datastores and Data Domain devices.