NetBackup master server and VMware vCenter

3. The command and control center

vCenter Server is the center of an enterprise vSphere environment. Although the ESXi hosts and virtual machines continue to function even when vCenter Server is down, enterprise data centers and cloud providers cannot afford such downtime. Without vCenter Server, crucial operations like vMotion, VMware HA, VMware FT, and DRS cease to function. A number of third-party applications count on vCenter Server plug-ins, and a number of monitoring and notification functions are governed by vCenter. Hence larger enterprises and cloud providers deploy vCenter on highly redundant systems. Some use high availability clustering solutions like Microsoft Cluster Server or VERITAS Cluster Server. Some deploy vCenter on a virtual machine protected by VMware HA that is managed by another vCenter Server.

The NetBackup master server plays a similar role. It is the center of a NetBackup domain; if this system goes down, you cannot do backups or restores. Unlike vCenter Server, which runs on Windows (and now on Linux), you can deploy the master server on a variety of operating systems: Windows, enterprise flavours of Linux, AIX, HP-UX, and Solaris. NetBackup includes cluster agents for Microsoft Cluster Server, VERITAS Cluster Server, IBM HACMP, HP-UX Service Guard, and Sun/Oracle Cluster for free. If you have any of these HA solutions, NetBackup lets you install the master server with an easy-to-use cluster installation wizard.

An enterprise vCenter Server uses a database management system, usually Microsoft SQL Server, for storing its objects. NetBackup ships with Sybase ASA embedded in the product. This is a highly scalable application database, so there is no need to provide a separate database management system.

In addition to the Sybase ASA database, NetBackup also stores backup image metadata (the index) and other configuration in binary and ASCII formats. The entire collection of Sybase ASA databases, binary image indexes, and ASCII-based databases is referred to as the NetBackup catalog. NetBackup provides a specific kind of backup policy, the catalog backup policy, to copy the entire catalog to secondary storage devices (disk or tape) easily. Thus even if you lose your master server, you can perform a catalog recovery to rebuild it.
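If you like seeing things in commands, here is a minimal sketch of kicking off a manual catalog backup from a UNIX master server. The policy name "CatalogBackup" is made up for illustration, and exact command syntax varies by NetBackup version, so treat this as a sketch rather than a runbook:

```python
import subprocess

NB_BIN = "/usr/openv/netbackup/bin"  # default install location on UNIX masters

# bpbackup -i starts an immediate manual backup of the named policy;
# for a catalog backup policy, this backs up the entire catalog.
subprocess.run(
    [f"{NB_BIN}/bpbackup", "-i", "-p", "CatalogBackup"],
    check=True,
)

# After a disaster, catalog recovery on the rebuilt master server is
# typically driven by the bprecover command.
```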

In VMware, you might have dealt with vCenter Server Heartbeat. This feature gives you the capability to replicate the vCenter configuration to a remote site so that you can start the replicated vCenter Server at that site in case of a primary site loss. NetBackup goes a bit further. Unlike vCenter Server Heartbeat, which has an Active-Passive architecture, NetBackup provides Auto Image Replication (A.I.R.). When you turn on A.I.R. for your backups, NetBackup embeds the catalog information for the backup in the backup image itself. The images are replicated using the storage device's native replication engine. At the remote site you can have a fully functional master server (one that is serving media servers and clients locally). The device in the remote master server's domain that receives A.I.R. images can automatically notify the remote master, which then imports the image catalog information from storage. Unlike traditional import processes, where the entire image must be scanned to recreate the catalog remotely, this optimized import finishes in a matter of seconds (even if the backup image is several terabytes) because the catalog information is embedded within the image for quick retrieval.

The result is Active-Active NetBackup domains at both sites. They can replicate in both directions and act as the DR domain for each other. You can have many NetBackup domains replicating to a central site (fan-in), one domain replicating to multiple domains (fan-out), or a combination of both. This is why NetBackup is the data protection platform that cloud pilots need to master: it is evolving to serve clouds, which typically span multiple sites.
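To see why the optimized import is so fast, consider this conceptual sketch (plain Python, not NetBackup code): the catalog metadata travels with the replicated image, so the remote master reads a small header instead of scanning the whole payload.

```python
from dataclasses import dataclass

@dataclass
class ReplicatedImage:
    catalog_metadata: dict   # embedded by the source domain (the A.I.R. idea)
    payload_blocks: list     # the backup data itself, possibly terabytes

def scan_block(block: dict) -> dict:
    # Stand-in for expensive content scanning of a single block.
    return {name: block["id"] for name in block.get("files", [])}

def traditional_import(image: ReplicatedImage) -> dict:
    # Legacy import: walk every block to rebuild the catalog entry.
    catalog = {}
    for block in image.payload_blocks:   # cost grows with image size
        catalog.update(scan_block(block))
    return catalog

def optimized_import(image: ReplicatedImage) -> dict:
    # A.I.R.-style import: the catalog travels with the image, so
    # import cost is independent of how large the payload is.
    return image.catalog_metadata
```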

vCenter integrates with Active Directory to provide role-based access control. Similarly, NetBackup provides NetBackup Access Control, which can be integrated with Active Directory, LDAP, NIS, and NIS+. NetBackup also features audit trails so that you can track users' activities.

One thing that really makes NetBackup stand out from point solutions like vRanger and Veeam is the ability for virtual machine owners (the application administrators) to self-serve their recovery needs. For example, the Exchange administrator can use the NetBackup GUI on the client, authenticate, and browse Exchange objects in backups. NetBackup presents the objects directly from its catalog or from a live-browse interface, depending on the type of object being requested. The user simply selects the object needed and initiates the restore; NetBackup does the rest. There are no complex ticket systems where the application owner makes a request to the backup administrator. There is no need to mount an entire VM on your precious production ESX resources just to retrieve a 1 KB object, and no need to learn how to manipulate objects (for example, manually running application tools to copy objects from a temporary VM) and face the risks associated with user errors. All the user interfaces connect directly to the master server, which figures out what to restore and starts the job on a media server.
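For the command-line inclined, here is a rough sketch of what a server-directed file restore looks like with the bprestore CLI. The client name and file path are hypothetical, flags vary by NetBackup version, and granular application restores (such as Exchange objects) normally go through the Backup, Archive, and Restore GUI rather than this command:

```python
import subprocess

NB_BIN = "/usr/openv/netbackup/bin"

# Restore one file from client mailvm01's backups back to the same client.
subprocess.run(
    [
        f"{NB_BIN}/bprestore",
        "-C", "mailvm01",            # client whose backup images are browsed
        "-D", "mailvm01",            # destination client for the restore
        "-L", "/tmp/restore.log",    # progress log to watch
        "/var/mail/important.msg",   # the object to bring back
    ],
    check=True,
)
```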

So NetBackup is an enterprise platform that turns a traditional VM administrator into a cloud pilot of the future. It is nice that NetBackup supports a wide range of hardware and operating systems, but is there a way to deploy a NetBackup domain without building a master server on your own? The answer is indeed yes! The NetBackup 5200 series appliances are available for exactly this purpose. These appliances are built on Symantec hardware and can be deployed in a matter of minutes. Everything you need to create a NetBackup master server and/or media server is included.

Next: Coming Soon!

Back to NetBackup 101 for VMware Professionals main page

Deduplication for dollar zero?

A data protection expert asked me a question after reading my blog post Deduplication Dilemma: Veeam or Data Domain?

I am paraphrasing his question, as our conversation was limited to 140 characters at a time on Twitter.

“Have you seen this best-practice blog on Veeam with ExaGrid? Here is the blog. It says not to do reverse incremental backups. The test Mr. Attila ran was incomplete. Veeam's deduplication on the first pass is poor, but after that it is worth it, right?”

These are all great questions. I thought I would dissect each aspect and share it here. Before I do that, I want to make it clear that deduplication devices are fantastic for use in backups. They work great with backup applications that truly offer the ability to restore individual objects. If the backup application ‘knows’ how to retrieve specific objects from backup storage, target deduplication adds a lot of value. That is why NetBackup, Backup Exec, TSM, NetWorker, and the like play well with target deduplication appliances. Veeam, on the other hand, simply mounts the VMDK file from the backup store and asks the application administrator to fish for the item he/she is looking for. This is where Veeam falls apart if you try to deploy it in medium to large environments. Although target deduplication appliances are disk based, they are optimized for sequential access, as backup jobs mostly follow a sequential I/O pattern. When you perform random I/O on these devices (as happens when a VM is run directly from them), there is a limit to how well they can perform.

Exagrid: a great company helping out customers

ExaGrid has an advantage here. It has the flexibility to keep the most recent backup in hydrated form (ExaGrid uses post-process deduplication), which works well with Veeam if you employ reverse incremental backups. In reverse incremental backups, the most recent backup is always a full backup. When the image is served in hydrated form, you eliminate the performance issues inherent in mounting the image on an ESX host. This is good from the recovery performance perspective. However, ExaGrid recommends not turning on the reverse incremental method because it burdens the appliance during backups. This is another dilemma; you have to pick backup performance or recovery performance (RTO), not both.
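A toy model (deliberately not Veeam's actual implementation) makes the trade-off visible: the newest restore point is a hydrated full image, while every older point must be synthesized by replaying reverse deltas, which costs extra back-end I/O:

```python
def restore_point(latest_full, reverse_deltas, steps_back):
    """Rebuild the image as it looked `steps_back` backups ago."""
    image = dict(latest_full)            # newest backup: usable as-is
    for delta in reverse_deltas[:steps_back]:
        image.update(delta)              # each step back means more reads
    return image

latest = {"blockA": "v3", "blockB": "v3"}       # hydrated full image
deltas = [{"blockA": "v2"}, {"blockB": "v1"}]   # newest-first reverse deltas

print(restore_point(latest, deltas, 0))  # instant: the hydrated full image
print(restore_point(latest, deltas, 2))  # older point: replay two deltas
```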

Let me reiterate: the problem is not with ExaGrid in this case. They are sincerely trying to help customers who happened to choose Veeam. ExaGrid is doing the right thing; you want to find ways to help customers achieve ROI no matter what backup solution they ended up choosing. I take my hat off to ExaGrid.

Now let us take a closer look at other recommendations from ExaGrid to alleviate the pain points with Veeam.

Turn off compression in Veeam and optimize for local target: Note that ExaGrid suggests turning off compression and choosing the "Local target" optimization option. These settings have the effect of eliminating most of what Veeam's deduplication offers. By choosing those options, you let the real deduplication engine (the ExaGrid appliance) do the work.

Weren't Mr. Attila's tests incomplete?

Mr. Attila stopped his tests after the initial backup, and the advantage of deduplication is visible only on subsequent backups, so his tests indeed weren't complete. However, as I stated in that blog, his test simply triggered my own research; I wasn't basing my opinions on Mr. Attila's tests alone. I should have mentioned this in the earlier blog, but it was already getting too long.

As I mentioned in the earlier blog, Veeam's deduplication capabilities are limited. Quoting ExaGrid this time: “Once the ExaGrid has all the data, it can look at the entire job at a more granular level and compress and dedupe within jobs and also across jobs! Veeam can’t do this because it has data constantly streaming into it from the SAN or ESX host, so it’s much harder to get a “big picture” of all the data.”

If Veeam's deduplication is the only thing you have, the problem is not limited to the initial backup. Here are a few other reasons why a target deduplication appliance is important when using Veeam.

  1. The deduplication is limited to a job. Veeam's manual recommends putting VMs created from the same template into a single job to achieve a good dedupe rate. It is true that VMs created from the same template have a lot of redundant OS files and whitespace, so the dedupe rate will be good at the beginning. But these are just the skins or shells of your enterprise production data. The real meat is the actual data, which is far less likely to be the same across multiple VMs. We are better off giving that task to the real deduplication engines!
  2. Let us say you have a job with 20 production VMs. You are going to install something new on one of the VMs, so you want to take a one-time backup before making any changes. Veeam requires you to create a new job to do this. That is not only inconvenient; you also lose the advantage of incremental backup and have to stream the entire VM again. Can we afford this in a production environment?
  3. Veeam incremental backups are heavily dependent on vCenter Server. If you move a VM from one vCenter to another, or if you have to rebuild your vCenter (Veeam cannot protect an enterprise-grade vCenter running on a physical system, but let us not go there for now), you need to seed full backups for all your VMs again. For example, if you want to migrate from a traditional vCenter Server running 4.x to a vCSA 5.0, expect to reseed all the backups.

My point is that, with these limitations, Veeam deduplication is not something you can count on to protect a medium to large environment. It carries a price of $0 for a reason.

NetBackup and Backup Exec let you take advantage of target deduplication appliances to their fullest potential. Because these platforms track which image holds the objects the application administrator is looking for, they can retrieve just those objects from backup storage. The application administrator can self-serve; no need for a 20th-century ticket system! The journey to the cloud starts with empowering users to self-serve their needs from the cloud.

Deduplication Dilemma: Veeam or Data Domain?

Recently I came across a blog post from Szamos Attila. He ran a deduplication contest between Data Domain and Veeam. His was a very small test environment: just 12 virtual machines with 133GB of data. His observations were significant, and I thought I would share them here.

Veeam vs. Data Domain deduplication contest run by Szamos Attila

You can read more about Mr. Attila's tests on his blog here.

What does this tell you right off the bat? Well, Veeam's deduplication is slow. Not a big deal, since they do not charge for deduplication separately, right?

Not exactly; there is much more to this story if you take a look at the big picture.

First of all, note that this is a very small data set (just 133GB; even my laptop has more data!). Veeam's deduplication is not really a true deduplication engine that fingerprints data segments and stores only one copy. It is basically a data reduction technique that works only within a predefined set of backup files. Veeam refers to this set of backup files as a backup repository. You can run only one backup job to a given repository at any given time, so if you want to back up two virtual machines concurrently, you need to send them to two different backup repositories. If you do that, your backup data is not deduplicated across those two jobs. Thus data reduction from Veeam's deduplication and concurrent processing of jobs work against each other. This is a major drawback, as VMs generally contain a lot of redundant data. In fact, Veeam recommends running deduplication mainly within a backup set where all the VMs come from the same template.
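A toy illustration of why this matters (generic fingerprint deduplication, not Veeam's algorithm): when each repository keeps its own private index, identical segments landing in two repositories are stored twice.

```python
import hashlib

class Repository:
    """One backup repository with its own private fingerprint index."""
    def __init__(self):
        self.store = {}                           # fingerprint -> segment

    def write(self, segment: bytes) -> None:
        fp = hashlib.sha256(segment).hexdigest()
        self.store.setdefault(fp, segment)        # dedupe within this repo only

os_blocks = b"identical guest OS blocks from the same template"

job1, job2 = Repository(), Repository()  # two concurrent jobs, two repos
job1.write(os_blocks)
job2.write(os_blocks)

# Two physical copies of identical data; a global engine with one shared
# index (as in a target deduplication appliance) would store just one.
print(len(job1.store) + len(job2.store))  # -> 2
```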

Secondly, note that even with a single backup repository, this tiny data set (of just 133GB) took twice as long to deduplicate as it did with Data Domain. Now think about a small business environment with a few terabytes of data, and imagine the time it would take to protect it. When it comes to an enterprise data center (hundreds of terabytes), you must depend on a target-based deduplication solution like Data Domain to get the job done.

So, can I simply let Veeam do the data movement and count on Data Domain to do the deduplication? That is one way to solve this problem, but you run into a multitude of other issues with that approach because of the way Veeam does restores.

Veeam does not have a good way to let application administrators in the guest operating system (e.g., the Exchange administrator on a VM running Microsoft Exchange) self-serve their restore needs. First the application administrator submits a ticket for the restore. Then the backup administrator mounts the VMDK files from the backup using a temporary VM that starts up on a production ESX host. Even to restore a small object, you have to allocate resources for the entire VM on that host (the marketing name for this multi-step restore is U-AIR). As this VM needs to ‘run’ from backup storage, it is not recommended for the backup image to sit on deduplicated storage served over the network. Because target deduplication devices are designed for streaming data sequentially, the random I/O pattern caused by running a VM from such storage is painfully slow. This is even stated by the partners offering deduplication storage for Veeam: HP tested Veeam with the HP StoreOnce target deduplication appliance and published a white paper on this in Business Week; see the section on Performance Considerations.

It is further worth noting that only the most recent backup typically stays as a single image in Veeam's reverse incremental backup strategy. If you are in the unfortunate position of needing to restore from a copy that is not the most recent one, performance degrades further while running the temporary VM from backup storage, as a lot of random I/O must happen at the back end.

Even after you have patiently waited for the VM to start up from backup storage, the application administrator needs to figure out how to restore the required objects. If the object is not in the currently mounted backup image, he/she has to send another ticket to the Veeam administrator to mount a different backup image on a temporary VM. This saga continues until the application administrator finds the correct object. What a pain!

There you have it. On one side you have scalability and backup performance issues when using Veeam's deduplication. On the other, you have poor recovery performance and usability issues when using a target deduplication appliance with Veeam. This is the deduplication dilemma!

The good news is that target deduplication devices work well with NetBackup and Backup Exec. Both products provide user interfaces for application administrators so that they can self-serve their recovery needs. At the same time, VM backup and recovery remains agentless. V-Ray-powered NetBackup and Backup Exec can stream the actual object from the backup; there is no need to mount it using a temporary VM.

NetBackup domain and vSphere domain

2. The resemblance is uncanny

When you took your first class on VMware vSphere, you would have noticed that the VMware platform is based on a three-tier architecture. It is quite easy to learn NetBackup if you are already a certified VMware professional.

You have virtual machines; these are the lifeblood of the organization. This is where your applications run. Multiple virtual machines are hosted by ESXi hosts, and multiple ESXi hosts are managed by a vCenter Server. That is how scalability is achieved and how vSphere became an enterprise platform.

NetBackup pioneered this model more than a decade ago. It features three tiers. At the lowest level are NetBackup clients. Multiple clients may be protected by a NetBackup media server, and multiple media servers are managed by a NetBackup master server.

NetBackup and vSphere: Architecture

NetBackup clients can be physical systems (a Windows PC, a Mac, a UNIX system hosting an Oracle database, etc.) sending backup streams to a media server. In a virtual environment, the client can be a virtual machine or a physical system that can read data from the VMware datastore. It is important to remember that your production virtual machines themselves do not stream backups; that operation is offloaded to a dedicated VM or physical system known as the VMware backup host. Thus NetBackup provides agentless backups for your virtual machines.

Now let us look at the media server. In our architectural comparison, a media server in the NetBackup domain corresponds to an ESXi host in the vSphere domain. Just as ESXi hosts have storage connected for serving virtual machines, a media server has storage attached for serving backup clients. The storage used by ESXi hosts is referred to as the datastore, or primary storage; it is on primary storage that your production virtual machines and applications live. The storage attached to media servers is known as secondary storage, or backup storage; it is used for storing the backups.

You know that ESXi systems support multiple kinds of datastores: NFS datastores and VMFS datastores. You also know that VMFS can sit on direct-attached, Fibre Channel SAN-attached, or iSCSI SAN-attached storage. Similarly, a media server can have various kinds of secondary storage attached: plain disk storage, capacity-managed disk storage, deduplicated disk storage, or even a tape library. The disk storage may be mounted directly on the media server or served from a dedicated storage server.

We know that multiple ESXi hosts can share the same datastore. Similarly, multiple media servers can share a storage server or tape library. And just as VMware DRS places VMs on the least loaded ESXi hosts, backup jobs are load-balanced across media servers.
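As a rough sketch of that DRS-like idea (the names and the selection rule are illustrative, not NetBackup internals), the master server can simply route each new job to the least busy media server:

```python
media_servers = {"media1": 2, "media2": 0, "media3": 1}  # active job counts

def assign(client: str) -> str:
    """Send the next backup job to the least busy media server."""
    target = min(media_servers, key=media_servers.get)
    media_servers[target] += 1
    return target

for vm in ("web01", "db01", "app01"):
    print(vm, "->", assign(vm))
# web01 -> media2, db01 -> media2, app01 -> media3 (counts end at 2, 2, 2)
```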

We know that vCenter is really the center of vSphere; it is the management control station. Similarly, the NetBackup master server is the center of NetBackup. Just like vCenter, the master server hosts a central database and manages data protection for the entire backup environment.

In a vSphere domain, the ESXi hypervisor and VMs coexist on a physical system. In a NetBackup domain, the media server and clients are almost always on different physical systems, with some exceptions to this rule.

vCenter is generally a separate system in enterprise environments; for smaller environments, it could also be a VM on an ESXi host. Similarly, the NetBackup master server is a separate system in large environments, but it may also coexist with a media server.

It is worth mentioning that NetBackup also has a fourth tier, called NetBackup OpsCenter. OpsCenter can manage and report on a number of NetBackup domains served by different master servers. This layer gives NetBackup even better scalability. You may have data centers across the globe, with a NetBackup master server at each data center managing its own media servers that protect the clients. All these master servers report into OpsCenter, which acts like a super command and control center. By logging into this central OpsCenter dashboard, you get a single-pane-of-glass view of the entire data protection infrastructure.

For a very crude comparison, think about vCenter Linked Mode, which lets you view multiple vCenter instances from a single vSphere Client GUI. OpsCenter is similar in spirit but much more capable: it is a standalone system with its own database for management, monitoring, and reporting tasks, whereas Linked Mode is more or less glue that makes all instances of vCenter visible from a single vSphere Client.

That is it for today! We will move on to the details of each of these three layers in subsequent blogs.

Next: NetBackup Master Server vs. VMware vCenter Server

Back to NetBackup 101 for VMware Professionals main page