Dear EMC XtremIO, why do I get a flashback of DCT PureDisk when I see you?


Once upon a time, there was a small data reduction software company called Data Center Technologies (DCT). Born in Belgium, DCT caught the eye of the storage leader of that era, Veritas Software. Veritas was looking for cheaper options to bring disk-based data reduction technologies into its tape-dominant backup portfolio, and the then-leader in data deduplication, Data Domain, was considered too expensive.

A quick side-by-side of DCT PureDisk and EMC XtremIO:

  • What is it? PureDisk: disk-based backup with data reduction. XtremIO: all-flash storage with data reduction.
  • Who bought it? PureDisk: Veritas. XtremIO: EMC.
  • Why? Veritas needed to bring disk-based backup technologies to its primarily tape-based portfolio; EMC needed to bring all-flash storage technologies to its primarily disk-based portfolio.
  • What was the rationale? DCT was a lesser-known startup with a lower price tag than the disk backup leader, Data Domain; XtremIO was a cost-effective alternative because the dominant all-flash upstart was not available for sale.
  • Who ended (ends) up dealing with it? PureDisk: Symantec. XtremIO: Dell.
  • Market reaction (complexity): PureDisk's scale-out architecture was way ahead of its time and too complex to deploy and maintain; XtremIO's scale-out architecture of rack-based servers, storage and a mess of cables is also complex to deploy and maintain.
  • Market reaction (data reduction): PureDisk used fixed-block deduplication, great for files and folders but not good for structured data; XtremIO also uses fixed-block deduplication, great in hero benchmarks but with varying mileage for real-world workloads.
  • Remediation: Veritas led with the trusted brand, NetBackup, and tucked PureDisk in behind its media servers until NetBackup itself was ready to handle deduplication natively; the final remnant of PureDisk, the 5030 appliance, was EOL'ed in December 2015. EMC is leading with the trusted brand VMAX (now all-flash) for high-performance workloads and positioning XtremIO for the next tier; the rest of that story is still developing.

Veritas acquired DCT and brought its product, PureDisk, to market as yet another option in its backup portfolio. PureDisk featured a scale-out architecture with fixed-block deduplication. The individual nodes (content routers with a metadata database engine) each stored the data segments belonging to a specific range of hashes (fingerprints). The product struggled to make a good impression among Veritas' customers, mainly for three reasons.

  • PureDisk positioning confused customers, as Veritas already had two successful backup brands: NetBackup was the king of enterprise backup, and Backup Exec was quite successful among small to midsize businesses. PureDisk had to be positioned for use cases where customers wanted to eliminate tape drives entirely.
  • PureDisk's scale-out architecture was too complex for its time. It took considerable effort to build a storage pool (repeated OS and product installation on multiple nodes, complex cabling, a hash distribution strategy and so on; a toy sketch of hash-range routing follows this list). Competitors used to mock the product installation as a rocket science project.
  • PureDisk used fixed-block deduplication, which is quite primitive as data reduction goes. While it provided decent data reduction for files and folders, it wasn't a great choice for structured data.
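
To make the hash-range routing mentioned above concrete, here is a minimal Python sketch under my own simplifying assumptions (the hash choice, segment size and node-selection rule are purely illustrative; this is not PureDisk's actual design):

```python
import hashlib

class ContentRouterPool:
    """Toy model: each node owns a slice of the fingerprint space and stores
    only the segments whose fingerprints fall into its slice."""

    def __init__(self, node_count: int):
        self.nodes = [{} for _ in range(node_count)]  # fingerprint -> segment

    def _node_for(self, fingerprint: str) -> int:
        # Partition the fingerprint space using its leading bytes
        return int(fingerprint[:8], 16) % len(self.nodes)

    def store(self, segment: bytes) -> str:
        fingerprint = hashlib.sha256(segment).hexdigest()
        node = self.nodes[self._node_for(fingerprint)]
        node.setdefault(fingerprint, segment)  # dedupe: keep a single copy per fingerprint
        return fingerprint

pool = ContentRouterPool(node_count=4)
ref = pool.store(b"a 128 KB backup segment would go here")
```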

While the product team was trying to figure things out, something unexpected happened: Veritas decided to merge with Symantec. The team still had to plough through its data reduction strategy while the ship's direction was yet to be set. The team felt it was better to have the data reduction layer stand behind well-respected product brands (especially NetBackup) and tuck in PureDisk's data reduction technology.

It was not as easy as hoped. PureDisk's fixed-block dedupe could not be used as-is to support all the application types that NetBackup supported. The engineering team innovated with a hybrid approach in which the backup client could 'see the data stream' and divide it exactly at logical boundaries. The stream is then fed into the deduplication engine so that individual objects are identified and fingerprinted even when they come from a structured data source. This hybrid approach (later branded as 'intelligent deduplication') proved to be a decent makeover. Although the effort started early (NetBackup 6.5 in 2007), it took a number of years to perfect the intelligent deduplication recipe for the most common applications (NetBackup 7.5 in 2012).
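
As a rough illustration of why boundary-aware chunking matters (purely a sketch under my own assumptions, not NetBackup's implementation), compare fixed-block cuts with cuts made at logical object boundaries supplied by a stream-aware client:

```python
import hashlib

def fixed_block_chunks(stream: bytes, block_size: int = 131072):
    """Fixed-block chunking: a small shift early in the stream changes every
    later block, so duplicates in structured data are easily missed."""
    return [stream[i:i + block_size] for i in range(0, len(stream), block_size)]

def boundary_aligned_chunks(stream: bytes, boundaries):
    """Stream-aware chunking: cut at the logical object boundaries reported by
    the backup client so identical objects always produce identical chunks."""
    edges = [0] + sorted(boundaries) + [len(stream)]
    return [stream[a:b] for a, b in zip(edges, edges[1:]) if b > a]

def fingerprints(chunks):
    return {hashlib.sha256(c).hexdigest() for c in chunks}
```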

Another problem was PureDisk's scale-out architecture. While it was great on paper, it turned out to be a nightmare to install and maintain in customers' environments. That architecture needed to be dumped and a new one built from the ground up. Symantec did this with a two-pronged approach. In the first prong, it simplified deployment to a limited extent with the introduction of target deduplication appliances (branded as the NetBackup 5000 series, in 2010) that could sit behind NetBackup or Backup Exec media servers. The second prong involved re-engineering the deduplication engine by abandoning the complexities of PureDisk's scale-out design and tucking it into the media server. As the fixed dedupe engine was memory intensive, it took several years to polish. Finally, when the media-server-embedded deduplication pool crossed the capacity threshold, Symantec declared EOL for the older scale-out dedupe engine. This two-pronged approach played out over eight years! It started in 2007 (NetBackup 6.5 supported writing to a PureDisk pool through an OST plugin) and ended in December 2015 (EOL for the last appliance in the 5000 series, the 5030).

When I watch the story of XtremIO, I get a flashback of DCT PureDisk. EMC was looking for a cheaper way to bring an all-flash array (AFA) solution to market, and it set its eyes on Israel-based XtremIO. EMC brought XtremIO in as a storage product alongside its already established portfolio of VMAX, VNX and so on. XtremIO faces similar challenges with product positioning. It also relies on fixed-block deduplication for data reduction, which is not ideal for enterprise applications. It also features a scale-out design that proves complex to install and maintain. To make matters worse, Dell acquired EMC, making customers worry about which product(s) will survive in the merged company. Now the product team has decided that VMAX is the safer brand and is pushing XtremIO down in positioning. VMAX was not built for flash (just as NetBackup was not built for disk when PureDisk arrived), so EMC will eventually need to bring some of XtremIO's flash-specific artifacts into VMAX.

When PureDisk left a bad taste among customers, Symantec stuck with the NetBackup brand and built deduplication from the ground up with limited artifacts from PureDisk. The salesforce didn't even use the word 'PureDisk' in conversations. EMC is dealing with a similar situation, where the VMAX brand needs to keep its loyal customers while flash artifacts are natively integrated.

I am not saying that XtremIO's future is going to be similar to that of Symantec PureDisk. But its story so far is quite similar to that of PureDisk. Time will tell.

Disclosure: I had worked for Veritas/Symantec. However, the information in this story is based on publicly available knowledge. I currently work for Pure Storage (no relation to the PureDisk product in this story). The opinions here do not reflect those of my employer.

Software-defined storage does not have to be a rollercoaster ride


Thanks to VMware's vision of the software-defined data center (SDDC), software-defined anything is one of the leading buzzwords today. Software-defined storage (SDS) is no exception. SDS feels like a big shift in the storage world, but the good news is that the transition is much smoother than it sounds. Let us take a closer look. What is SDS? There are many vendor-specific definitions and interpretations of SDS, and industry analysts have their own versions too. So let us focus on what matters most: the characteristics and benefits generally expected from SDS. Here are the four pillars.

  1. Abstraction: Data plane and control plane are separated. In other words, storage management is decoupled from the storage itself. Customer benefit: flexibility to solve storage and management needs independently.
  2. Backend heterogeneity: Storage is served by any kind of storage from any vendor, including commodity storage. Customer benefit: freedom of choice for storage platforms; avoid lock-in.
  3. Frontend heterogeneity: Storage is served to any kind of consumer (operating systems, hypervisors, file services, etc.). Customer benefit: freedom of choice for computing platforms; avoid lock-in.
  4. Broker for storage services: SDS brokers storage services, no matter where data is placed and how it is stored, through software that translates those capabilities into storage services to meet a defined policy or SLA. Customer benefits: simplified management, storage virtualization, and value-added data services through vendor or customer innovations.

Three out of the four pillars are needed to qualify as a software-defined storage solution. Pillars 1 and 4 are must-haves; once you have those two, you need either 2 or 3. The reality is that the SDS movement started a long time ago. Let us use some examples to understand SDS implementations.
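
The qualification rule is simple enough to write down as a predicate. A toy sketch in Python (my own shorthand for the rule above, not a formal industry definition):

```python
def qualifies_as_sds(pillars: set) -> bool:
    """Pillars: 1=abstraction, 2=backend heterogeneity,
    3=frontend heterogeneity, 4=broker for storage services.
    Rule: pillars 1 and 4 are must-haves, plus at least one of 2 or 3."""
    return {1, 4} <= pillars and bool(pillars & {2, 3})

print(qualifies_as_sds({1, 2, 4}))  # True: abstraction, backend heterogeneity, broker
print(qualifies_as_sds({1, 4}))     # False: needs either pillar 2 or pillar 3
```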

Oracle Automatic Storage Management (ASM): Although Oracle seldom markets ASM as an SDS solution, it happens to be a great example of SDS. It is purpose-built for Oracle databases, it features the four pillars, and storage is entirely managed by the application owner (the Oracle DBA). Pillar 3 is questionable here: it is arguably present because the solution runs on multiple OS platforms, but since it serves just one type of workload, it is not truly delivering frontend heterogeneity.

Veritas InfoScale: Formerly known as Veritas Storage Foundation, Veritas InfoScale is perhaps the most successful heterogeneous, general-purpose SDS solution. While it is still widely in use, it is a host-based SDS solution (the pillars are built on top of the operating system) and hence not a good fit for the virtualized world.

VMware Virtual Volumes (VVOLs): VMware VVOLs are purpose-built for VMware vSphere, hence they lack pillar 3. VVOLs shine on the other three pillars, and a virtual infrastructure admin can manage everything from a single console.

Now that we have covered the characteristics of SDS, let us look at the bigger picture as an IT architect would. The great thing about SDS solutions is the interoperability to build the right solution for your workloads and solve constantly changing storage needs. You can be quite creative (and, of course, even go crazy!) with the kinds of things you can build from SDS Lego blocks.

You can deploy Oracle ASM on top of Veritas InfoScale so that DBAs could benefit from both. ASM enables Oracle DBAs to manage their storage while InfoScale brings centralized management for storage administrators.

How about that virtual server environment where Veritas InfoScale falls short? Bring storage LUNs directly into vSphere hosts for a VMFS experience where admins enjoy the benefits provided by VMware. Are virtual machine infrastructure administrators getting ready to manage storage on their own? Give them the array plugin for the VMware vSphere Web Client. Or prepare them for the VASA provider from the storage vendor to get ready for VVOLs!

The main takeaway is simply this: SDS is a blessing for IT architects, letting them solve storage puzzles elegantly. It has been here for a long time and is constantly evolving with market-inspired innovations. Transitioning to SDS is relatively smooth.

Disclaimer: The opinions here are my own. They do not reflect those of my current or previous employers.

What are my top 10 reasons for hopping onto Pure Storage?


If you had asked me about Pure Storage a year ago, I would have said that it was one of the leading flash storage array vendors in the market. Although I was not wrong, I must confess that my view then was a bit myopic. Once I got to know this company better, I couldn't resist joining its team as one of the Puritans. Here are my top 10 reasons.

  1. Reliving VERITAS (Software): Don't you remember your first crush? Do you hope to build a time machine to meet her again? VERITAS and Sun Microsystems were my crushes as I got into technology. Pure Storage reminds me of the VERITAS of the early 2000s: she was the hottest girl in the bar, and everyone wanted to partner with her because she stole the show from the aging big irons.
  2. It is all about fostering the team, not building MVPs: The company hires people with the mindset to work towards a bigger mission. Irrespective of title and role, employees know how to talk about both the business value and the technical merits of what is being built to solve unmet customer needs. There is a sense of accomplishment in where Puritans are today; at the same time there is a strong ambition for where they want to be in the future.
  3. Open workspaces: When you walk into a Pure Storage facility, you notice the open, collaborative workspaces. You cannot distinguish the spots used by university hires from those used by VPs. When a Puritan needs your attention, he or she may just come to your desk, yell across the room, instant message, or shoot Nerf darts. I am sending my letter to Santa for a good Nerf gun.
  4. Insider view matters: We all know the value of someone recommending us for a role in an organization. I am fortunate to be around people who believe in what I can bring to the table. Two Puritans who had been with VERITAS in their previous lives helped me understand Pure Storage from an insider's perspective. These individuals are thought leaders who weren't shy about taking bold risks to embrace what Pure had promised.
  5. It is not all about work, work, work: Puritans know how to have fun. Tons of pictures on social media speak for themselves. #PaintItOrange
  6. The innovation starts with software: The all-flash array (AFA) is the tangible product from Pure, but the innovation started with the core Purity Operating System that powers those arrays. Enterprise-grade reliability and performance coupled with consumer-level simplicity and efficiency make it easy to see why Pure is considered the Apple of data centers.
  7. Harnessing the power of cloud: Many big vendors in enterprise IT talk about cloud as a way to stay relevant as disruption looms. How many times have you seen the same legacy technology repackaged as a 'cloud offering'? Pure Storage used the cloud as an opportunity to redefine and merge the lines between management and support services. Pure1 is just the beginning of this innovation.
  8. Storage virtualization meets data virtualization: Storage virtualization is a way to consolidate and manage storage media to improve availability, efficiency and performance. Data virtualization takes the same paradigm up a level, where the application/data owner can create and manage copies without needing to understand where and how they are stored physically. Pure Storage's data reduction methods and space-efficient copy creation blur the line between storage and data virtualization.
  9. Nip the CDM problem in the bud: Thanks to early work from analyst firms like IDC and players like Actifio and Delphix, organizations are starting to understand the storage waste created by copy data. Legacy storage vendors have no motivation to solve the problem, as it would cannibalize high-margin revenue from spinning disks. Purity's approach of virtualizing storage and data while letting application owners manage copies from their familiar tools is powerful enough to kill copy-data sprawl at the source.
  10. Mission and drive to lead the market: Unlike many storage startups that were designed for sale to incumbents, Pure Storage is on a mission to become a mainstream player. While I understand that Pure Storage's board of directors has a fiduciary duty to act on behalf of shareholders, the visionary management team, energetic employees and ecstatic customers are likely to give Pure enough reasons to grow on its own.

Disclaimer: I am an employee of Pure Storage, Inc. My statements and opinions on this site are my own and do not necessarily represent those of Pure Storage.

Note: This post originally appeared on my LinkedIn Pulse page.


Did Rubrik make Veeam’s Modern Data Protection a bit antiquated?


Modern Data Protection™ got a trademark from Veeam. No, I am not joking; it is true! Veeam started with a focused strategy: it would do nothing but VMware VM backups. Thankfully, VMware had done most of the heavy lifting with the vStorage APIs for Data Protection (VADP), so developing a VM-only backup solution was as simple as creating a software plugin for those APIs and a storage platform for keeping the VM copies. With a good marketing engine, Veeam won the hearts of virtual machine administrators, and it paid off.

As the opportunity to reap the benefits of being a niche VM-only backup vendor started to erode (intense competition, low barrier to entry on account of VADP), Veeam is attempting to reinvent its image by exploring broader use cases like physical systems protection, availability and so on. Some of these efforts make it look like its investors are hoping for Microsoft to buy Veeam. The earlier wish to sell itself to VMware was shattered when VMware adopted EMC Avamar's storage to build its own data protection solution.

Now Rubrik is coming to market and attacking the very heart of Veeam's little playground while making Veeam's modern data protection a thing of the past. Rubrik's market entry is also through VMware backups using the vStorage APIs, but with a better storage backend that can scale out.

Both Veeam and Rubrik have two high level tiers. The frontend tier connects to vSphere through VMware APIs. It discovers and streams virtual machine data. Then there is a backend storage tier where virtual machine data is stored.

For Veeam, the frontend is a standalone backup server and, optionally, backup proxies. The proxies (thanks to VMware hot-add) enable a limited level of scale-out for the frontend, but this approach leeches resources from production and increases complexity. The backend is one or more backup repositories. There is nothing special about the repository; it is a plain file system. Although Veeam claims to have deduplication built in, it is perhaps the most primitive in the industry and works only across virtual machines from the same backup job.

Rubrik is a scale-out solution where the frontend and backend are fused together from the user's perspective. You buy Rubrik bricks, where each brick consists of four nodes. These nodes provide both the compute and the storage: the frontend streams virtual machines from vSphere via NBD or SAN transport (kudos to Rubrik for ditching hot-add!), while the backend is a cluster file system that spans nodes and bricks. Rubrik claims to have global deduplication across its entire cluster file system namespace.
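
To make the deduplication-scope difference concrete, here is a small Python sketch (my own illustration, not either vendor's code) showing how a per-job fingerprint index misses duplicates that a global index catches:

```python
import hashlib

def segments_stored(jobs: dict, global_scope: bool) -> int:
    """jobs maps a backup-job name to the list of segments it streams in.
    Returns how many unique segments actually land on backup storage."""
    stored = 0
    global_index = set()
    for name, segments in jobs.items():
        index = global_index if global_scope else set()  # per-job index resets for every job
        for seg in segments:
            fp = hashlib.sha256(seg).hexdigest()
            if fp not in index:
                index.add(fp)
                stored += 1
    return stored

jobs = {"job-A": [b"os-image", b"app-data"], "job-B": [b"os-image", b"db-data"]}
print(segments_stored(jobs, global_scope=False))  # 4: the duplicate os-image is missed
print(segments_stored(jobs, global_scope=True))   # 3: os-image is stored only once
```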

Historically, the real innovation from Veeam was the commercial success of powering on virtual machines directly from backup storage. Veeam may list several other innovations in its belt (it may claim, for example, that it 'invented' agentless backups, but that was actually done by VMware in its APIs), but exporting VMs directly from backup is something every other vendor followed afterwards, so kudos go to Veeam on that one. However, this innovation may backfire by helping Veeam customers transition to Rubrik seamlessly.

Veeam customers are easy targets for Rubrik for a few reasons.

  • One of the cornerstones of Veeam's foundation is its dependency on the vStorage APIs from VMware; that is not a differentiator, because all VMware partners have access to those APIs. Unlike other backup vendors, Veeam didn't focus on building application awareness and granular quiescence until late in the game.
  • Veeam is popular in smaller IT shops and shadow projects within large IT environments. It is a handy backup tool, but it is not perceived as a critical piece in meeting regulatory and compliance needs. It has been marketed towards virtual machine administrators, so higher-level buying centers do not have much visibility. That adversely affects Veeam's 'stickiness' in an account.
  • Switching from one backup application to another has historically been a major undertaking, but that is not really the case if customers want to switch away from Veeam. In earlier days, IT shops needed to stand up both solutions until all the backup images from the old solution hit their expiration dates, or develop strategies to migrate old backups into the new system, a costly affair. When the source is Veeam, with 14 recovery points per VM by default, you could build workflows that spin up each VM backup in a sandbox and let the new solution back it up as if it were a production copy (a rough sketch of such a workflow follows this list). Rubrik may want to build a small migration tool for this.
  • Unlike Veeam, which stitched in support for other hypervisors and physical systems after the fact, Rubrik has architected its platform to accommodate future needs. That design may appeal to customers looking to diversify into other hypervisors and containers.
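
The migration workflow referenced in the list above could look something like this. It is pure pseudocode in Python form; latest_restore_point, instant_restore, protect and teardown are hypothetical placeholders, not real Veeam or Rubrik APIs:

```python
def migrate_recovery_points(vms, old_backup, new_backup, sandbox):
    """For each VM, power on its most recent backup copy in an isolated sandbox
    and let the new backup solution protect it as if it were production."""
    for vm in vms:
        restore_point = old_backup.latest_restore_point(vm)  # hypothetical call
        clone = sandbox.instant_restore(restore_point)       # boot the VM straight from backup storage
        try:
            new_backup.protect(clone)                        # seed the first full into the new platform
        finally:
            sandbox.teardown(clone)                          # discard the temporary copy
```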

The fine print is that Rubrik is yet to be proven. If the actual product delivers on its promises, it may well have antiquated Veeam. The latter may become a good case study for business schools on the risk of building a product that depends too heavily on someone else's technology.

Thanks to #VFD5 TechFieldDay for sharing Rubrik’s story. You can watch it here: Rubrik Technology Deep Dive

Disclaimer: I work for Veritas/Symantec; the opinions here are my own.

Getting to know the Network Block Device transport in VMware vStorage APIs for Data Protection

When you back up a VMware vSphere virtual machine using the vStorage APIs for Data Protection (VADP), one of the common ways to transmit data from the VMware datastore to the backup server is the Network Block Device (NBD) transport. NBD is modeled on the Linux network block device concept: VMkernel makes the snapshot of the virtual machine visible to the backup server as if the snapshot were a block device on the network. While NBD is quite popular and easy to implement, it is also the least understood transport mechanism in VADP-based backups.

NBD is based on VMware's Network File Copy (NFC) protocol. NFC uses a VMkernel port for network traffic. As you already know, VMkernel ports may also be used by other services like host management, vMotion, Fault Tolerance logging, vSphere Replication, NFS, iSCSI and so on. It is recommended to create dedicated VMkernel ports attached to dedicated network adapters if you are using a bandwidth-intensive service; for example, it is highly recommended to dedicate an adapter for Fault Tolerance logging.

Naturally, the first logical step to drive higher throughput from NBD backups would be to dedicate a bigger pipe to the VADP NBD transport. Many vendors list this as a best practice, but that alone won't give you performance and scale.

Let me explain this with an example. Assume you have a backup server streaming six virtual machines from an ESXi host using NBD transport sessions, and both the host and the backup server are equipped with 10Gb adapters. In general, a single 10Gb pipe can deliver around 600 MB/sec, so you would expect each virtual machine to be backed up at around 100 MB/sec (600 MB/sec divided across 6 streams), right? In reality, each stream gets a much smaller share of the bandwidth because VMkernel automatically caps each session for stability. Here are the actual results from a benchmark we conducted, measuring performance as we increased the number of streams.

Figure: NBD transport throughput vs. number of backup streams

As you can see, by the time the number of streams reaches 4 (that is, four virtual machines being backed up simultaneously), each stream delivers just 55 MB/sec and the overall throughput is 220 MB/sec, nowhere near the available bandwidth of 600 MB/sec.
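
For reference, here is the back-of-the-envelope arithmetic behind those numbers, using only the figures quoted above:

```python
link_capacity = 600       # MB/sec usable on a 10Gb pipe, per the estimate above
streams = 4               # simultaneous NBD sessions in the benchmark
observed_per_stream = 55  # MB/sec measured per session

expected_per_stream = link_capacity / streams  # 150 MB/sec if bandwidth were the only limit
aggregate = streams * observed_per_stream      # 220 MB/sec actually delivered
utilization = aggregate / link_capacity        # ~0.37, i.e. only ~37% of the pipe is used
print(expected_per_stream, aggregate, round(utilization, 2))
```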

The reasoning behind this kind of bandwidth throttling is straightforward: you don't want VMkernel strained by serving copy operations when it has better things to do. VMkernel's primary function is to orchestrate VM processes. VMware engineering (VMware was also a partner in this benchmark; we submitted the full story as a paper for VMworld 2012) confirmed this behavior as normal.

This naturally makes NBD look like a second-class citizen in the backup transport world, doesn't it? The good news is that there is a way to solve the problem: instead of backing up too many virtual machines from the same host, configure your backup policies/jobs to distribute the load over multiple hosts. Unfortunately, in environments with hundreds of hosts and thousands of virtual machines, this is difficult to do manually. Veritas NetBackup provides VMware Resource Limits as part of its Intelligent Policies for VMware backup, where you can limit the number of jobs at various VMware vSphere object levels, which is quite handy in these situations. For example, I ask customers to limit the number of jobs per ESXi host to 4 or fewer using such intelligent policies and resource limit settings. NetBackup can then scale out its throughput by tapping NBD connections from multiple hosts to keep its available pipe fully utilized while limiting the impact of NBD backups on production ESXi hosts.
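
The load-spreading idea is easy to sketch. The toy scheduler below (my own illustration, not NetBackup's resource-limit implementation) caps concurrent NBD sessions per ESXi host while keeping every host busy:

```python
def schedule(vms_by_host: dict, per_host_limit: int = 4):
    """Yield successive waves of (host, vm) backup jobs, never exceeding the
    per-host NBD session limit within a wave."""
    pending = {host: list(vms) for host, vms in vms_by_host.items()}
    while any(pending.values()):
        wave = []
        for host, vms in pending.items():
            take, pending[host] = vms[:per_host_limit], vms[per_host_limit:]
            wave.extend((host, vm) for vm in take)
        yield wave

hosts = {"esx01": [f"vm{i:02d}" for i in range(10)],
         "esx02": [f"vm{i:02d}" for i in range(10, 16)]}
for wave in schedule(hosts):
    print(wave)  # at most 4 simultaneous streams per host, both hosts kept busy
```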

Thus Veritas NetBackup elevates NBD to first-class status for protecting large environments, even when the backend storage isn't on a Fibre Channel SAN. For example, NetBackup's NBD has proven its scale on NetApp FlexPod, VCE Vblock, Nutanix and VMware EVO (VSAN). Customers can enjoy the simplicity of NBD and the scale-out performance of NetBackup on these converged platforms.


Taking VMware vSphere Storage APIs for Data Protection to the Limit: Pushing the Backup Performance Envelope; Rasheed, Winter et al. VMworld 2012

Full presentation on Pushing the Backup Performance Envelope

Checkmate Amazon! Google Nearline may be the Gmail of cold storage

April Fools' Day 2004: Google announced Gmail, a free search-based e-mail service with a storage capacity of 1 gigabyte per user [1]. The capacity was unbelievably high compared to other free Internet e-mail providers of the time; Hotmail and Yahoo! were giving 2-4 MB per user. The days when inbox management was a daily chore were over. The initial press release from the search giant differentiated its offering from others on three S's: Search, Storage and Speed.


I wish Google had waited a couple more weeks to announce Google Cloud Storage Nearline. It would have been fun to see it announced on April Fools' Day. Nearline is to a business today what Gmail was to a consumer a decade ago.

Search: Google doesn't talk about search in the context of Nearline, but nuts don't fall far from the tree. Google wants your business to dump all of its cold data into Google's cloud. It has the resources to adopt a loss-leader strategy that helps you keep data at lower cost in its cloud; later you may be offered data mining and analytics as a service, where Google would really shine and make money. The economies of scale will benefit both Google and you. Does anyone remember the search experience in Hotmail a decade ago?

Storage: Sorry, you aren't getting the storage for free, but it is cheap: a penny per gigabyte per month for data at rest. Instead of declaring a price war with Amazon's Glacier, Google decided to match its pricing while differentiating itself radically on simplicity and access. Unlike Amazon, Google uses the same access method for cold and standard storage, eliminating operational overhead and programming needs.
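
A quick back-of-the-envelope using that list price (at-rest capacity only; any retrieval or operation charges are ignored, and the 500 TB figure is just a hypothetical footprint):

```python
price_per_gb_month = 0.01   # USD per GB per month, the Nearline price quoted above
capacity_gb = 500 * 1024    # a hypothetical 500 TB of cold data

monthly_cost = capacity_gb * price_per_gb_month  # $5,120 per month
annual_cost = monthly_cost * 12                  # $61,440 per year
print(monthly_cost, annual_cost)
```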

Speed: Amazon went old school with Glacier. It is designed to look and feel like tape; it takes a few days to retrieve data, analogous to getting tapes shipped to you from an offsite location. This is where Google directly poked Amazon: Google is offering an average 3-second response time for data requests! Do you recall how Gmail's JavaScript-based interface made Hotmail look like a turtle, reloading entire web pages for each action?

Let's come back to April Fools' Day again. It happens to be the day after World Backup Day, and cold storage today is backup for most businesses. One of the strategic partnerships Google made for the Nearline launch is impeccable: according to Veritas/Symantec, NetBackup manages half of the world's enterprise data. It is not surprising that Google wanted Veritas on the Nearline bandwagon [2]. The best data pump for business data is NetBackup, and that relationship is a strategic win for Google right off the bat.

  1. Google Gets the Message, Launches Gmail
  2. Access, Agility, Availability: NetBackup and Google Cloud Storage Nearline

Dear Competitor “C”, all that snaps are not snapshots!

Benchmarking for truth

Common sense tells us that creating recovery points for applications from storage snapshots should be faster than traditional methods of backing up the entire dataset. The storage solutions in the market have matured to provide space-efficient recovery points through snapshots. A backup and recovery solution can make use of storage snapshots to create recovery points and provide additional value like information lifecycle management and content indexing.

The faster you create a recovery point, the better your chances of achieving aggressive recovery point objectives (RPOs). For example, if it takes 10 minutes to create a recovery point, the best possible RPO is also 10 minutes. Storage snapshots are great candidates for achieving such aggressive recovery points, which is why industry analysts vouch for storage snapshot integration in backup and recovery solutions.
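
Put as a small rule of thumb (my own framing of the example above, not a formal definition): the best achievable RPO is bounded below by whichever is longer, the time to create a recovery point or the interval between recovery points.

```python
def best_achievable_rpo(creation_minutes: float, interval_minutes: float) -> float:
    """You cannot recover to a point newer than the last completed recovery point,
    so the worst-case data-loss window is bounded by both factors."""
    return max(creation_minutes, interval_minutes)

print(best_achievable_rpo(creation_minutes=10, interval_minutes=10))  # 10, as in the example above
print(best_achievable_rpo(creation_minutes=0.5, interval_minutes=5))  # 5: fast snapshots shift the limit to scheduling
```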

However, a competitor to Symantec NetBackup (let us call this vendor Competitor 'C') had been fooling industry analysts for a few years. Competitor 'C' positions itself as a 'leader' in storage snapshot integration and received some brownie points for ticking the checkboxes on supporting multiple storage vendors. Symantec commissioned an independent third-party benchmarking company to validate the truth of this vendor's capability. The results were shocking.

Check out my official Symantec blog for the gory details.

Disclaimer: The blogs here are reflections of my own opinions.

VMware EVO: The KFC of SDDC


VMware EVO is bringing to software-defined data centers the same type of business model that Kentucky Fried Chicken brought to restaurants decades ago. VMware is hungry to grow and is expanding its business into new territories. Colonel Sanders's revolutionary vision of selling his chicken recipe and brand through a franchise model is now coming to IT infrastructure as ready-to-eat value meals.

Most of the press reports and analyst blogs are focused on VMware's arrival in the converged infrastructure market. Of course, vendors like Nutanix and SimpliVity will lose sleep now that the 800-pound gorilla has set its eyes on their market. However, VMware's strategy goes much deeper than taking the converged infrastructure market from upstarts; it is a bold attempt to disrupt the business model of selling IT infrastructure stacks while keeping public cloud providers away from enterprise IT shops.

Bargaining power of the supplier: Have you noticed VMware's commanding power in the EVO specifications? Partners like Dell and EMC are simply franchisees of VMware's infrastructure recipe and brand. It is no secret that traditional servers and storage are on the brink of disruption because buyers won't pay a premium for brand names much longer. It is time for them to let go of individuality and become the delivery model for a prescriptive architecture (the franchise model) from a stronger supplier in the value chain.

Software is now the king; no more OEM: In the old world, where hardware vendors owned brand power and distribution chains, software vendors had to make OEM deals to get their solutions to market in those hardware vehicles. Now the power is shifting to software: the software vendor prescribes (a softened term that actually means 'dictates') how infrastructure stacks should be built.

Short-term strategy, milk the converged infrastructure market: This is the most obvious hint VMware has given; reporters, bloggers and analysts have picked up the message. As more CIOs look to reduce capital and operational costs, the demand for converged systems is growing rapidly. Even the primitive assembled-to-order solutions from VCE and NetApp-Cisco are milking the current demand for simplified IT infrastructure stacks, and Nutanix leads the pack in the newer and better hyper-convergence wave. VMware's entry into this market validates that convergence is a key trend in modern IT.

Long-term strategy, own data center infrastructure end-to-end while competing with public clouds: Two of the three key pillars of VMware's strategy are enabling software-defined data centers and delivering hybrid clouds. Although SDDC and hybrid cloud may look like two separate missions, the combination is what is needed to keep Amazon and other public cloud providers from taking workloads away from IT shops. The core of VMware's business is selling infrastructure solutions for on-prem data centers. Although VMware positions itself as the enabler of service providers, it understands that the bargaining power of customers would continue to stay low if organizations stick to on-prem solutions. This is where the SDDC strategy fits: by commoditizing infrastructure components (compute, storage and networking) and shifting the differentiation to infrastructure management and service delivery, VMware wants to become the commander in control of SDDCs (just as Intel processors dictated the direction of PCs over the last two decades). EVO happens to be the SDDC recipe it wants to franchise to partners so that customers can taste the same SDDC no matter who their preferred hardware vendor is. Thus EVO is the KFC of SDDC. It is not there just as a Nutanix killer; VMware also wants to take share from Cisco (Cisco UCS is nearly #1 in the server market, and Cisco is #1 in networking infrastructure), EMC storage (let us keep the money in the family; the old man's hardware identity has its days numbered) and other traditional infrastructure players. At the same time, VMware wants to transform vCloud Air (the rebranded vCloud Hybrid Service) into the app store for EVO-based SDDCs to host data services in the cloud. It is a clever plan to keep selling to enterprises and keep them away from the likes of Amazon. Well played, VMware!

So what will be the competitive response from Amazon and other public cloud providers? Amazon has the resources to build a ready-to-eat private 'Fire Cloud' for enterprises that can act as the gateway to AWS. Until now, Amazon has focused mainly on on-prem storage solutions that extend to AWS; we can certainly expect the king of public clouds to do something more. It is not a question of 'if' but of 'when'.

EMC’s Hardware Defined Control Center vs. VMware’s Software Defined Data Center

EMC trying to put the clock back on the software-defined storage movement

EMC's storage division appears to be in Old Yeller mode. It knows that customers will eventually stop paying a premium for branded storage. The bullets to put branded storage out of its misery are coming from the software-defined storage movement led by its own stepchild, VMware. But the old man is still clever, pretending to hang out with the cool kids to stay relevant while trying to survive as long as there are CIOs willing to pay a premium for storage with a label.

Software-defined storage is all about building storage and data services on top of commodity hardware: no more vendor-locked storage platforms on proprietary hardware. The movement offers high performance at lower cost by bringing storage closer to compute, and capacity and performance become two independent vectors.

TwinStrata follows that simplicity model and has helped customers extend the life of existing investments with true software solutions. Its data service layer offers storage tiering where the last tier can be a public cloud. EMC wants the market to believe that its acquisition of TwinStrata is an attempt to embrace the software-defined storage movement, but the current execution plan is a little backward. EMC's plan is a bolted-on integration of TwinStrata IP on top of the legacy VMAX storage platform. In other words, EMC wants to keep the 'software-defined' IP close to its proprietary array itself. The goal, of course, is to prolong the life of VMAX in the software-defined world. While this defeats the rationale behind the software-defined storage movement, it may be the last lever available to pull the clock back a little.

Hopefully there is another project where EMC will seriously consider building a true software-defined storage solution from the acquired IP, without the deadweight of legacy platforms. Perhaps transform ViPR from vaporware into something that really rides the wave of the software-defined movement?

Is the perfect storm headed toward purpose-built storage systems?

Is the era of storage systems (arrays) facing disruption? Do the sellers of expensive monolithic chassis need to find new ways to make money? Do the investors betting on newer storage array startups need to cash in now? Although it may feel unlikely in the near term, the perfect storm may not be that far away.


Let us think about how storage arrays came to solve problems for IT. There were two distinct transformations in this industry:

More information in Symantec Connect’s Storage and Availability blog