Performance Bottleneck on the Move

31 May

I had the most interesting conversation recently with an industry friend.  We were discussing the fact that most of the new array vendors (all-SSD and hybrid SSD) are building block devices, supporting FC or iSCSI.  Why, you may wonder, is that a topic of considerable interest?  Recently, there has been increasing interest in using NFS as a storage protocol to support VMware.  There are a number of reasons for this, including ease of use and management, in some cases better performance, and better scalability.  NFS has seen some adoption, but the release of vSphere 5, combined with 10GbE, was expected to be the tipping point at which many more organizations would move to NFS.

To put this in perspective, consider the fact that NFS avoids many of the resource contention issues common to block devices supporting VMware.  Of course, the block vendors' response to these performance bottlenecks has been: put some SSD storage in as a tier or cache and call it a day.  So does this mean that, if it were affordable, organizations would adopt all-SSD or hybrid SSD arrays to get better performance rather than use actual design skills to eliminate the bottleneck or move it where it isn't bothering anyone?

I don't have the answer, but I get the sense that if there is money and a brute-force way to solve a problem, why bother trying to architect your way out?  Today, there are a lot of moving parts when it comes to storage.  Confusing messages are streaming at us faster than we can process them.  New technology and new paradigms often do that, but there is a way to push back.  Before deciding on a technology or a product, make sure you understand what you are going to get, both good and bad.  There are always tradeoffs, and it is more critical than ever to know them upfront, before an investment is made.

Data Management Functionality on the Move

30 Dec

First, there was VERITAS Volume Manager, which changed how storage is managed. It enabled multiple arrays to be managed as one while providing value-added functionality such as replication (VVR). Over time, much of this desired functionality, such as replication and volume management, along with new features such as snapshots and thin provisioning, was deployed on the storage controller itself. Intelligent storage arrays came on the market that could "slice and dice". Great benefit arose from these advancements, but so did great challenges. Managing storage became easy because you could do everything through a single pane of glass, but managing performance and capacity became more complicated because the number of variables to consider increased dramatically. Recently, with the adoption of virtualization, VMware in particular, the pendulum has begun to swing in the other direction. End users are seeking storage arrays that deliver performance and reliability first and are finding data management features elsewhere. This is most visible in the most recent release of vSphere 5. The latest version of ESXi includes thin provisioning, snapshots, replication and machine recovery, volume management, a virtual storage appliance, and security. A user can now deploy any storage that delivers capacity, performance, and reliability and not worry about the data management functionality.

New solution providers have entered the market offering services traditionally delivered by storage array vendors. In the VMware ecosystem, there are three types of these vendors: 1) virtual storage appliances, 2) virtual machine replication and recovery, and 3) snapshots, backup, and replication.

  • A Virtual Storage Appliance (VSA) offers all the functionality available on a physical controller, but in a virtual machine. This virtual machine serves as a storage controller front-ending any external RAID array or direct-attached storage. Virtual machines can connect to this shared resource via iSCSI mount points.
  • Replication has been owned, from the revenue perspective, by the array vendors, but with virtualization, users once again have more options. VMware supports replication in v5 as part of its SRM offering, and other replication options are available from third-party providers. Though initially this may have little impact on the overall replication market, virtualization purists will look to these options as a way to increase the flexibility and manageability of replication as well as reduce cost and dependence on a single hardware vendor.
  • Snapshots, like replication, have primarily been delivered by the hardware platforms. It is common to take snapshots on the array and then perform backups using third-party software. New vendors on the market are changing this paradigm by offering a single, hardware-independent platform for both snapshots and backups that sits outside the primary storage system (a minimal sketch of the copy-on-write idea behind such host-side snapshots follows this list).
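
To make the snapshot point concrete, here is a minimal sketch of how a copy-on-write snapshot can live entirely in software, independent of the array. This is an illustrative toy, not any vendor's implementation; the Volume class and its block model are invented for this example.

    class Volume:
        """Toy copy-on-write volume: blocks live in a dict, and a
        snapshot saves a block's old contents only when that block is
        first overwritten after the snapshot is taken."""

        def __init__(self):
            self.blocks = {}     # block_id -> current data
            self.snapshots = []  # one delta dict per snapshot

        def snapshot(self):
            # A new snapshot costs nothing up front: an empty delta.
            self.snapshots.append({})
            return len(self.snapshots) - 1

        def write(self, block_id, data):
            # Copy-on-write: preserve the pre-write contents for every
            # snapshot that has not yet saved this block.
            for delta in self.snapshots:
                if block_id not in delta:
                    delta[block_id] = self.blocks.get(block_id)
            self.blocks[block_id] = data

        def read_snapshot(self, snap_id, block_id):
            # Prefer the preserved copy; fall back to live data if the
            # block has not changed since the snapshot was taken.
            delta = self.snapshots[snap_id]
            return delta.get(block_id, self.blocks.get(block_id))

    vol = Volume()
    vol.write("b1", "v1")
    snap = vol.snapshot()
    vol.write("b1", "v2")
    assert vol.read_snapshot(snap, "b1") == "v1"  # snapshot still sees v1
    assert vol.blocks["b1"] == "v2"               # live volume sees v2

The point of the sketch is simply that nothing in it depends on the array: any host or backup platform with access to the blocks can implement the same mechanism.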

Virtualization is not the only area where data management functionality is on the move, but it is where this alternative is easiest to implement.  Over time, managers will seek data management functionality wherever it makes the most sense for a given application.  More options will drive down the overall cost of storage services while spurring further innovation.  Another trend that will significantly impact where data management intelligence lives is cloud adoption, but that is a discussion for another time.

VMworld 2011 in Review

7 Sep

VMworld is one of the more interesting conferences I attend on an annual basis and this year was no exception.  As usual, there are three dimensions to the event: VMware’s vision, what solution providers are developing, and what end users are buying.  So let me share with you what I saw and heard:

VMware’s Vision

VMware's overall theme was "Your Cloud, Own It".  This year, there was a lot more focus on private and hybrid clouds.  For our purposes, cloud means something delivered as a service, such as compute as a service, storage as a service, or application as a service.  The keynote presentations positioned virtualization as an enabler for achieving the benefits of cloud rather than as the end goal itself.  Some specific observations:

  • Post-PC era – separation of applications from operating platforms and devices, enabling the user to access information from anywhere and from any device.
  • Application "virtualization" – allowing applications to exist independent of underlying platforms, making it simpler to deliver improvements to applications without the hassle of ensuring interoperability with every platform and device a user may have.
  • Collaboration without borders – allowing users to share information while maintaining security and access controls within the enterprise.
  • Efficiency – delivering infrastructure that is agile and scalable.

VMware's vision has yet to be fully productized; we expect to see many features in future releases of vSphere and vCenter or as standalone applications and offerings.  The products and features that were announced sent a strong message that VMware is looking to offer storage data management functions independent of the storage array vendors.  We have become accustomed to having snapshots, replication, and other data management functionality reside on the storage array.  In a way, storage arrays have ceased being a hardware solution and have become a software offering packaged with a chassis and some disks.  The release of VSA and of SRM with replication offers a viable alternative, a return to the idea that data management functionality can live on the host and may offer greater flexibility and lower cost than the alternatives.  This is not a new concept; it is actually a return to the age when VERITAS Volume Manager was the standard way to deploy storage.  I am not saying that these features, or products offering similar functionality, are as mature and sophisticated as existing offerings yet, but they do offer an alternative approach to designing and delivering storage in VMware environments.

Solution Provider’s Offerings

Solution providers represent the other dimension at the conference.  Walking through the solutions pavilion, I was bombarded with messages around efficiency, performance, and virtual desktops.  Efficiency has been discussed from the capacity optimization perspective for some time, but now the discussion has turned to efficiency as a function of performance.  The concept is old: if you can deliver the right performance without sacrificing capacity utilization, you achieve greater overall efficiency.  The leaders in pushing this message were storage vendors bringing all-SSD or hybrid SSD/HDD arrays to the market.  SSD and flash technology in general was prevalent at the conference; it is being deployed in storage array systems as well as in the host.

I/O virtualization is another approach presented at the conference.  The idea here is to pool physical resources (NICs, HBAs) and then allocate to each VM what it requires.  I/O virtualization technology can also help ensure that one VM cannot infringe on another VM's resources.  Ensuring some level of quality of service is the first step toward enabling the virtualization of mission-critical and transactional applications such as databases.
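
One common way to enforce that kind of per-VM quality of service is a token-bucket rate limiter in front of each VM's I/O path.  The sketch below is illustrative only; the class, the rates, and the per-VM shares are invented for this example, not taken from any shipping product.

    import time

    class TokenBucket:
        """Admit I/O for one VM at a sustained rate with a bounded
        burst, so no VM can consume more than its share of the pool.
        Illustrative sketch only."""

        def __init__(self, rate_mb_per_s, burst_mb):
            self.rate = rate_mb_per_s   # refill rate (MB/s)
            self.capacity = burst_mb    # maximum burst size (MB)
            self.tokens = burst_mb
            self.last = time.monotonic()

        def try_send(self, size_mb):
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if size_mb <= self.tokens:
                self.tokens -= size_mb
                return True    # I/O admitted immediately
            return False       # over quota: queue or delay this I/O

    # Carve a pooled 10 Gb/s link (~1250 MB/s) into per-VM shares.
    POOL_MB_PER_S = 1250
    shares = {"db-vm": 0.5, "web-vm": 0.3, "batch-vm": 0.2}
    buckets = {vm: TokenBucket(POOL_MB_PER_S * s, burst_mb=64)
               for vm, s in shares.items()}

The same shape of mechanism works for IOPS instead of bandwidth; only the unit the tokens represent changes.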

VDI solutions were front and center throughout the solutions pavilion and in the keynote addresses.  There is a lot of innovation in this space; developers are trying to align VDI capabilities with how workers actually use technology and information today.  As an example, we are a mobile workforce and require technology to be mobile with us.  Even solution providers not directly tied to VDI were positioning their products as ways to reduce the cost of the infrastructure, improve density, reduce downtime, and so on.  Everyone had something to help VDI cross the adoption chasm.

User’s Perspective

Then there were the users, who were there to learn more about VMware and to find solutions to their existing problems.  Many talked about needing a storage array that gave them the desired set of functionality; others were just looking for something that won't get them fired.  Some were confused by the gap between what they heard and what their environment back home demanded; others struggled to define a longer-term vision for their environment.  In the end, most end users who attended, and there were 19,000 attendees, were buying products, a good night's sleep, greater efficiency, and ease of use.  Sometimes it felt like the solution providers were trying too hard to be at the bleeding edge while the users lagged behind, just trying to stay afloat.

3 Types of Cloud

18 Aug

There are as many definitions of cloud as there are actual types of clouds in the sky.  No single definition is completely right, nor is it completely wrong.  Definitions change based on who is talking and what their objectives are.  From my current vantage point, I see both the pitfalls and the benefits of each approach to defining the cloud.  In an effort to simplify, I have identified three main categories of cloud.

Before I get into the categories, please keep in mind that at its core, cloud implies something being delivered to you as a service.

1.  Application-based clouds include services such as on-line backups, on-line archiving, email as a service, CRM as a service, and any other application that is delivered as a service often through a browser.  Some of the best known application clouds are SalesForce.com, Shutterfly, KodakGallery, and Facebook.  In all these instances, a consumer is storing information, sharing information, and interacting with others via an application designed by the provider.

2. Compute-based clouds, whether public, private, or hybrid, allow users to dynamically provision compute resources for their applications and pay only for the resources they consume over time.  Compute clouds offer the ability to instantly increase available resources when there is need, or to reduce them and pay accordingly.  For organizations with seasonality in their business cycle, departments that want to run time-limited promotional programs, or developers with new ideas who may not want to, or be able to, invest in a physical infrastructure, the compute cloud is a great way to reduce costs and improve productivity.  Compute clouds remove the initial investment barriers, allowing anyone with an idea to have the resources available to test its viability.

3. Storage-based clouds, whether public, private, or hybrid, allow users to procure storage resources for their applications.  Most subscribers to compute clouds will also subscribe to a storage cloud, but the purest form of a storage cloud is buying storage capacity for applications that reside inside the corporate data center.  Some cloud providers have claimed that their storage cloud services are enterprise class and can replace traditional storage systems, but most position their services as a secondary or tertiary tier of storage.  The most common use cases for storage cloud services are archiving, storing data off-site instead of creating tapes, and using the cloud as a backup target.  Some organizations are also beginning to use cloud storage as a way to provide access to data from multiple locations.  Accessing cloud storage can be achieved through a REST API or by leveraging a cloud gateway.  A cloud gateway is an appliance that presents the standard protocols applications are used to seeing, such as iSCSI, NFS, and CIFS, while translating to cloud protocols on the back end.  Cloud gateways also often deliver WAN optimization, encryption services, and local caching of active data.
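
To illustrate the REST-style access pattern just mentioned, here is a minimal sketch in Python.  The endpoint, bucket name, and bearer token are placeholders; real providers each define their own URL layout and authentication scheme, which is exactly the gap a cloud gateway papers over.

    import requests  # third-party HTTP client (pip install requests)

    # Placeholder endpoint and credentials -- not a real service.
    BASE = "https://storage.example.com/v1/my-bucket"
    HEADERS = {"Authorization": "Bearer <token>"}

    def put_object(name, data):
        """Store an object with a plain HTTP PUT."""
        r = requests.put(f"{BASE}/{name}", data=data, headers=HEADERS)
        r.raise_for_status()

    def get_object(name):
        """Retrieve an object with a plain HTTP GET."""
        r = requests.get(f"{BASE}/{name}", headers=HEADERS)
        r.raise_for_status()
        return r.content

    # Typical secondary-tier use: push a backup off-site, pull it back.
    put_object("backups/db-2011-08-18.dump", b"...archive bytes...")
    restored = get_object("backups/db-2011-08-18.dump")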

Cloud may mean many different things to different people.  The three main categories presented here are a high level view of the ever evolving cloud market.  Each category can be subdivided into many others, but that is a separate conversation for another time.

What are users REALLY buying?

18 Jul

I have been meeting with a variety of organizations, and one reality marketers and product managers often forget is that everyone buys something different, even if on the surface it seems like they are buying the same thing.  For example, an organization announced that it is looking for 75TB of storage to support its database environment.  It seems pretty straightforward: they need storage.  Then, when you get a little closer, you discover that what they are really buying has little to do with storage at its foundation.

So what was this organization buying?

  • The database application required some level of performance, so the storage system had to deliver the best possible performance per GB at the lowest possible price.  At some point, if all available options offered the same performance at a similar price point, the conversation became one about the perception of performance and how much is good enough.
  • The last system deployed ran without any issues, so we are buying the reliability of the brand we have had rather than the unknown of a brand we have no exposure to.
  • Operational efficiency and effectiveness – we have been using system A for the past two years, and even though the application we will be deploying is very different, there is a perceived notion that we can architect the known platform for this use case more easily than something else.
  • Support is critical – even though I supposedly have had no issues with the system I have, I want to make sure that I have the best support in my neighborhood, just in case.
  • My friend in company B just deployed a new environment and it includes product A, which means it is good and I too should buy it.
  • Company X is taking my hardware back and giving me credit, making the purchase less expensive.  I am not sure that it will deliver what I need or want, or that it will be less expensive over time, but it looks good right now.
  • I like the new product, but I don't have the resources to evaluate it, and it is too much work to do other background checks, so I will just buy a product from the company I know even if it doesn't offer what I really need or want.
  • I don't know what I need, but Vendor C just told me something interesting and I want it, so I will buy it.
  • I don't like the sales rep from company B, so I will buy from company X because their rep is nice.

There are many reasons why purchase decisions don’t seem rational or logical.  Anyone selling in this industry must be aware of who is buying, what they are buying, and why.  The best technology doesn’t always win, but a good enough product with a strong sales and marketing force can become a billion dollar enterprise.

The Dell Storage Forum

9 Jun

I just got back from the Dell Storage Forum in Orlando, affectionately known as the DSF, which sounds like a giant shoe store to me, but whatever. Plenty of bloggers are covering the DSF, so please don’t consider this entry to be a comprehensive summary of the event.  Rather, I am just sharing a handful of thoughts, in no particular order:

I got to meet Michael Dell — My first impression was that he is taller than I imagined, not that I had spent much time imagining what he looks like in person!  Michael is sort of a hero of mine.  I got involved in the industry the same way he did.  My business began in a dorm room at Yale University in 1985.  I think he started two or three years earlier at UT Austin.  As I like to tell the story, his mommy let him quit school, whereas mine would not even entertain such a thought.

BTW, I feel a strange kinship with Michael Dell.  My grandfather, the late Willie Farmer, was a big band conductor in the 20s and 30s.  His band was known as Willie Farmer and the Dell Orchestra.  (In case you don't get the reference, this was a play on the children's song "The Farmer in the Dell".)

I also got to meet Darren Thomas — He heads up the storage strategy for Dell. My first impression was that he is smarter (way smarter) than I imagined.  A lot of people in the industry are criticizing Dell for not having a comprehensive storage strategy.  While I agree that Dell lags behind EMC, NetApp, and IBM in the completeness with which they tell their storage story, I now believe that they have a solid vision and will soon disrupt several segments of the industry.  In particular:

  1. I imagine that Dell could change the game in rich media storage, especially medical imaging
  2. I believe that they will change the economics and use cases for deduplication
  3. I believe that they will add some spice to the NAS market

Replication versus Backups — Both Michael Dell and Darren Thomas talked about data replication as an alternative to traditional backups.  I agree and disagree.  Without a doubt, replicating primary data makes a world of sense, but data replication does not address all of the scenarios that can go bump in the night.  Over the years, I have seen plenty of data losses on replicated storage systems.  You have to take into consideration data corruption, software bugs, sabotage or hacks, and good old-fashioned user error.  (Note to self: this is a good topic for a future blog entry.)

The Perils of Rolling Your Own Enterprise Storage — I had occasion to meet Laz Vekiarides, who runs software engineering for EqualLogic.  It turns out that he is a pretty colorful and funny guy with no shortage of strong opinions!  I was chatting with him and Walter Wong from Carnegie Mellon.  We got on the subject of what it takes to qualify a new hard drive as a component of an enterprise storage system.  My clients are always asking me why hard drives on Newegg might cost $79 for 2TB, but a terabyte in an enterprise array might cost 20 times that much.  I liked the way that Walter described the phenomenon.  He says, "disk is cheap, but storage is expensive."  Laz's answer was a bit funnier.  I did not capture his exact words, but the gist was, "If you are going to take any ol' drive off the shelf and put it in an enterprise RAID array, you might as well save the money and just dig a rock out of the ground and shove it in there instead."  (Note to self: this would also be a good topic for further exploration in a future blog entry.)

Rice Pudding — They served rice pudding for dessert twice, once at dinner and the next day at lunch.  I heard some people complaining that there should be more variety in the desserts and speculating that they were serving leftovers.  Personally, I liked the rice pudding and was excited to see it back again at lunch.

So, that’s my take on the Dell Storage Forum.  The real question, of course, is whether Dell’s legions of salespeople will be able to articulate the depth and breadth of their storage vision.  I’m guessing not, but I remind myself that if the vendors were good at articulating everything their products could and could not do, I would be out of a job!

Learnings from SNW Spring 2011 in Santa Clara, CA

11 Apr

I have been attending SNW for many years now: first as a representative of a product company, later as an industry analyst, and this last time as an alliance manager looking to understand who the players are, what they are offering, and how they are positioned in the competitive landscape.  The theme of the conference was innovation: innovation in technology and innovation in the delivery of technology.  Here are some things I thought were innovative and interesting:

  • A global file system that manages file locking across multiple locations while keeping all data centrally located.  This is great for collaboration, file sharing, and content distribution.  Imagine you have a patch or an update that has to be sent to multiple locations.  Collaborating on a file across geographies can be difficult, and emailing the file can be network and storage intensive.  With a global file system, a single instance of the file can be placed in the file system and be available to everyone.  The file can be cached locally, but all file locking is managed centrally (a minimal sketch of this central-locking idea appears at the end of this entry).
  • A file system designed to manage disk and tape in a single namespace for active archiving applications.  Imagine having a network share that you write to that is backed by both disk and tape.  Tape enables the environment to scale incrementally at a relatively low cost, and it is also very green.  Of course, many would argue that managing tape is time consuming and opens a liability window in case something fails.  These environments actually ensure that data is safe on tape media by applying integrity checking and media health checks.  The software also creates multiple copies of data on disk, on tape, and on tape off-site for greater resiliency, and it manages the retention process.
  • Near-CDP has been available on Windows for a while, but now it is available on Linux.  What is especially interesting is near-CDP for MySQL, where you can restore at the table or object level.
  • Cloud and virtualization were on everyone's minds and lips.  You need virtualization in order to have a cloud, and many technologies fall into this space, whether they offer a new approach to storing data or to provisioning virtual machines.  The area that holds the most promise in my mind is what is referred to, at the highest of levels, as cloud orchestration: software that makes it possible for an enterprise to deploy servers and storage and provision resources on the fly, with a framework to manage the workflows.  Most of the vendors in this space are only beginning to scratch the surface of what is possible, but they are offering interesting functionality that can be useful to some, if nothing else to streamline how they provision and deprovision virtual machines.

There were other technologies of interest, including self-healing high-performance storage systems, clustered file systems, deduplication and compression bundled into primary storage arrays, Flash and SSD offerings, and cloud-enabling technologies.  It will take some time to work through all this new information and make some sense of it.  I just like to see innovation, real applicable innovation, alive and well.
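
As promised above, here is a minimal sketch of the central-locking idea behind such a global file system.  It is an illustrative toy, nothing more: the class and its interface are invented for this example, and a real system would add leases, fencing, and failure handling.

    import threading

    class CentralLockManager:
        """Toy central lock authority: data may be cached at many
        sites, but a site must win the lock for a path here before
        modifying its local copy."""

        def __init__(self):
            self._guard = threading.Lock()
            self._holders = {}  # path -> site currently holding the lock

        def acquire(self, path, site):
            with self._guard:
                if self._holders.get(path) in (None, site):
                    self._holders[path] = site
                    return True   # site may now edit its cached copy
                return False      # held elsewhere: caller waits and retries

        def release(self, path, site):
            with self._guard:
                if self._holders.get(path) == site:
                    del self._holders[path]

    # Two sites contend for the same centrally located file.
    mgr = CentralLockManager()
    assert mgr.acquire("/patches/update-1.2.bin", site="boston")
    assert not mgr.acquire("/patches/update-1.2.bin", site="tokyo")
    mgr.release("/patches/update-1.2.bin", site="boston")
    assert mgr.acquire("/patches/update-1.2.bin", site="tokyo")

The copies cached at each site stay cheap to read; only writes pay the round trip to the central authority, which is what keeps the geographies consistent.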
