
SAN and Storage AntiPatterns

I've been writing a lot of blog posts over the years about various anti-patterns in data architecture.  A "pattern" is simply a solution to a common problem.  Anti-patterns are the problems associated with common solutions.  I see the same mistakes repeated using the same faulty logic, and then I see those solutions touted as "best practices".  I think the most egregious anti-patterns in IT these days are in the storage realm.  These storage anti-patterns apply mostly to systems where a performant IO subsystem is critical.  If you are just using a SAN to consolidate storage for some development VMs, then these anti-patterns probably don't apply to you. 

AntiPatterns usually have cool-sounding names like The Big Ball of Mud (a system with no discernible structure), Dependency Hell (think of DLLs), and Spaghetti Code (too many GOTOs).  I'm not clever enough to give cool names to these storage anti-patterns; someone else can do that.  I just want to point out the problems.  

The SAN Will Save Us Money

The usual argument from the SAN vendors is that SAN storage is shared, so multiple hosts (servers) can draw from a common pool. In turn, the utilization per disk is higher and fewer disks are required in total. Somehow, the "saving money" argument overlooks the minor detail that instead of buying 1000 disks at $200 each for all of our servers ($200K), we are now buying 100 disks at $2000 each (still $200K).  Where's the savings?  And we all know more spindles are better than fewer.  

Let's think about it differently.  I'm a SAN vendor and I want to sell my product to you, but I need a value proposition.  Immediately I can think of two arguments a SAN vendor might make:

  1. "Performance is going to be much better with our SAN than with your direct attached storage (DAS)."  I've never really heard this argument made.  So you can assume that a head-to-head performance test will show that a SAN cannot perform better than DAS.  Why?  Because DAS is physically located right there with the machine, I don't need to traverse the fabric to get my bits.  
  2. "You'll save a lot of money over time because your disk utilization will be better".  That's highly doubtful, as you'll see...keep reading.  The fact is, the SAN is a huge capital outlay.  

Assuming the cost part is not something that deters your organization, the problem is that all the main elements of the SAN argument are really bad for database disk performance. Storage performance is about distributing IO over very many disks and IO channels, which means the individual disks should be relatively inexpensive.  Remember that RAID originally was an acronym for "Redundant Array of Inexpensive Disks".  

Your SAN is really just a "computer" sitting in front of a whole bunch of disks.  It's kinda like a Windows file server.  The difference is the SAN is tuned a little more and the underlying networking hardware is much more robust than 100BaseT.  The SAN "computer" is really just a location for the SAN vendor to install their "software" that doles out the storage.  That "software" on that "computer" can often be a huge bottleneck.  The software is really just trying to overcome poor IO performance by fluidly moving hot spots around to different LUNs, and other "tricks".  We wouldn't need these tricks if we just scoped out DAS a little smarter.  Direct attached storage doesn't have any of that software overhead.  It's merely electrical signals moving across the wire.  

The SAN vendor will almost always configure the storage system according to the principles of their value-add arguments, which more or less guarantees bad database performance.  Further, the vendors will tell you that it is easy to expand storage.  In fact, some vendors can dial in to your SAN and allocate more storage within minutes.  This is because the SAN ships with all of the disk bays populated, just "turned off".  They can do this because disk is cheap.  Now you call for some extra storage for a few new VMs and they enable a few inexpensive spindles and charge you several times what DAS would cost.  I've worked at many places where I've made a request for additional SAN space and was told, "No, it's too expensive."  Well gee, that seems to be contrary to what the SAN vendor told you a year ago.  And don't forget, your new SAN will require a dedicated resource to administer it.  

Cutting Your Storage Footprint As Your Primary Goal

This is a fine objective if you just want to centralize the storage of your non-mission critical VMs and dev boxes.  It's a terrible objective for your mission critical, high-performing database server.  I see data architects buy into this constantly.  They read about storage technologies that cut physical storage costs and they go full throttle for that goal.  They try to apply the latest buzzwords in storage to their data infrastructure.  Deduplication...compression...thin provisioned storage.  These are all topics I've written about, and they are performance killers for your RDBMS.   In many cases we data architects denormalize and duplicate data strategically for performance, yet we turn around, forget those decisions, and try to save money on physical storage.  It just doesn't make sense.   

I worked on a project where I/O was of the utmost concern.  We changed data types from INTs to SMALLINTs to squeeze a few extra rows onto each page.  We changed DATETIMEs to SMALLDATETIMEs and adjusted FILLFACTOR and PAD_INDEX settings, all to get more rows on a page.  Anything to squeeze extra I/O performance out of the system was attempted.  Why would I turn around and introduce a storage reduction tool?  Yes, the storage footprint may shrink, but processing overhead will ultimately increase as data needs to be recreated (via decompression or re-duplication) on the fly.  Don't be fooled by vendors who say these technologies integrate well with your RDBMS.  
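To make that concrete, here is a minimal sketch of the kind of changes I mean.  The table, column, and index names are hypothetical; the point is the narrower data types and the page-packing settings:

  -- Hypothetical fact table: narrower types mean more rows per 8KB page.
  CREATE TABLE dbo.OrderFact (
      OrderID   INT           NOT NULL,
      StoreID   SMALLINT      NOT NULL,  -- was INT: 2 bytes instead of 4
      Quantity  SMALLINT      NOT NULL,  -- was INT
      OrderDate SMALLDATETIME NOT NULL   -- was DATETIME: 4 bytes instead of 8
  );

  -- Pack pages as full as the workload allows: more rows per page, fewer reads.
  CREATE CLUSTERED INDEX CIX_OrderFact
      ON dbo.OrderFact (OrderID)
      WITH (FILLFACTOR = 100, PAD_INDEX = OFF);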

Storage Virtualization

Storage virtualization is the process of abstracting multiple storage devices into a single pool of storage.  This means that just-in-time provisioning and data movement between devices and tiers can occur in near real-time with just a few settings changes by the SAN admin.  There is no industry-standard definition (that I'm aware of) of what "storage virtualization" encompasses.  But there are a few things that generally fall under the "storage virtualization" umbrella that are performance nightmares:  

  1. Fan-in/fan-out ratios on critical hosts that funnel too many servers through too few storage ports.
  2. Virtualizing with a goal of maximum utilization, which just leads to saturation of the I/O subsystem.  
  3. Oversubscribing your LUNs, which can be detrimental to throughput.  

Thin Provisioning

Thin Provisioned Storage is the practice of presenting a "drive" to a server where the space is not actually allocated on the SAN until it is needed.  Vendors will tell you that they can monitor not only space usage but read/write operations to prevent bottlenecks.  Again, the theme here is that the SAN vendor is telling you that your data is fluid across the fabric.  They are also telling you that they know your data access patterns better than you do.  They probably don't.  Why would you want your RDBMS log files, with their sequential write characteristics, spread haphazardly across disks, essentially making the access random instead of sequential?  
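If you're stuck on a thin-provisioned LUN anyway, one defensive move is to pre-size your files so allocation happens up front rather than at a busy production moment.  A sketch, with hypothetical database and logical file names:

  -- Force allocation now instead of at runtime on a thin-provisioned LUN.
  -- Database name and logical file names are made up for illustration.
  ALTER DATABASE SalesDB
      MODIFY FILE (NAME = SalesDB_Data, SIZE = 200GB, FILEGROWTH = 10GB);
  ALTER DATABASE SalesDB
      MODIFY FILE (NAME = SalesDB_Log, SIZE = 50GB, FILEGROWTH = 5GB);

One caveat: data files grown with instant file initialization aren't zeroed, so a thin array may still defer some physical allocation; log files are always zero-initialized, so pre-sizing those does force real writes.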

Not Having Dedicated Disks for Log Activity

Most SAN engineers/vendors say that you don't need dedicated disks for high volume transaction logs.  And they certainly don't need to be RAID 1.  Balderdash.  Log writes are sequential, small-block activity.  The goal should be the least write latency possible, with the highest IOPS.  This flies in the face of queue depth settings on controllers that sacrifice latency for higher aggregate IOPS; for a transaction log that trade is counterproductive.  Please trust me...if your DAS couldn't support high volume transaction logging, your SAN, even with dedicated RAID 1 and a dedicated service processor for your transaction logs, won't either.  Period.  Don't do yourself even more harm by *not* segregating your log traffic.  
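You don't have to take anyone's word for your log latency: SQL Server tracks cumulative I/O stalls per file.  A sketch that surfaces average write latency for every transaction log file (the DMV and catalog view are standard; what counts as "good" is workload-specific, though sequential log writes on dedicated spindles should be low single-digit milliseconds):

  -- Average write latency per transaction log file since instance startup.
  SELECT  DB_NAME(vfs.database_id) AS database_name,
          mf.physical_name,
          vfs.num_of_writes,
          vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
  FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
  JOIN    sys.master_files AS mf
          ON  mf.database_id = vfs.database_id
          AND mf.file_id = vfs.file_id
  WHERE   mf.type_desc = 'LOG'
  ORDER BY avg_write_ms DESC;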

SAN is Always Better than DAS

A SAN can never deliver the IOPS expected of its individual component disks.  Never.  Every I/O has to traverse the fabric and the service processors, overhead that DAS simply doesn't have.  So if you really need IOPS, go with DAS.  Better still, go with SSDs.  

Setting the Controller Cache to Favor Reads vs Writes

Read caching is usually disabled in TPC benchmark systems. A tiny read cache per LUN is best; anything more than that and you are sacrificing write performance.  Think about it.  You have a database server with 128GB of dedicated RAM connected to a SAN with half that amount of cache, servicing many hosts.  When a "read" query is executed, which memory cache will likely hold the data, the SAN cache or the RDBMS buffer cache?  Which cache is optimized for read-ahead reads and is tuned to work with your RDBMS?  Read cache on the SAN is pretty much unnecessary; skew your cache towards write activity.  
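If you want evidence for that argument, look at the RDBMS side of the equation.  In SQL Server, page life expectancy tells you how long pages survive in the buffer pool; a healthy value means most reads are already served from server RAM and never reach the SAN, let alone its read cache.  A minimal sketch:

  -- Seconds a page is expected to stay in the buffer pool.  The higher this
  -- is, the less a SAN-side read cache could possibly contribute.
  SELECT  object_name, counter_name, cntr_value
  FROM    sys.dm_os_performance_counters
  WHERE   object_name LIKE '%Buffer Manager%'
    AND   counter_name = 'Page life expectancy';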

Use RAID 5 to Save Money

Don't use RAID 5.  The RAID 5 write penalty is not worth the price: every random write costs four physical I/Os (read data, read parity, write data, write parity), versus two for mirroring.  In the last 5 years or so fewer people are touting RAID 5, so hopefully it will be relegated to the dustbin of history very soon.  RAID 0+1 or 1+0 is great.  I've heard SAN vendors tout RAID 50 as the next panacea.  RAID 50 is just striped RAID 5 sets, so each set still incurs the parity write penalty.  
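The arithmetic is easy to sketch.  Assuming roughly 180 IOPS per 15K spindle (an assumption, not a measurement) and the standard write penalties of 4 for RAID 5 and 2 for RAID 10:

  -- Back-of-the-envelope effective random-write IOPS for a 10-disk group.
  DECLARE @spindles INT = 10, @iops_per_disk INT = 180;
  SELECT  (@spindles * @iops_per_disk) / 4 AS raid5_write_iops,   -- 450
          (@spindles * @iops_per_disk) / 2 AS raid10_write_iops;  -- 900

Same disks, double the write throughput, just by skipping parity.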

Automatically Assuming Your Data Warehouse Should be on Your SAN Because of Its Storage Requirements

A DW on a SAN can be a bad idea.  Remember that most SANs are tuned for random access, and your (properly designed) data warehouse will issue far more sequential access than random (a quick way to see this for yourself appears after the list below).  Other points:

  1. Your DW will have massive RAM; you don't need a SAN read cache.  
  2. OLTP requires high volume, low latency disk access, which is what most SANs are designed for.  Your DW instead demands maximum throughput.  Again, the goals conflict.  
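The sequential pattern is easy to observe.  SET STATISTICS IO reports "read-ahead reads", the pages SQL Server prefetched because it detected a scan.  The fact table name below is hypothetical:

  -- A big scan against a warehouse fact table; the messages output will show
  -- a high 'read-ahead reads' count, i.e. sequential prefetch, not random I/O.
  SET STATISTICS IO ON;
  SELECT COUNT(*) FROM dbo.FactSales;
  SET STATISTICS IO OFF;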

Here is how this anti-pattern plays out in the real world.  

  1. Your users will complain that your DW is slow.  
  2. You'll investigate and eventually determine that the IOPS are insufficient.  
  3. You'll tell your SAN guys.  They'll say your DW isn't tuned, the SAN is fine.  They'll be right, kinda, because the SAN has different performance goals and they can't be changed.  

Bottom line: the SAN is a huge investment that may not pay off if you are using it solely for your high-performing databases.  Instead, save your money and buy the best 15K SAS drives, or SSDs, you can.  Then short-stroke them and configure them for the best IOPS you can get.  
