EMC VNX FAST Cache and Application Performance: It's Not All Pancakes and Banana Sandwiches.

The issue we face as storage professionals always comes back to the age-old question: "What I/O demand or pattern exists in your application?" This is a question that should always be asked. Sometimes application owners just don't know, sometimes we have to run with wild guesses, and sometimes we can rely on our experience to determine the right mix of drives, spindles, widgets, and software to throw at an application to make it run correctly. Sometimes we get this wrong.

As my friend and colleague Joe Kelly (@virtualtacit) says:

It’s not all pancakes and banana sandwiches!

Why is this? Because even the best algorithms in the world can't account for every single application out there. They get very close, and in the case of EMC FAST Cache on VNX, it does very, very well in virtual environments, certain database environments, and a ton of other success stories. But don't just turn it on for everything because you have it, tempting as that is. I've done it.

But SANGEEK, come on! I bought these really fancy EFD drives and I want to see them work!!!

Don’t worry. Your EFD drives will work. Very well. But you need to look at everything you put on your array and determine whether it is being used appropriately. By making educated decisions about which LUNs to enable FAST Cache on, your whole array will perform better and more efficiently, and, most importantly, you'll make your boss very, very happy.

Some examples of where you will most likely see the best performance gains:

  • Weird, random, small-block reads and writes. For reads, as long as it isn't some long continuous stream, FAST Cache does very well. For writes, FAST Cache likes to absorb them like a buffer and either avoid forced flushes or write them out faster. We see a lot of this in VM environments, where you just don't know what that pesky application owner is doing at any given moment.
  • Workloads with a decent amount of locality of reference in the data. I'll get a little "computer sciency" here (and I promise not to do it often): there are two different types, temporal and spatial locality. According to some random professor with a huge UNIX beard, temporal and spatial locality basically mean that if a value is referenced, there is a high probability it will be referenced again within a short period of time, or that a value close in proximity to the first one will be referenced next. In the case of hard drives and FAST Cache, this is highly applicable in areas like website data, transaction processing, etc.
  • For example, if Joe is buying a part online to fix his car's air conditioning and discovers that he needs another part beyond the one he was originally searching for, there is a high probability that all of those part numbers live somewhere close, in the tablespace or on the hard drive, to where the original part number was.
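The locality effect in the bullets above can be sketched with a toy cache simulation. This is purely illustrative, with made-up block counts and a simple LRU policy; it is not EMC's actual promotion logic (FAST Cache promotes 64 KB extents after repeated hits, which this sketch does not model):

```python
import random
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache standing in for a promotion-based cache layer."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.accesses = 0

    def access(self, block):
        self.accesses += 1
        if block in self.store:
            self.hits += 1
            self.store.move_to_end(block)        # refresh recency
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)   # evict least recently used
            self.store[block] = True

    def hit_rate(self):
        return self.hits / self.accesses

random.seed(42)
DISK_BLOCKS = 100_000        # hypothetical LUN size in blocks
CACHE_BLOCKS = 1_000         # cache covers only 1% of the LUN

# Workload with temporal locality: 90% of accesses land on a small hot set.
local = LRUCache(CACHE_BLOCKS)
hot_set = range(500)
for _ in range(50_000):
    if random.random() < 0.9:
        local.access(random.choice(hot_set))
    else:
        local.access(random.randrange(DISK_BLOCKS))

# Uniform random workload over the whole LUN: almost no reuse.
uniform = LRUCache(CACHE_BLOCKS)
for _ in range(50_000):
    uniform.access(random.randrange(DISK_BLOCKS))

print(f"hot-set workload hit rate: {local.hit_rate():.2f}")   # high: reuse pays off
print(f"uniform workload hit rate: {uniform.hit_rate():.2f}")  # low: cache barely helps
```

A cache that covers 1% of the disk serves most of the I/O when the workload has locality, and almost none of it when the workload doesn't. That, in miniature, is why FAST Cache shines on some LUNs and does nothing for others.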

Some examples in which you should consider turning FAST Cache off:

  • Large-block sequential reads. This is probably the worst case for FAST Cache. Large-block sequential reads typically do very well across more spindles and benefit from them. By locating large blocks in FAST Cache, you remove the ability to read from a bunch of spindles and place the data on fewer spindles (EFDs aren't that much better at this type of I/O than other drives), causing a performance impact. Take this example. Say you have 30 146GB 15K RPM FC drives in a RAID 4+1 configuration. You have the ability to get 5000+ read IOPS from those drives. If you are doing a large-block sequential read, chances are a lot of that data is spread across many of those drives (if not all of them; your application may vary). So if you take my locality-of-reference example from the 'what FAST Cache is good for' section and apply it to this scenario, you can probably see why 'promoting' extents to a 4- or 8-drive FAST Cache would be problematic.
  • Database or log/journal volumes. Typically it is best to leave these alone and not turn on FAST Cache. We see this with RecoverPoint journal volumes, Oracle DB volumes, and MS Exchange log LUNs.
  • Volumes where you don't need extra cache. In a typical VNX File implementation, since the Data Movers already have a cache of their own, it is generally a good idea to make sure FAST Cache is turned off on LUNs presented to the Data Movers. This also applies to applications or servers doing some sort of caching at the application/server level, where FAST Cache wouldn't really provide much benefit.
  • If you take one thing away from this post (and you've made it this far): know where your data lives (locality of reference), and gather as much information as you can about the size of the reads and writes hitting the array.
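The spindle math behind the large-block bullet above can be sanity-checked with a quick back-of-envelope calculation. The per-drive figures below are common rules of thumb I'm assuming for illustration, not measured values from any particular array:

```python
# Back-of-envelope comparison for the 30 x 15K RPM FC drive example above.
FC_15K_RANDOM_IOPS = 180   # assumed random read IOPS per 15K FC spindle
FC_15K_SEQ_MBPS = 60       # assumed sequential MB/s per FC spindle
EFD_SEQ_MBPS = 100         # an EFD's sequential edge over FC is modest

spindles_wide = 30         # data striped across all 30 FC drives
spindles_cache = 8         # an 8-drive FAST Cache configuration

# Small random reads: every spindle contributes, so wide stripes win big.
random_iops = spindles_wide * FC_15K_RANDOM_IOPS
print(f"aggregate random read IOPS: ~{random_iops}")   # ~5400, in line with the 5000+ above

# Large sequential streams: aggregate bandwidth scales with spindle count,
# so pulling extents into a smaller set of EFDs can reduce throughput.
wide_stream = spindles_wide * FC_15K_SEQ_MBPS
cache_stream = spindles_cache * EFD_SEQ_MBPS
print(f"sequential MB/s across 30 FC spindles:    ~{wide_stream}")
print(f"sequential MB/s from 8-drive FAST Cache:  ~{cache_stream}")
```

Even granting the EFDs a healthy per-drive advantage, the 30-spindle stripe out-streams the 8-drive cache, which is exactly why promoting large sequential workloads hurts rather than helps.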

Please email me or comment if you have any specific questions or ideas to bounce off of me.

…and remember: it's not all pancakes and banana sandwiches in the storage world.

@sangeek
