EMC NS-480 with Fast, Fast Cache and how I configured it.

Before my Angry Eye Craziness, I had just received a new EMC NS-480 on the floor at work.  Because this is my personal blog and I like to stay as anonymous as possible about where I work, I will lay out the requirements given to me, my spec'd-out configuration, and the justifications I used for everything.  Credit goes in part to my good friends and colleagues at EMC, my awesome account team, and of course my VMware architect, who is super smart and keeps me in check every time I have some crazy idea about how VMware should run.  That being said, and to protect the innocent, all names have been omitted.  Also, for those who don't know me, I never try to claim any ideas as my own.  I think the best ideas come from a team effort, good research, and, well, sometimes W.A.G.'s (wild a$$ guesses).


  • 160 TB usable 'mixed usage' storage to be shared among VMware, Oracle, MS SQL, and other random things
  • 80 TB usable NAS storage for departmental shares, research units, archival of media files, and general network-attached space.  This will also start to be used as VMware test/dev space to experiment with how VMware runs over NFS rather than FC
  • 22 TB of Exchange 2010 storage to support our specific environmental use case (I could write this up in more detail later if you want, but it is about half of the environment: Exchange 2010 is also doing its replication to another datacenter, for which we bought another 22 TB at Site #2.  Site #2 is not part of this NS-480.)

What was purchased:

EMC NS-480


2 x Datamovers

Additional Fibre Channel connections (more than the standard NS-480 comes with)

135 x SATA 2TB 7200 RPM

75 x FC 600GB 15K RPM

10 x EFD 200GB

45 x FC 600GB 15K RPM (for Exchange 2010)


FAST Suite (FAST, FAST Cache, Navisphere Analyzer, QoS, etc.)

Celerra Replicator


How I set it up:

Fast Cache:

4 x 200 GB EFD R1_0

General Use FAST Pool A (Mixed Use, Heavy Hitters) (approx 80TB usable):

5 x 200 GB EFD

35 x 600 GB FC 15K

40 x 2 TB SATA

General Use FAST Pool B (Mixed Use) (approx 80TB usable):

35 x 600 GB FC 15K

40 x 2 TB SATA
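As a sanity check on those 'approx 80TB usable' figures, here is a rough Python sketch.  Note the 4+1 RAID 5 private-group layout is my assumption about how the pool carves up each tier, not something pulled from the array, so treat the numbers as ballpark decimal TB before any filesystem overhead:

```python
# Rough usable-capacity estimate for the two FAST pools.
# ASSUMPTION: each tier is carved into RAID 5 (4+1) private groups,
# so 4/5 of raw capacity is usable. Sizes are decimal TB.
RAID5_DATA_FRACTION = 4 / 5

def pool_usable_tb(tiers):
    """tiers: list of (drive_count, drive_size_tb) tuples."""
    return sum(count * size * RAID5_DATA_FRACTION for count, size in tiers)

# Pool A: 5 x 200 GB EFD, 35 x 600 GB FC, 40 x 2 TB SATA
pool_a = pool_usable_tb([(5, 0.2), (35, 0.6), (40, 2.0)])
# Pool B: same layout minus the EFD tier
pool_b = pool_usable_tb([(35, 0.6), (40, 2.0)])

print(f"Pool A ~{pool_a:.1f} TB, Pool B ~{pool_b:.1f} TB")
```

Both come out right around the 80 TB mark, which is where the 'approx' in the spec comes from.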

Celerra Pool (Presented to the DMs)(approx 78TB usable):

7 x 6+1 R5 RAID groups made from 2 TB SATA drives
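For anyone following along with the Celerra pool math, a quick sketch.  My guess is that the quoted ~78 TB lands between the decimal raw-data figure and the binary figure once Data Mover overhead is taken out, but the conversion below is my assumption, not something off the array:

```python
# Sanity check on the Celerra pool's quoted ~78 TB usable.
# 7 RAID 5 (6+1) groups of 2 TB SATA leave 7 * 6 = 42 data drives.
groups, data_per_group, drive_tb = 7, 6, 2.0
raw_data_tb = groups * data_per_group * drive_tb   # decimal TB of data capacity
binary_tib = raw_data_tb * 1e12 / 2**40            # same capacity counted in TiB

print(f"{raw_data_tb:.0f} TB decimal, ~{binary_tib:.1f} TiB, before Celerra overhead")
```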

Exchange Pool:

45 x 600 GB FC drives

(I use the term 'pool' for consistency, but I will most likely set up 4+1 RAID groups per MS recommendations.  Let me know if you think throwing it into a pool and letting the SPs do the layout would be good or bad.)
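If the 45 drives do end up as traditional 4+1 RAID groups, a quick back-of-the-envelope shows they land right on the 22 TB requirement:

```python
# Exchange layout: 45 x 600 GB FC carved into RAID 5 (4+1) groups.
drives, group_width, data_per_group, drive_tb = 45, 5, 4, 0.6
groups = drives // group_width                 # 9 RAID groups, no leftovers
usable_tb = groups * data_per_group * drive_tb # decimal TB of data capacity

print(f"{groups} x 4+1 groups -> ~{usable_tb:.1f} TB usable")
```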

Hot Spares (can’t forget these):

6 x 2 TB

5 x 600 GB FC

1 x EFD

Other stuff:

Vault drives: 4+1 R5 of 300 GB FC, plus 1 x 300 GB hot spare
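One last sanity check I like to do: tally the layout against the purchase list and make sure every spindle is accounted for.  All counts below come straight from the spec above (the 300 GB vault drives are a separate set, so they are left out):

```python
# Cross-check: the layout should consume exactly what was purchased.
purchased = {"2TB SATA": 135, "600GB FC": 75 + 45, "200GB EFD": 10}

allocated = {
    "2TB SATA": 40 + 40 + 7 * 7 + 6,  # Pool A + Pool B + Celerra (7 x 6+1) + spares
    "600GB FC": 35 + 35 + 45 + 5,     # Pool A + Pool B + Exchange + spares
    "200GB EFD": 4 + 5 + 1,           # FAST Cache + Pool A + spare
}

for drive, count in purchased.items():
    status = "OK" if allocated[drive] == count else "MISMATCH"
    print(f"{drive}: purchased {count}, allocated {allocated[drive]} ({status})")
```

Everything nets out to zero leftover drives, which is always a nice feeling on a purchase this size.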

What does everyone think?  I think this allows flexibility, meets the requirements, and will provide good performance to all of the components/applications that will reside on the array.  With the addition of FAST/FAST Cache, I believe this will give us even more flexibility during the 'spikey times', allow us to schedule around predictable loads, and make everything run smoother with fewer cache forced flushes, which were the bane of my existence before EFDs and FAST Cache were released.

Disclaimer: this will work for my environment and is only meant as a 'how did he do that?!?'; it may or may not be good for your environment.  Feel free to use it, or ideas from it, if it makes sense to you!  Just send me an email and let me know it helped.

Until my next post…oh and the eyes are doing so much better today.



5 Responses to EMC NS-480 with Fast, Fast Cache and how I configured it.

  1. sangeek says:

    Forgot to also add… this 'type' of configuration might also work well with the new VNX hardware with similar hardware/software sets.


  3. sangeek says:

    Oh, and we did put everything in a pool for Exchange, and Jetstress was really fast.

    • Joel Ramirez says:

      I had a client that had very inconsistent Jetstress results. They ran the test several times on a FAST storage pool, and the standard deviation for performance was so broad that no conclusion could be drawn about what the performance expectation should be for Exchange 2010. In fact, we brought in one of the Exchange Ranger 50-lb brains, who recommended that the client simply buy more disk and set it up in a traditional RAID group leveraging metaLUNs. He's been around the block a couple of times, though, and after talking with him about it, he let me in on why FAST isn't really that useful for Exchange 2010 workloads.

      There are a couple of processes that run against the DAGs such that the data is touched at a somewhat regular interval. It's not necessarily predictable, but it negates the statistics that the FAST algorithms use to determine what data is "hot" or "cold". What you get is a varied skew of promotion and demotion, when you should see your promoted/demoted data settle in to just tens of gigs after a couple of weeks (that's been my experience at a few clients – pretty sweet). Also, due to the random and tiny nature of Exchange I/O, FAST Cache doesn't help that much either. It's nice to have the bigger bucket, and it doesn't hurt to use it for Exchange LUNs, but it tested right around "pointless". I'm sure it's great for backup though.

      Anyway, I’d love to hear about how you made it work or an update on the pilot.


      • sangeek says:

        Hi Joel-
        The pool we used for Exchange was dedicated exclusively to Exchange, and the LUNs were fully allocated/pre-populated. So basically it was about as close to traditional RAID groups as we could have gotten. Provisioning was a bit quicker to set up as well.

        As to FAST Cache, since this particular CX4 was set up with a multitude of applications running on it, it may or may not have helped Exchange specifically, but for other workloads it may have freed up more controller cache for Exchange to do its tiny loads on.

        The other pools, as noted in my original specs, were split up, letting the FAST algorithmic magic do its tricks.

        I believe that as the CX4 and now VNX platforms mature, they should get better and better with time. But as with anything, you need to use the Microsoft 50-lb-brain people from time to time to help you with your workloads. Luckily, 2010 is a bit nicer on disk subsystems, but it can still be a bear.

        I'm also currently working with FAST on the VMAX arrays, which does things a lot differently and more efficiently. It would be nice to see some of this Symmetrix magic make it down to the mid-range storage arrays eventually as well.

        Thanks for the comments! Let me know if I can answer anything else.

