Before my Angry Eye Craziness, I had just received a new EMC NS-480 on the floor at work. Because this is my personal blog and I like to stay as anonymous as possible about where I work, I will lay out the requirements given to me, my spec'd-out configuration, and the justifications I used for everything. Credit goes in part to my good friends and colleagues at EMC, my awesome account team, and of course my VMware architect, who is super smart and keeps me in check every time I have some crazy idea about how VMware should run. That being said, and to protect the innocent, all names have been omitted. Also, for those who don't know me, I never try to claim any ideas as my own. I think the best ideas come from team effort, good research, and, well, sometimes W.A.G.'s (wild a$$ guesses).
- 160 TB usable ‘mixed usage’ storage to be shared with VMware, Oracle, MS SQL and other random things
- 80 TB usable NAS storage to be used for departmental shares, research units, archival of media files and general network-attached space. This will also start to be used as VMware test/dev space to experiment with how VMware runs over NFS rather than FC
- 22 TB Exchange 2010 storage to support our specific environmental use case (I could write this up in more detail later if you want, but this is about half of the environment: Exchange 2010 is also doing its replication to another datacenter, for which we bought another 22TB for Site #2. Site #2 is not part of this NS-480.)
What was purchased:
2 x Datamovers
Additional Fibre Channel connections (more than the standard NS-480 comes with)
135 x SATA 2TB 7200 RPM
75 x FC 600GB 15K RPM
10 x EFD 200GB
45 x FC 600GB 15K RPM (for Exchange 2010)
FAST Suite (FAST, FAST Cache, Navisphere Analyzer, QoS, etc.)
How I set it up:
4 x 200 GB EFD R1_0
General Use FAST Pool A (Mixed Use, Heavy Hitters) (approx 80TB usable):
5 x 200 GB EFD
35 x 600 GB FC 15K
40 x 2 TB SATA
General Use FAST Pool B (Mixed Use) (approx 80TB usable):
35 x 600 GB FC 15K
40 x 2 TB SATA
Celerra Pool (Presented to the DMs)(approx 78TB usable):
7 x 6+1 R5 Raid Groups made from 2TB SATA drives
45 x 600 GB FC drives
(I use the term ‘pool’ for consistency, but for the Exchange drives I will most likely set up 4+1 RAID groups per Microsoft's recommendations; let me know if you think throwing them into a pool and letting the SPs do the layout would be good or bad)
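As a back-of-the-envelope check on the "approx usable" figures above, here is a quick sketch of the math. The RAID layouts in it are my assumptions (common CLARiiON-style defaults), not a statement of how the array will actually carve the pools: RAID 5 4+1 for the FAST pool drives, the stated 6+1 groups for the Celerra SATA, and 4+1 groups for the Exchange FC drives.

```python
# Hypothetical usable-capacity check; RAID widths are assumptions, not
# the array's actual layout.

def raid5_usable_tb(drives, size_tb, width, parity=1):
    """Usable TB when `drives` are carved into RAID 5 groups of `width` disks."""
    return (drives // width) * (width - parity) * size_tb

pool_a = (raid5_usable_tb(5, 0.2, 5)      # EFD tier, assumed 4+1
          + raid5_usable_tb(35, 0.6, 5)   # FC tier, assumed 4+1
          + raid5_usable_tb(40, 2.0, 5))  # SATA tier, assumed 4+1
pool_b = raid5_usable_tb(35, 0.6, 5) + raid5_usable_tb(40, 2.0, 5)
celerra = raid5_usable_tb(49, 2.0, 7)     # 7 x 6+1 groups, before filesystem overhead
exchange = raid5_usable_tb(45, 0.6, 5)    # assumed 9 x 4+1 groups

print(f"Pool A   ~{pool_a:.1f} TB")    # ~81.6 TB, i.e. 'approx 80TB usable'
print(f"Pool B   ~{pool_b:.1f} TB")    # ~80.8 TB
print(f"Celerra  ~{celerra:.1f} TB")   # 84 TB raw; roughly 78TB after Celerra overhead
print(f"Exchange ~{exchange:.1f} TB")  # ~21.6 TB against the 22TB requirement
```

The numbers land close enough to the stated figures that the drive counts and the requirements hang together, before accounting for base-10 vs. base-2 capacity and filesystem overhead.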
Hot Spares (can’t forget these):
6 x 2 TB
5 x 600 GB FC
1 x EFD
Vault Drives 4+1 300 GB FC +1 300 GB Hot Spare
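As a sanity check that the layout accounts for every disk in the purchase list, the counts can be tallied up. This is a hypothetical reconciliation: I am assuming the 4 x EFD R1_0 entry is the FAST Cache configuration, and the 300 GB vault drives are excluded since they are not in the purchase list above.

```python
# Hypothetical tally: purchased drives vs. drives allocated in the layout.
purchased = {
    "SATA_2TB": 135,
    "FC_600GB": 75 + 45,  # general purchase + the Exchange 2010 drives
    "EFD_200GB": 10,
}

allocated = {
    "SATA_2TB": 40 + 40 + 7 * 7 + 6,  # Pool A + Pool B + Celerra 6+1 groups + hot spares
    "FC_600GB": 35 + 35 + 45 + 5,     # Pool A + Pool B + Exchange + hot spares
    "EFD_200GB": 4 + 5 + 1,           # assumed FAST Cache R1/0 + Pool A + hot spare
}

for disk, count in purchased.items():
    assert allocated[disk] == count, f"{disk}: {allocated[disk]} != {count}"
print("every purchased drive is accounted for")
```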
What does everyone think? I think this allows flexibility, meets the requirements, and will provide good performance to all of the components/applications that will reside on the array. With the addition of FAST/FAST Cache, I believe this will give us even more flexibility during the ‘spiky’ times, allow us to schedule tiering during predictable times, and make everything run smoother with fewer forced cache flushes, which had been the bane of my existence before EFDs and FAST Cache were released.
Disclaimer: this works for my environment and is only meant as a ‘how did he do that?!?’; it may or may not be good for your environment. Feel free to use it or ideas from it if it makes sense to you! Just send me an email and let me know it helped.
Until my next post…oh and the eyes are doing so much better today.