Posted on Oct 30, 2008
It's time for me to build out more space in one of my datacenters, and I'm having a hard time coming up with the "right" design. Do I put lots of small switches in the cabinets and home-run just a few cables (4 per cabinet), or do I put in big core switches and home-run everything?
Of course the big switch vendors want me to buy the big switches, but dang, that's a *huge* investment. I've been very successful deploying the switch-per-rack-function model.
Is it better to chew rack units on a big frame and stay expandable, or to chew rack units in every rack? Very tough question. Add in the overhead of managing lots of small switches and things get very interesting. There's clearly an inflection point, but where is it, and how would you calculate it?
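One way to hunt for that inflection point is a back-of-envelope cost model: the per-rack model scales linearly with rack count (and management burden), while the big-chassis model has a large fixed cost and a smaller marginal cost per rack. A rough sketch, where every price is a made-up placeholder and not a vendor quote:

```python
# Hypothetical back-of-envelope model for the ToR-vs-chassis inflection
# point. All prices below are placeholders, not real quotes.

def tor_cost(racks, switch_price=3000, switches_per_rack=4,
             lifetime_mgmt_per_switch=1500):
    """Switch-per-rack-function model: 4 small switches per rack
    (FE, BE, SP, console), each carrying its own management burden."""
    return racks * switches_per_rack * (switch_price + lifetime_mgmt_per_switch)

def chassis_cost(racks, chassis_pair_price=120000, per_rack_ports_price=2700):
    """Big-chassis model: large fixed cost for a redundant pair, then a
    smaller marginal cost per rack for line cards and cabling."""
    return chassis_pair_price + racks * per_rack_ports_price

def inflection_point(max_racks=200):
    """Smallest rack count at which the chassis model is no more expensive."""
    for r in range(1, max_racks + 1):
        if chassis_cost(r) <= tor_cost(r):
            return r
    return None

print(inflection_point())  # with these placeholder numbers: 8 racks
```

Plug in your own quotes and your own estimate of what managing each extra switch really costs you, and the crossover rack count falls out.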
The switch-per-rack-function model starts out like this: you build a network that's redundant at the firewall and load balancer layers, and then you do odd and even racks. Each rack has a front-end network (FE), a back-end network (BE), a service processor network (SP) and a serial console (just in case!).
You've already chewed through 4 RU per rack, and that's not counting the 2 RU you need for additional power strips, because the power density is so high that the vertical strips (208 V/30 A) don't have enough outlets!
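The cabling side of the trade-off is just as stark as the rack units. A quick sketch, assuming a hypothetical 40 servers per rack with one cable each to the FE, BE, and SP networks plus a serial console line (the per-rack server count is my assumption, not from the original numbers):

```python
# Rough cable-count comparison between the two designs.
# SERVERS_PER_RACK is a hypothetical figure for illustration.

SERVERS_PER_RACK = 40
CABLES_PER_SERVER = 4          # FE, BE, SP, serial console

def home_run_cables(racks):
    """Home-run everything: every server cable runs back to the core."""
    return racks * SERVERS_PER_RACK * CABLES_PER_SERVER

def tor_uplink_cables(racks, uplinks_per_rack=4):
    """Switch-per-rack-function: only the uplinks leave the cabinet."""
    return racks * uplinks_per_rack

print(home_run_cables(20))     # 3200 cables to pull, label, and trace
print(tor_uplink_cables(20))   # 80
```

Two orders of magnitude fewer inter-rack cables is a big part of why the switch-per-rack model keeps winning for me, even before you price the copper and the cable-plant labor.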