Posted on Jun 08, 2009
We're building out our next-generation infrastructure, and we're upgrading critical parts of it to 10GbE. One of the things we've been debating is whether to use a big core switch or take advantage of some of the virtual chassis technologies that are now available (clustering many smaller switches so they look like one big chassis switch).
We're well on our way to proving out the virtual chassis technology from Juniper. It fits our data flow rates, and it's easy to manage. Over time, we've put in network gear from several vendors. That was good in that we got the best from each vendor, but it creates a maintenance hassle: our engineers have to know two or three command sets, and that's a problem. So now we're moving toward standardizing on one or two vendors to simplify maintenance.
We also realized several things about our rack layouts. We always have limited space, so we typically laid our systems into the racks on an as-needed basis. We would find the right amount of space, make sure we had enough power, and just drop the systems in as we could (splitting the systems so no one service was in a single rack/row, etc.)
Our next-generation design takes into account data providers and data consumers, making sure that they connect via high-bandwidth links. Our front-ends (data consumers) don't typically need more than 1Gb connections to the backends (data providers), but if you get several front-ends talking to one backend, and the backend is on a 1Gb link to the top-of-rack switch, you can run into serious problems. We're able to work around that on our very high-bandwidth backends by trunking connections, but even then it's suboptimal. So we're going to make sure we have 10Gb connections to the backends, and that we keep the number of "all-10Gb" racks to a minimum (to keep costs under control). Our rack layouts will definitely change, focusing on high availability and very high performance while staying within our power constraints.
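To make the oversubscription problem concrete, here's a minimal sketch of the arithmetic. The function name and the traffic numbers are illustrative assumptions, not figures from our actual deployment:

```python
# Hypothetical sketch: several front-ends, each on a 1Gb link, all
# talking to one backend behind a single top-of-rack uplink. The
# ratio of worst-case offered load to uplink capacity shows how
# badly the backend link can be oversubscribed.

def oversubscription(frontend_links_gbps, backend_uplink_gbps):
    """Worst-case offered load divided by the backend's uplink capacity."""
    return sum(frontend_links_gbps) / backend_uplink_gbps

# Four front-ends, each able to push 1 Gb/s, against a 1Gb backend link:
print(oversubscription([1, 1, 1, 1], 1))   # 4.0 -> link is 4x oversubscribed

# The same four front-ends against a 10Gb backend link:
print(oversubscription([1, 1, 1, 1], 10))  # 0.4 -> plenty of headroom
```

Trunking (link aggregation) helps by multiplying the denominator, but a single 10Gb link avoids the per-flow hashing limitations that make trunked links suboptimal.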
I'll talk more (probably) in a future blog about the other things we're doing in the network.