Building a 10G Network – Part 1

One of my current projects is to build out a 10G network to support server consolidation we’re doing. How did this come about? Well, for a long time, we were corporately going the route of isolating specific services to specific servers, growing these little pizza-box 1U servers everywhere. All of a sudden, the data center managers are having trouble providing rack space, power, and cooling to the zillion or so little servers. So, the new push is to free up RUs and power and cut down on heat by migrating all these little servers into blade centers, in some cases blade centers running VMware.

The one major problem with this strategy is the network. The pipes we need to plumb into the blade centers have to be huge: we need big bandwidth going into these things. So, either we plumb a ton of 1G copper lines into the blade centers, or else we consolidate the network pipes from a ton of 1G coppers into a much smaller number of 10G pipes. 10G is clearly the way to go, but in my corner of the world, that has its challenges:
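Just to put rough numbers on that (these are illustrative figures, not our actual requirements): say a blade chassis needs around 20 Gbps of uplink.

    1G copper:  20 Gbps / 1 Gbps per link  = 20+ copper runs, and classic
                EtherChannel only gives you 8 active links per bundle, so
                that's at least three separate bundles to build and manage
    10G fiber:  20 Gbps / 10 Gbps per link = 2 links in a single bundle

Even before you count the cabling and patch-panel sprawl, the 10G math wins.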

  1. I don’t have 10G ports anywhere. Over the last 2 years, I’ve built a sizable data center on 1G copper, mostly using 6500s in a combined core/distribution layer and 3750s at the access layer. While the new 3750Es support 10G uplinks, the legacy 3750s do not. The 10G need happened all of a sudden, as so many things do in a fast-moving company. It’s not like we had a chance to get it built before it was actually needed.
  2. There’s no workable UTP copper option for 10G. Blech. I have a data center full of Cat6a, which we were hoping would eventually carry 10G over copper, but it ain’t happened yet, assuming it ever will. From what I hear, 10G over UTP is a problem.
  3. 10G is expensive right now…really expensive. And if you want high port density, the options are few. My beloved 6500 platform isn’t a great answer for high 10G port density. I should restate that – you can get the port density, but you’ll oversubscribe the 6500 badly (see the quick math after this list). To eliminate or at least reduce the problem of 10G oversubscription at high port density, you have to get into the Nexus 5K and 7K boxes. Those boxes are so new, they ship with lab techs, never mind that I don’t have the budget for them. So I’m waiting until 2009 to do a serious Nexus eval.
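
Here’s the quick math on why the port density hurts. These numbers are a back-of-the-envelope illustration – I’m assuming roughly 40 Gbps of switch-fabric bandwidth per slot, which is in the right ballpark for this generation of 6500, not a quoted spec:

    8 ports x 10 Gbps  = 80 Gbps of front-panel capacity per blade
    fabric connection  ~ 40 Gbps per slot (assumed for illustration)
    oversubscription   = 80 / 40 = 2:1 before traffic even leaves the chassis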

Nonetheless, I still need to get 10G out to the incoming blade centers, so here’s what I’ve done. I’ve added 8-port 10G blades to my core/dist 6500s. To get a physical infrastructure going, I’ve got contractors on-site building a new fiber MDF. While we’re still hoping that we’ll be able to use our Cat6a to carry 10G at some point, for now the answer is multi-mode fiber. (CX4 is okay, but it doesn’t go the distances we need.)

Now, to deal with port density…I’m ignoring that particular problem right now, at least as far as the oversubscription issue is concerned. What I ended up doing was buying 4900Ms to act as the 10G access layer. I’ve got the 4900Ms coming back into my 6500s using 2x10G EtherChannels (or I will, just as soon as I’m done building it). So the 4900Ms pick up the blade centers, and the 6500s pick up the 4900Ms. Yes, this design is oversubscribed – this is not Cisco’s new proposed “data center Ethernet” standard, which includes the concept of lossless Ethernet.
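
For anyone curious what one of those uplinks looks like, here’s a bare-bones sketch of one side of a 2x10G EtherChannel in IOS. The interface numbers, descriptions, and VLAN handling are placeholders for illustration, not my production config:

    interface Port-channel1
     description Uplink to core/dist 6500
     switchport
     switchport mode trunk
    !
    interface range TenGigabitEthernet1/1 - 2
     description 2x10G bundle members
     switchport
     switchport mode trunk
     channel-group 1 mode active
    !

“channel-group 1 mode active” makes it an LACP bundle; “mode on” would give you a static channel if you’d rather not negotiate.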

I’ll blog about the 4900Ms later on. I’ve run into a couple of interesting “new to me” features getting them configured.