My Home Lab, ESXi 5.5 Server Build, and The Logic Behind It All

Several folks have asked me about my home lab server build since I’ve tweeted a time or two about it. Here’s what I’ve built so far, and some of the logic behind my choices.

The Purpose Of My Lab

I am working on network virtualization, automation, and software defined networking tools. I need to work with a variety of hypervisors, virtual switches & routers, and virtual networking appliances. While I’ll have a few applications running along the way, the applications themselves are not my explicit interest. This is not a lab to do Microsoft Windows Server work, for example. Specific tools I plan to work with include F5 BIG-IP VE lab edition (which I’ve blogged about a little bit already), Brocade Vyatta vRouter, VMware ESXi, Cisco CSR1000V, Juniper Firefly, LiveAction, OpenDaylight, OpenStack, Open vSwitch, and whatever else comes to mind. If you’re a vendor and I didn’t mention some product you wish I had, feel free to contact me if you’d like me to consider evaluating your product.

In the physical realm, I’m hoping to do some work with Shortest Path Bridging using Avaya gear, and am hoping to do some pure OpenFlow hardware switching with Pica8 gear. We’ll see what shakes out. I need vendor support to pull some of this off, since I can’t actually buy switch hardware outright. My pockets are only so deep.

My Lab Switch

In a case of being in the right place at the right time, I inherited a pair of Cisco SG300-52 switches a couple of years ago. To be fair, I thought the only thing they had going for them was 52 10/100/1000 ports. I assumed they were consumer grade switches that happened to be manageable. After digging into what these switches can actually do, they have turned out to be unexpected gems that are astonishingly useful as lab switches.

  • LACP. You can build up to 8 link aggregation groups with or without LACP.
  • 802.1q. No big surprise here, but nothing to sneeze at. Full VLAN support is wonderful for this switch considering the role I’m asking it to play.
  • Non-blocking. Wow! Whoda thunk it?
  • CLI or web GUI. The CLI is IOS-like, but not what you’re used to on a Catalyst switch. There are a lot of different default behaviors, command nuances, and certainly different command outputs. The GUI is useful, just clicky-clicky as Ivan would say. I can get configuration done faster at the CLI on this switch.
  • L3 switching. This is the feature that truly shocked me. Really? This thing can route? Yes…yes, it can. Now, it’s not a full featured router — no dynamic routing protocols such as OSPF or even RIP, so you’re stuck with static routing. And it can’t handle that many routes – not much TCAM (“up to 512 static routes and up to 128 IP interfaces” per the data sheet). But for a lab? It’s great stuff. And for what routing it can do, it does so at line-rate. Bonus! I have put both my SG300s into routed mode, and use one of them as the default gateway for my house network. From there, I static route from the switch to my DMVPN router, static route to the lab, and then default route to the firewall, each of those legs having its own L3 segment. You really can build a “grown up” style network with this thing; there’s a rough config sketch of that layout just after this list.
  • Miscellaneous other features like full SNMP management, Q-in-Q, Voice VLAN, UDLD, a variety of QoS functions, and much more.
  • Pricing on this specific model switch is ~$700, but note that there are several models of the SG300. If you go with lower port density, you can obtain the same SG300 functionality much more cheaply. The SG300-20 is ~$350, and the SG300-10 is ~$210, for example.
  • My only complaint is that these switches are the noisiest thing in my rack due to the fans. They are in the furnace room of my house where there’s plenty of other noise going on from time to time. But if a quiet switch is a big deal to you, this might not be the switch you’re looking for.
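
To give you an idea of the routed-mode design I described above, here’s a minimal IOS-style sketch. The VLANs, addresses, and next hops are made up for illustration, and the SG300’s syntax differs a bit from Catalyst IOS, but the shape is the same: one SVI per leg, plus static routes and a default route.

    ! Illustrative only; VLANs, addresses, and next hops are placeholders
    interface vlan 10
     ip address 192.168.10.1 255.255.255.0
     ! house network; hosts use this SVI as their default gateway
    interface vlan 20
     ip address 192.168.20.1 255.255.255.252
     ! transit segment to the DMVPN router
    interface vlan 30
     ip address 192.168.30.1 255.255.255.252
     ! transit segment to the lab
    interface vlan 40
     ip address 192.168.40.1 255.255.255.252
     ! transit segment to the firewall
    !
    ! remote DMVPN networks via the DMVPN router
    ip route 10.0.0.0 255.0.0.0 192.168.20.2
    ! lab prefixes via the lab side of VLAN 30
    ip route 172.16.0.0 255.240.0.0 192.168.30.2
    ! everything else defaults to the firewall
    ip route 0.0.0.0 0.0.0.0 192.168.40.2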

The Server Build I’ve Specified

In addition to the purpose of the lab I defined up above, I had some other requirements for the servers.

  • Quiet. I don’t want a lot of fan noise or related racket.
  • Low power consumption. I don’t want to jack up my electric bill too much while still running the lab gear 24×7.
  • Rack mountable. I have an enclosed rack I inherited from a data center migration I was a part of several years ago. I prefer to rack my gear if I can.
  • IPMI. I want to have remote access to the server, even if I blow it up.
  • Quad on-board NICs. I don’t need 4 on-board NICs for bandwidth reasons. Except for the occasional iPerf run and some storage traffic, I won’t be putting that much load on the lab network. Rather, I wanted 4 on-board NICs for maximum network configuration flexibility. I like options. Options are good. I could have added network interfaces via a card, of course, but that didn’t appeal to me as much. There’s a quick sketch of the sort of flexibility I mean right after this list.
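
Here’s roughly how the four on-board NICs can be split across separate standard vSwitches from the ESXi shell. This is a hedged sketch, not gospel: the vSwitch, portgroup, and vmnic names are placeholders, and the same thing can be done from the vSphere Client.

    # Assumes ESXi shell access; all names below are placeholders
    # Dedicate vmnic1 to a lab vSwitch that trunks all VLANs (4095 = guest VLAN tagging)
    esxcli network vswitch standard add --vswitch-name=vSwitch-Lab
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch-Lab --uplink-name=vmnic1
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-Lab --portgroup-name=Lab-Trunk
    esxcli network vswitch standard portgroup set --portgroup-name=Lab-Trunk --vlan-id=4095

    # Dedicate vmnic2 to a storage vSwitch, leaving vmnic0 for management and vmnic3 spare
    esxcli network vswitch standard add --vswitch-name=vSwitch-Storage
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch-Storage --uplink-name=vmnic2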

To accomplish all of these things, I’ve settled on this build, listed here at NewEgg.com. I have a few comments about some of these components you might want to think about, as your situation or requirements might be different from mine.

  • There are many cheaper power supplies in the world, but after researching the pinouts my motherboard choice required, I ended up with the Enermax Revolution. The fan intake is on the top of this supply, not the side or the rear. Therefore, your case needs to be slotted to accommodate this. Also note that this fan design means that the power supply fan will move no air through the chassis at all — just through the power supply itself.
  • The iStarUSA 2U rack mountable case you see listed is simple and cheap. It’s not that deep, and it’s not that heavy. I am racking it without rails using the rack ears on the front, and that’s good enough. There are rail guides in the case if rails are your preference, but that was not critical for me. That said, I do think 2U is a good form factor for a case. Yes, 1U cases take less real estate in a rack, but I have lots of RUs to play with. More than that, I steered away from 1U cases because I just didn’t like the implications of that form factor. Tighter overhead space in the chassis. Often proprietary fans that have to spin at higher RPMs to move the same amount of air a larger diameter fan could move. A greater challenge finding the right power supply. Etc. A 2U case allows for standard ATX-sized power supplies, more room for storage, standard 80mm case fans, and more room to work & route interior cabling overall.
  • I bought an SSD and not a traditional spindle for one reason: performance. I wanted fast spin up and shut down times for VMs, and fast I/O in general when running multiple VMs all contending for the same disk. 256GB goes further than you might think using ESXi thin provisioning; there’s a quick thin provisioning example just after this list. Plus, VMs running Linux don’t require all that much space anyway. I have a plan to add an Ethernet-connected storage array to the mix which will give the lab big, slow storage useful for storing ISOs, snapshots, OVAs, and archive data such as long-term logs. This proposed array will also be the housing for lab backup, however I end up doing that, TBD.
  • The SuperMicro MBD-X10SLM+LN4F-O motherboard was designed for Intel Haswell family silicon, which has low power consumption and seems to run fairly cool as well. This board also features the quad NICs and IPMI I was looking for. For virtualization folks, note that this specific board has the Intel C224 family chipset, not C226. The C226 has more virtualization related technology baked in, including Intel vPro & VT-d. While these features weren’t critical to me personally, they might be to you. Make sure you’re clear that the motherboard you’re eyeing has all the features you’re looking for. Also, note that you can often go right to the manufacturer’s site and download the manual for the motherboard in question. That will help with details you might care about (e.g. the number of fan headers available).
  • I splurged on two nice Noctua NF-R8 80mm case fans. While the Xeon CPU I installed came with its own cooling tower and fan, I wanted more firepower to move air through the case on the off chance my lab rack got hot. The Noctuas seem to be quite high end, supporting 4-pin PWM (pulse width modulation) fan headers from the motherboard, and featuring a somewhat exotic sealed bearing assembly. They also came with silicone mounting posts that isolate them from the case itself. They are whisper quiet and only spin as fast as the motherboard decides they should. Frankly, they are overkill, but I am especially pleased with them. Exceptional engineering impresses me, even in something so rudimentary as a case fan. I hope they last.
  • As far as the Intel Xeon E3-1225V3 Haswell 3.2GHz 8MB L3 Cache LGA 1150 95W Quad-Core Server Processor BX80646E31225V3 CPU I bought…it’s a CPU. Depending on your particular needs, you might obsess over which is the perfect Haswell for you. This one has quad cores, but does not have hyperthreading capability. That might matter to you. Also look for CPUs that do or do not support onboard video; for example, the E3-1225V3 I list has processor graphics, while the similarly named & slightly cheaper Intel Xeon E3-1220V3 does not.
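
On the thin provisioning point from the storage bullet above, here’s a hedged example of what that looks like from the ESXi shell. The datastore and VM names are placeholders. A thin disk’s flat file only consumes datastore blocks as the guest actually writes data, which is how a 256GB SSD stretches across a surprising number of lab VMs.

    # Placeholders throughout; run from the ESXi shell
    # Create a 40 GB thin-provisioned virtual disk
    vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/labvm/labvm.vmdk

    # Compare the provisioned size against the blocks actually consumed
    ls -lh /vmfs/volumes/datastore1/labvm/labvm-flat.vmdk
    du -h /vmfs/volumes/datastore1/labvm/labvm-flat.vmdk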

This server will run VMware ESXi 5.5. The only catch is that you need to build the ESXi installer to support the Intel i210 NICs. This is easily googled; I don’t feel I have much to add here.
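
That said, in broad strokes the PowerCLI Image Builder route looks something like the sketch below. Treat it as a sketch only: the depot file names, image profile names, and the i210 driver package name are placeholders for whatever offline bundles you end up using.

    # PowerCLI Image Builder sketch; file, profile, and package names are placeholders
    Add-EsxSoftwareDepot C:\lab\ESXi-5.5.0-offline-bundle.zip
    Add-EsxSoftwareDepot C:\lab\net-igb-offline-bundle.zip

    # Clone the stock image profile and add the NIC driver package to the clone
    New-EsxImageProfile -CloneProfile "ESXi-5.5.0-standard" -Name "ESXi-5.5.0-i210" -Vendor "homelab"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-5.5.0-i210" -SoftwarePackage "net-igb"
    # (Community-packaged drivers may also need the profile's acceptance level lowered.)

    # Export a bootable installer ISO with the driver baked in
    Export-EsxImageProfile -ImageProfile "ESXi-5.5.0-i210" -ExportToIso -FilePath C:\lab\ESXi-5.5.0-i210.iso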
