Do You Really Need Gigabit To The Desktop? Maybe. Maybe Not.

The topic came up recently that a segment of user workstations on a network I manage were only serviced by 100Mbps ports, and not gigabit. None of us especially had a problem with that, as 100Mbps is an awful lot of bandwidth for the typical desktop user. If 100Mbps seems parsimonious, consider the following.

1. I work from home on a broadband 35Mbps/2Mbps connection.

  • Almost every service I use is outside of my house. All of the networks I care for are elsewhere. My e-mail is cloud-hosted. My file-services are cloud-hosted.
  • Creative work that I do is local to my workstation, then shipped off via e-mail or file sync when completed.
  • Real-time communication is via IM or some flavor of IPT (Skype, VoIP, etc.)
  • Most data entry required of me is accomplished via web forms, and many tools that I use are web-based, i.e. the heavy processing and data moving tends to happen on the back-end, not between my client and the remote server.

My Internet bandwidth is adequate. In the download direction, 35Mbps is plenty, even when downloading files 1GB or greater. No, huge files are not delivered instantaneously, but they come down quickly enough. I do feel a pinch when uploading large files, but that’s a rare need.
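
A quick back-of-the-envelope calculation bears that out. This is a minimal sketch in Python; the link speed is my own downstream rate, and the decimal-unit conversion (1GB = 1000MB) is an assumption for simplicity:

    # How long does a 1GB file take to download at 35Mbps?
    link_mbps = 35                     # downstream rate in megabits per second
    file_gb = 1                        # file size in gigabytes

    link_mb_per_s = link_mbps / 8      # megabits/s -> megabytes/s (4.375 MB/s)
    file_mb = file_gb * 1000           # GB -> MB, using decimal units

    seconds = file_mb / link_mb_per_s
    print(f"{file_gb}GB at {link_mbps}Mbps: ~{seconds:.0f}s (~{seconds/60:.1f} min)")

Call it four minutes for a gigabyte: not instant, but quickly enough.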

My point is that life is quite workable on my home broadband connection. By comparison, a symmetrical 100Mbps connection in a typical office environment is a vastly larger connection, and probably one with much less latency.

2. When building out network closets for enterprises, you can often get away with 100:1 oversubscription in real life. I’ve built out lots and lots of network closets over the years. When designing the uplinks, my concern is more about redundancy & resiliency, and less about bandwidth. Why? I know from monitoring that uplinks from closet switches servicing the typical enterprise user just don’t get hit that hard. It just doesn’t matter that you’re giving all those desktop users gigabit access. They’re almost never going to sustain gigabit throughput for any great length of time. Traffic patterns are very bursty over short durations. That lends itself well to high uplink oversubscription.
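
To make the 100:1 figure concrete, here's a minimal sketch of the arithmetic for a hypothetical closet; the port and uplink counts are assumptions, borrowed from a typical build of 192 gigabit access ports behind two 1Gbps uplinks:

    # Oversubscription ratio for a hypothetical access closet.
    access_ports = 192                 # gigabit desktop-facing ports (assumed)
    access_port_gbps = 1
    uplinks = 2                        # 2 x 1Gbps uplinks (assumed)
    uplink_gbps = 1

    ratio = (access_ports * access_port_gbps) / (uplinks * uplink_gbps)
    print(f"Oversubscription: {ratio:.0f}:1")  # 96:1 -- workable for bursty users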

Another thought is related to the horsepower found in a typical desktop workstation. Can that machine even fill a gigabit pipe? In most cases, no – the disk and/or other buses inside the workstation are going to be a bottleneck. While exceeding 100Mbps isn’t so hard to do, being able to sustain hundreds of Mbps is more of a challenge.
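
A rough comparison of wire rate to disk throughput illustrates the point. The disk figure below is an assumption, in the neighborhood of what a commodity spinning disk sustains on sequential reads:

    # Can the workstation's disk keep a gigabit link full?
    link_gbps = 1
    disk_mb_per_s = 100                   # assumed sustained disk throughput

    link_mb_per_s = link_gbps * 1000 / 8  # 1Gbps = 125 MB/s, at best
    print(f"Wire rate: {link_mb_per_s:.0f} MB/s; disk: ~{disk_mb_per_s} MB/s")
    # The disk tops out below the wire rate, so the LAN isn't the bottleneck.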

The key here is to know your traffic. Users shuttling around heavy data sets to work on locally (say, video editors) probably have beefy workstations with none of the aforementioned bottlenecks, and are pushing lots of packets across the wire. That’s a different issue than the typical cubicle worker faces. Developers are also folks to keep an eye on, as they love to do things like spin up large database instances on their workstations to run tests against.
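
Knowing your traffic mostly comes down to watching interface counters over time. Here's a minimal sketch of the underlying math, using made-up values for two consecutive samples of an uplink's octet counter (the kind of data you'd pull via SNMP from IF-MIB's ifHCInOctets):

    # Average uplink utilization from two samples of an octet counter.
    # The counter values and interval below are invented for illustration.
    INTERVAL_S = 300                        # 5-minute polling interval
    octets_t0 = 1_200_000_000_000           # counter at first poll (assumed)
    octets_t1 = 1_200_093_750_000           # counter at second poll (assumed)

    bits_per_second = (octets_t1 - octets_t0) * 8 / INTERVAL_S
    utilization = bits_per_second / 1e9     # against a 1Gbps uplink
    print(f"Average load: {bits_per_second/1e6:.1f}Mbps ({utilization:.2%})")

Keep in mind that a 5-minute average like this hides short bursts, so look at peaks over shorter intervals too before drawing conclusions.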

3. Gigabit to the desktop costs money.

  • How well will your cabling plant do if you start pushing gigabit Ethernet across it? If the plant is CAT5 or higher, it should be okay. I’ll add that in my experience, the quality or age of the cabling installation is less of an issue than some vendors might want you to believe, especially when it comes to servicing workstations. But it’s still a potential issue. Connections deteriorate. Copper cables that have been resting for years don’t like to be physically disturbed, especially at their punch-down points. Cabling that has been feeding desktops for years might be of only marginal quality when it comes to supporting gigabit. Running new cable isn’t cheap. Therefore, you need a really good business case to perform a cabling plant upgrade merely to go from 100Mbps to gigabit.
  • The cost of gigabit switches has come down dramatically, to be sure. In a net-new installation, there’s little to think about. While you might be able to save a few bucks buying the 100Mbps switch model, it’s usually not worth it – you’ll just buy the gigabit switch and be done with it. But what if you’re looking at a network refresh, where the existing closet switches are 100Mbps? Certainly, there are many good reasons to upgrade aging switches. They have an electronic lifespan. Technology moves on, and perhaps you need new features that your old switch doesn’t have, say to support a VoIP deployment. But if the only thing driving the decision to replace a 100Mbps switch is that you’d feel better if those users have gigabit, you might be missing something. Again, know your traffic patterns.

Conclusion

You should never do anything in networking “just because.” Technology exists for a reason. Oftentimes, the way to know whether or not to implement a certain technology is to understand why it exists. In the case of ever-increasing Ethernet speeds, you might think, “Well…because faster.” Right. Faster. So by that logic, 10G is better than gigabit, and 40G better than that. “Oh, well 10G to the desktop is just silly. A user would *never* need that sort of bandwidth, and you can’t even get 10G LOMs for workstations with any ease right now anyway. Plus the cabling required. Oh. Hey…” Right. Hopefully, you see what I did there. Yes, gigabit is much less of a cost than 10G, but do your users actually need gigabit? If you’re confident they don’t need 10G, ask yourself why you’re confident that they *do* need 1G.

Am I advocating 100Mbps switches for user workstations across the board? Not really. Except in rare cases, I haven’t installed a fast Ethernet switch for a long time. But I am saying that if you’ve got a bunch of 100Mbps desktop switches that are doing what the users need, then the fact that they are merely fast Ethernet isn’t a good enough reason by itself to want to throw them out.

Now, if you’re a non-technically minded manager wanting to use this article to squash a purchase request, remember that there are a lot of reasons other than simple speed upgrades that would indeed warrant a 100Mbps switch replacement. Listen to your engineering people making the recommendation. That’s why you hired them. ;-)

 
Comments
  1. As a road warrior/remote worker I tend to agree with your statements re: relative performance. I’m waiting for the day when enterprises will only deploy wireless networks for new/refreshed office spaces and abandon traditional switches altogether. I worked at a (the) major networking vendor for a long time and hardly anyone ever plugged a notebook into the network — virtually everyone in an office of 300+ used wireless exclusively even though switch ports were available at each desk location. “Good enough” networking I suppose. Many people will trade off some performance and/or quality for convenience and mobility.

    • Agree. Some have shared the opinion that 802.11ac is the Ethernet switch killer. 802.11ac will still backhaul to switches, of course, but it obviates the need for wired connections at most desktops.

      • I would tend to agree that for the average office user, wireless will be the norm, even for IPT & VC.

        I was recently involved in a PoC for a new branch solution that was 100% wireless for the end user, with wired connections used only for backhaul and for infrastructure systems that required PoE or weren’t wireless capable. The upshot was that not a single user was aware of the change, no issues were reported, and there were even comments that it seemed faster in a couple of cases.

        I expect that 802.11ac/.11n branch networks will be the norm for that organisation going forward, especially as the cost savings and convenience (office space moves no longer have cabling issues, etc.) are considerable.

  2. Great post, Ethan. I personally feel that the benefits of 1Gbps to the desktop outweigh the costs. I think the bigger question these days is whether you should be burning 10G uplinks to connect all these 1Gbps users. In the past I’ve often had 192 1Gbps ports connected via 2 1Gbps uplinks with no issues whatsoever. Lately I see more engineers utilizing 10Gbps uplinks when all they really need is 1Gbps uplinks.

    Cheers!

    • I built out a couple large network closets last year and did 10G backhauls. These were to 3750X stacks with 7 members. Part of the reason was wanting to have LOTS of bandwidth for the VoIP deployment we were doing, rolling out a couple of hundred phones in the area. There’s no QoS like too much bandwidth, although we did QoS as well. Another reason was due to some dev groups in the area that kept going off on their own to stand up large instances instead of using a dev environment attached to the data center properly. We wanted to be sure there was less chance they’d impact other users in the area. But if it wasn’t for those mitigating circumstances, we probably wouldn’t have bothered.

      I share the experience of 2x1Gbps uplinks supporting 192 1Gbps desktop-facing ports with no issue for the most part.

  3. Most content distributors on the Internet are (apparently) shaping flows to limit bandwidth. The best I could do (in a non-scientific ad-hoc test) seemed to be keeping 3 simultaneous downloads running from iTunes, achieving a total of about 50 Mbps. Netflix streams are “only” around 6 Mbps each at Super HD quality.

    At first I think distributors were relying on the last mile to limit each flow. Now it looks like they’ve artificially capped flows at what they consider common or reasonable. This results in subpar performance for anyone with a fast connection. I wouldn’t be surprised if sales of fast consumer connections suffer because of this.

    So if anyone thinks their Internet should work faster, it’s worth considering that access to the particular services you use might not get any faster just because you speed up your local connection.

    I recently downgraded from 200M/10M to 110M/10M, as the only thing running faster than 50 Mbps was speedtest.net. The resulting savings is a nice bonus, too.

  4. Back in 2007 Cisco wrote a white paper explaining the benefits of GE…
    http://goo.gl/Y5wF53
    It’s been a while since I read the white paper, but I think they bring up two new issues (in addition to what Mr. Banks mentions). One is the buffering and “time on the wire” that servers “waste” serving data to FE clients. The other is the idea that a single person doesn’t seem to wait that much longer for data at FE speeds, but when you look at the wait time across a larger population you tend to see speed-ups, which translate into increased productivity.

    It should be noted Cisco sells network hardware and their opinion is biased.

    Unless you have users that have a demonstrable need for sustained GE speeds, I’m a fan of GE to the desktop and NOT doing 10GE uplinks–just GE uplinks. I’ve built closets with 10GE uplinks but it was just done because there was the money to do it. I don’t see them exercised enough to justify the cost.


  5. Great post. I don’t argue with any of the technical details here. I will say that outside of the potential cable plant issues with doing gigabit to the desktop, most of the budgetary arguments only apply if you are doing Cisco switches, where a large price gap still exists between Gigabit and 10/100. With any other vendor I’ve looked at recently, the price difference is not significant, and in some cases doesn’t even apply. Juniper, for example, only makes Gigabit access switches. Just my $0.02.


  6. Good points. In my experience, it’s the features/price ratio that dictates the purchasing decision. As noted above, there is still a premium for 1GE with Cisco. Plus, some advanced features are just not there otherwise – e.g., StackPower or modular power supplies for power budget expansion.
    And with most access going wireless today, the role of the access switch is changing. Look at any greenfield office today and the switch ports are mostly used by phones.