Living With The iPhone 6S+ 128GB

The Apple iPhone 6S+ 128GB is Apple’s current huge flagship phone. If you want dimensions, weight, screen specs, etc., please google them up. I add no value by regurgitating them here. Rather, I want to focus on my user experience, now that I’ve had the thing for several months.

How’s that huge form factor working out?

Fine. Yeah, it’s big — about as big as a checkbook. But it still fits in my pants pocket. I do not use a case for it, except when I’m hiking in the woods; day to day, a case would make it too big, in my opinion.

I find it hard to operate one-handed. My thumb can’t reach across or up the screen to get at certain things, and if I try, I often hit something I didn’t mean to. Hilarity ensues, usually accompanied by me sighing heavily.

Update. A few people have mentioned the “reachability” feature to me, where you lightly double-tap (not press) the home key, and the screen slides down about halfway. Yes, I’m aware of it, but I left it out initially because I feel it has limited uses. Items at the top of the screen (north-south) do indeed become easier to reach, but reaching across the screen (east-west) is still hard. Plus, you lose the visibility of half of whatever was on the screen, as that content slides off the bottom. No big deal in some screens, but annoying in others. Reachability is a good idea, but it doesn’t change the fact that this is a big phone.

Don’t forget that to use this phone one-handed, you have to balance the monster in your hand while pressing things with your thumb. This is not an easy task, made all the more peril-fraught considering how much the stupid thing costs. You don’t want to drop it just because you’re stabbing at an icon in a hurry.

Battery life. Tell me the truth.

It’s fine. I get a solid day out of it under almost any circumstance, even hiking in the woods with the GPS running and cell signal marginal, causing the phone to poll often. Even flying cross-country and watching Amazon Prime videos offline doesn’t have me reaching for a power brick. Guesstimate? I get 12+ hours of heavy use out of the device after 5 months or so.

Radio reception report: LTE, WiFi, Bluetooth.

Cell reception and LTE are fine most of the time, but there are moments where my wife’s 5S has 3 bars of LTE, while my 6S+ is limping along with a bar or two of 3G. We’ve seen the opposite scenario, however, so I don’t have a strong opinion here.

WiFi is fine. The phone has an 802.11ac chipset, and connects to either of the two Apple AirPort Extreme APs in my home with no drama, dropped connections, throughput challenges, or other weirdness. In general, I find that the WiFi performance on my phone matches the experience I’m having with my other devices, such as a MacBook Pro. Therefore, I consider it unremarkable. It just works.

I use Bluetooth on the phone most often to pair with audio head units in my cars. Pairing always works. Signal is always strong. Music and podcasts are always listenable, even at volume. I also have an earpiece that I pair now and then. Again, it just works.

Is the software reliable?

I am running iOS 9.2.1 as of this writing, and have no complaints. Several earlier versions in the iOS 9 family upset me greatly. At the moment, Apple seems to have things sorted. I pray they don’t alter the deal further.

I don’t use many of Apple’s built-in apps. For instance, the Podcasts app is irredeemably broken. Many of the rest they force on me aren’t interesting.

What do I use it for?

Here’s how I use my phone.

  • As a phone, but only occasionally. The iPhone voice app interface is familiar and predictable on those occasions I need it.
  • As a chat client. I run the Messages (iMessage), Slack, and Google Hangouts apps.
  • To consume media. I stream video via Amazon Prime, or watch it offline. I stream music via Spotify, or listen to offline copies. I listen to podcasts with Overcast.
  • To read e-mail & compose short responses. I gave up on Apple Mail, and am currently favoring Microsoft Outlook. Outlook lets me program swipe functions, which means I can fly through my inbox as most of the e-mail I get is boring crap sent by people who, by inference, hate me.
  • To read newsfeeds. I use the Feedly app, and consume almost all of my feeds via the phone, most often before going to sleep or getting up in the morning.
  • To read books. I use the Kindle app to consume books. I don’t do much of this on the phone, preferring my Kindle. However, a cat chewed on my Kindle Voyage and cracked the touch screen, and thus I’ve been using the iPhone again. It’s…an okay experience, I guess. The 6S+ screen is big enough to work as a Kindle device without wishing I was dead five pages in.
  • For wilderness navigation. I use the MotionX-GPS app for my outdoors jaunts. With the 6S+ screen being as large as it is, MotionX is a great tool. I’m surprisingly happy with this aspect of the phone, as it’s worked out better than expected. At least until the phone gets too cold (around 20F or so), at which time it shuts down.
  • As a nightstand clock. I use the unlocked version of the Nite Time app for this, and rest the phone on a stand in landscape mode.
  • As external biomemory. I track most of my life in Wunderlist, frequently when I’m out and about with my phone.

Those are the big things. Yes, I have many other apps I use on my iPhone, but they aren’t nearly as key to my iPhone experience.

One comment about apps on the 6S+: apps not written for its screen merely scale to fit, which makes them look weird — fonts too big, mostly. Apps written for the 6S+ render properly, and might also support a landscape view. However, landscape view on the 6S+ is less useful than you’d think it would be for many applications. The screen is big, but not so big that a split-pane view makes sense. At least, it doesn’t really work for me.

Do you miss your iPad Mini?

No. For me, the iPhone 6S+ is good enough to do the things I used the iPad Mini for previously. Not a perfect replacement, mind you. But good enough.

Can you type better on the bigger phone?

Not even slightly.

What accessories did you buy?

Highly recommended accessories I use every day.

Do you really use 128GB of storage?

Yes. I shoot video and take a lot of pictures. I also store a lot of video and audio content offline since I travel by jet often enough to care. While the bump from 16GB to 128GB was a ridiculous $200, I’m happy I have the capacity.

Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks

Will Public Cloud Make Us Prisoners Of Pricing?

A Packet Pushers listener wrote in with the following theory about public cloud eating all the private data centers of the world. Here’s an excerpt.

I think the more interesting division is the Morlock-Eloi division that is rising through cloud computing. Business people as Eloi can pick from a menu and decide what they need. The Morlocks in the cloud do all the technical work and make things happen. Once the Morlocks have the Eloi in complete cloud lock-in, they will raise prices. Those of us who are Morlocks will have to switch to providers to find work that is interesting. Only the companies that have the funds to afford their own Morlocks will avoid the price gouging that is to come…

What do you think?


I think the formula is a bit more complex. I believe market pressures exist in the unsettled cloud market that will keep prices under control for the foreseeable future.

Public Clouds Competing For Your Business

I think the Morlocks vs. Eloi comparison is apt. The more consumable we make IT, the easier it will be for the technically ignorant Eloi to use it successfully, but cluelessly. This will put the Morlocks in a position of power. And yes, that could mean that people working in enterprise IT today might find the most interesting job opportunities working for big cloud providers. 

However, I think there’s more in play here than the knowledge haves vs. have-nots. Public cloud is a competitive marketplace, and not a monopoly. Therefore, market forces are at work. As long as there are several public clouds to choose from, there will always be price competition.

That said, cloud is still young, and the competitive landscape isn’t in its final form yet. For example, there are the recent exits by Verizon and HP. And then there are reportedly immature players like SoftLayer. AWS is the strongest public cloud provider today, but Google & Azure offerings are catching up rapidly. Both of these have built-in markets to go after — lots of existing customer relationships they could turn into cloud consumers. Therefore, I believe pricing will remain part of the equation for the long-term.

Private Cloud Becoming Viable (?)

Another monkey wrench in the “public cloud is going to charge whatever they want” notion is OpenStack. OpenStack is hard to implement now, and that’s an adoption barrier for the mass market. But eventually, OpenStack will mature. When it does indeed grow up and become predictably easy to set up, reliable to operate, and stable in its codebase, I believe that will be a big factor in helping the Eloi decide to maybe keep their clouds private.

And don’t count out VMware. Sure, running a private cloud on VMware is hard on the budget. But VMware has valid reasons to keep their private cloud product set salable. A VMware private cloud keeps their enormous customer base in the fold, and generates revenue for its own sake. If VMware gets enough market pressure, I think they’ll figure out how to sell private cloud at less than extortionate prices.

The Talent Pool Will Grow

A big problem right now is that there is no one predictable way to build a private cloud — and built into that statement is an assumption that private cloud is desirable for all organizations. I’m not sure it is yet. But because there is no one standard way to go about the brave new IT, talent with the experience and wisdom to guide an organization through it is hard to come by — at least for right now.

For sake of argument, let’s assume that private cloud does become a generally accepted approach to IT. Having a generally accepted approach will reduce the barrier to private cloud adoption. If we get to technology stability — i.e., OpenStack or VMware becomes a standardized way to build private cloud — then we end up with products to train the masses on. In that case, we’d have an accessible talent pool to build out private clouds. As a result, we would yet again have downward pressure on public cloud pricing.

Public Cloud Still Has Architectural Issues

And let’s not forget that public cloud still has both latency and security challenges with answers that cost money. Latency challenges are addressable with enough money thrown at the problem (as well as carefully considered design), but that money adds to the cost of consuming public cloud. Security issues are similar — solvable with money perhaps, but that eats into the public cloud value proposition.

That means that the decision to move to public cloud is, ultimately, complicated. I don’t think the decision will ever be so easy or obvious that everyone will just do it. Thus, public cloud pricing will always be under pressure. And even for those organizations fully committed to public cloud, there will be other cloud alternatives.

I suppose a follow-up post would be a consideration of cloud lock-in. Would it be simply too hard to move to another cloud, and might that cause some price gouging? Could be. Of course, you could always build greenfield and migrate. Perhaps migrate is the new upgrade.

So, You Want To Be A Manager

As a younger engineer, I was frequently frustrated by co-workers or managers who had control, but lacked my technical ability. At that time, I equated knowledge with responsibility. I felt that if I knew intimately how something worked, I should be the one in charge. I should have control. From some of the frustrated e-mails we get at Packet Pushers, I know that others of you identify with this.

And so it was as a young man that I aspired to be a manager. Management looked like control to me. After all, I worked for a manager. That manager told me what to do, and I did it. That’s a simplification of the relationship, but at the root of it, there it was.

I thought that as I acquired technical expertise in operating systems, security, and networking, I should be the one holding the reins. Since I knew how the systems actually worked, and even understood how they worked together, I should be the one telling everyone else what to be doing.

That’s logical, perhaps. But it’s naive. Management is not engineering. Management is not technical leadership, at least not by default. Management is a skill all its own that, like anything else, must be learned. A good IT manager…

  • is experienced with people.
  • understands how businesses operate.
  • can translate business needs to technical requirements.
  • communicates those technical requirements to engineering.

That’s really what you’re signing up for when you want to be a manager. Managing people, especially hard-headed technical people, is an extraordinarily challenging job. Being a rock star in the data center doesn’t make you a rock star in the office.

Let’s say you choose not to salute my cautionary flag. You believe in your heart of hearts that if you were in charge, things would be better. Maybe you’re right, but consider this. If you are granted managerial responsibility, you are going to have to keep doing the engineering job you’ve always done.

Each time I’ve been a manager with direct reports, I’ve still had to perform engineering duties. And that’s true whether I, in my ignorance, pushed to be a manager, or whether the manager title was hung on me against my will. Doing both is no fun. Engineers think management is no big deal, and trivialize the workload. Don’t make this mistake.

If influence is really what you’re after, you don’t want the manager role. You want a technical leadership role. A technical lead with no direct reports allows you to be the excellent engineer you’ve trained so hard to be, while avoiding the burden of business meetings, budgets, reviews, executive interaction, and (to some degree) project management.

A technical lead role means that you can focus on design, engineering collaboration, and research. You can recommend for and against certain strategies to your manager, who then deals with the business end of things…like getting the solution paid for.

At this point, I feel a disturbance in the Force, as if a thousand readers are all saying, “But if I don’t become a manager, I’ll never get paid more money!” Depending on your employer, that might be true. But folks, it’s a trap. If money is your only reason for accepting a management proposition, it’s the wrong reason. You’ll be unhappy, and the extra money won’t make up the difference.

If you take on a manager role successfully, you’ll focus on management. Again, don’t confuse IT management with engineering. Yes, if you were previously an engineer, that knowledge and experience will come through as a manager. Doing both at the same time is, at best, difficult. Be careful what you wish for.

This piece was originally written for Human Infrastructure Magazine, a Packet Pushers publication. Subscribe to HIM here, and receive it in your inbox every couple of weeks or so.

Should Monitoring Systems Also Perform Mitigation?

In a recent presentation, I was introduced to Netscout’s TruView monitoring system, which includes the Pulse hardware appliance. The Pulse is a little guy that you plug into your network, where it phones home to the cloud. When it comes online, you’ll see it in your TruView console, and can configure it to do what you like. The purpose of a Pulse is to run network monitoring tests, such as transactional HTTP or VoIP, from specific remote locations, and report the test results centrally. In this way, you can tell when certain outlying sites under your care and feeding are underperforming.

As far as remote network performance monitoring systems go, TruView is similar to NetBeez, ThousandEyes, and doubtless some others. Each of these solutions has its pros and cons. They are useful. They are necessary. They do their jobs well, not unexpectedly for a market that’s got a lot of years behind it. We need monitoring, yes — even in the age of SDN. But I believe monitoring could eventually evolve and couple itself with SDN to become something more powerful.

Historically, monitoring solutions have been very good at alerting you when something has gone awry. Shiny red lights and sundry messages can tell us when a transaction time is too high, an interface is dropping too many packets, database commits are taking too long, or a WAN link’s jitter just went south. That information is wonderful, but doesn’t resolve the issue. A course of action is required.

Perhaps the future of monitoring is not in the gathering of information, but in the actions taken on the information. This is where the more interesting bits of software defined infrastructure come into play. Software defined infrastructure is admittedly immature, lacking in standards, and fraught with vendor contention. But I believe that for monitoring solutions to have long-term viability, they will need to have mitigation engines that can react to certain infrastructure problems. I’m presuming (perhaps laughably so) that we’ll have some modicum of software-defined interfaces eventually. But let’s say that happens, such that it becomes possible for developers to write monitoring solutions with mitigation engines for software defined infrastructure. Isn’t that a logical progression?

To make my point here, some solutions already do this sort of thing. Consider SD-WAN. If WAN links were my only consideration, would I need a separate transactional monitoring system to tell me that a given link fell far below the quality required for a voice call? Not in an SD-WAN world. I would have configured a policy such that my voice call would have been routed across a link capable of meeting a voice SLA. SD-WAN does both monitoring (maybe not transactional, but monitoring all the same) and takes action if required. The transactional monitoring supplied by a standalone system is — well, not uninteresting — but less interesting at that point.

Admittedly, this turns monitoring tools into something else entirely. They go from being polling engines and stats collectors into policy and configuration engines with a great deal of logic and complexity required, especially if they are to work on disparate network topologies. But as the network becomes more programmatically accessible, monitoring seems like mere table stakes – the bare minimum required to have a viable tool. Reconfiguring the network to maintain a predefined SLA seems like a logical long-term goal. Don’t just tell me about the problem. Fix it.
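To make the closed loop concrete, here’s a minimal sketch of what a policy-driven mitigation engine might look like. Everything in it is invented for illustration: the metric name, the `SlaPolicy` type, and the mitigation callback are hypothetical, not drawn from any vendor’s product.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SlaPolicy:
    metric: str                     # e.g. "jitter_ms" (hypothetical metric name)
    threshold: float                # breach when the latest sample exceeds this
    mitigate: Callable[[], None]    # reconfiguration action to fire on a breach

def evaluate(samples: Dict[str, float], policies: List[SlaPolicy]) -> List[str]:
    """Compare the latest samples against each policy; fire the mitigation
    action for any breach and return the names of the breached metrics."""
    breached = []
    for policy in policies:
        if samples.get(policy.metric, 0.0) > policy.threshold:
            policy.mitigate()
            breached.append(policy.metric)
    return breached

# Hypothetical usage: reroute voice traffic when jitter breaches 30 ms.
actions: List[str] = []
policy = SlaPolicy("jitter_ms", 30.0, lambda: actions.append("reroute-voice"))
print(evaluate({"jitter_ms": 45.0}, [policy]))   # ['jitter_ms']
print(actions)                                   # ['reroute-voice']
```

The interesting engineering hides inside the mitigation callback, of course; that’s where the topology awareness and vendor-specific reconfiguration logic would have to live.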

As an aside, SD-WAN is perhaps an unfair role model to set out here. SD-WAN has a limited problem scope. An SD-WAN forwarder only has to monitor the virtual links between it and other forwarders in the network, and make a forwarding decision across those links. In addition, the action set is limited. Forwarding across a large network with complex attributes made up of any number of vendors’ gear is a somewhat different scope than the one the SD-WAN vendors are solving. Still. There’s a startup idea here for someone.

Next BOSNOG Meetup 1/28/16 @ 6:30p at MS NERD Center

Attention Boston area networkers — the next Boston Network Operators Group meetup will be held at the Microsoft New England Research and Development (NERD) Center on January 28, 2016 @ 6:30p. Dave Husak, founder of Plexxi, is the featured speaker. Food and drink will be provided. I expect to be there, and hope to see you, too!

Sign up here.


Book Recommendation: Wasteland Blues


I am a fan of any sort of apocalyptic fiction. Movies. Books. Anime. Weird Al songs. You name it. If it posits a future after the world we know is gone, I’ll give it a try. Thus it is that I recommend to you Wasteland Blues, by Scott Christian Carr and my fellow Packet Pusher Andrew Conry-Murray.

The setting is a post-apocalyptic world filled with nasty critters, scary people, and a ragtag group thrown together for a cross-country journey. Wasteland Blues is an easy read. It’s a fun read. It’s even a memorable read. Months after I’ve completed it, certain images remain indelibly marked on my brain. (Specifically, the wheelchair. And the part about forced labor in the rock mine. Oh, and the big, nasty critter that…um…enough said. No spoilers.)

You should get the book, which is currently rocking 4.7 out of 5 stars on Amazon.

Oh, so you’re recommending a book by a friend of yours. Gee, why should we trust your obviously biased opinion? You’re not wrong. This is a book co-authored by my friend, Drew. I am biased. No doubt about it. So…let me put it this way. I read & watch a lot of sci-fi in my spare time. While most of the fiction I consume falls right back out of my head, I remember Wasteland Blues. That says a lot.

Resolve is easy. Planning & execution are hard.

I know from experience that resolutions are easy. We flex our determination muscle and resolve to achieve X. Making a resolution is easy, but bringing that resolution to fruition is hard. More of us have made resolutions than have seen them through.

Often, we think the issue of accomplishing our resolution is one of will or self-control. In that context, if we don’t realize our goal, it’s because we just didn’t want it badly enough. When we fail, we pity ourselves, have a consolation cookie or three, give up, and go back to a moribund contentment with the status quo. Maybe next year, we’ll be more serious, we think. More determined. Yes, we’ll try it all again at some future point when we can muster up the will to give it another go.

This is all wrong.

For me, difficulty in realizing goals has never been due to a lack of desire or will.

Failure to reach my goals is the natural result of a lack of planning.

My goals don’t just happen because I want them to. I must chart the path from where I am to where I want to be, and then follow that path. Let’s consider a few examples.

  1. Reading a book. If my goal is to finish a specific book, an attainable path to reach that goal is reading a chapter at a time. A formal plan might specify a day and the chapter to read on that day. When the plan is fully executed, I’ll have completed the book.
  2. Writing a blog post. I express many of my ideas and experiences through writing. When I write, I work from an outline. That outline is the plan. Without the outline, the blog post tends to wander, and may never be completed at all.
  3. Executing a change control. To perform a complex IT infrastructure task, I write detailed plans that describe the change, checkpoints along the way, estimated elapsed times, a backout plan, code to apply, scripts to execute, and so on. Without that plan, the change control is likely to fail.
  4. Earning a certification. The path to a professional certification includes acquiring a mastery of topics sufficient to pass one or more exams. Therefore, a certification plan involves documents and books to read and lab work to perform, all along a timeline that culminates in an exam attempt. Without the plan, a certification is more of an idea than a reality — something I think I’d like, but never seem to make progress toward.

While I don’t have any specific resolutions for the new year, I didn’t accomplish as much as I’d have liked to in 2015. In part, this is due to inadequate planning. I’ve been so busy over the last several months that I haven’t had time to plan. And that’s exactly wrong. The way I get work done, I don’t have time NOT to plan. Looking forward, I must have a plan so that I will have sufficient focus.

Allow me to suggest that you, too, would benefit from a plan for whatever your resolutions might be. Desire and determination will bring resolve, but planning will increase the chance of actual execution. Break your goal down into manageable tasks, assign due dates, and then make it so.

Good luck. I’m off to spend some quality time with Wunderlist.


Join Me For An SD-WAN Webinar with Silver Peak 19-Nov-2015

I’m leading an SD-WAN related webinar as a guest of Silver Peak on November 19, 2015 at 12pm PT / 3pm ET. If you’d like to attend, register here.

I’ll be covering 5 ways that SD-WAN impacts network operations. The big idea is to explain what SD-WAN is, apply that architecture to traditional WANs, and note the positive impacts. Tony Thompson from Silver Peak will conclude the webinar with an introduction of Silver Peak’s broadband / hybrid WAN platform branded “Unity.”

I hope to see you there.

On APIs: Cars, not assembly lines.

In recent years, infrastructure vendors have been proudly pointing out their APIs. The idea is that because a chunk of infrastructure can be monitored and configured with APIs, the product can be described as automation-ready or open.

Vendors, you’re getting it wrong here. Even assuming that the APIs are both well-documented and a thorough exposure of system capabilities (both bad assumptions in many cases), yes, an API does open a system up. But consider — how many of your customers have the time or ability to write a custom application to leverage those APIs? And even if they did, can they handle the technical debt incurred by the system they create? Most of your customers have neither the time nor the debt tolerance.

Why is there a partial disconnect between vendors and customers when it comes to APIs? I believe it’s because vendors often focus on the high-volume, high-dollar accounts that make up the most interesting part of their sales pipeline. In the networking space, buyers with 8 or 9 figure spends probably do have access to the expertise required to create customized systems leveraging APIs. Therefore, open APIs seem like a big deal that should be in all the marketing slide decks.

Perhaps. But let’s consider an alternate way of looking at this issue. APIs are tools used to build something valuable. From a certain point of view, it makes as much sense for a vendor to brag about their API as it does for an auto manufacturer to brag about their assembly line. I care about the car. I care little about how the car was made. APIs are not the car. Please sell me a car.

Some folks in the vendor community get it, don’t get me wrong. Here’s a key response I received to a tweet the other day from Colin Dixon of Brocade and the OpenDaylight Project.

Remember, most customers are not spending 8 or 9 figures. Most customers are mid-market. They spend 5 or 6 figures annually. Their operations teams are made up of a small number of folks with an increasing workload, often spread across multiple silos. These folks don’t care terribly much about the API itself. Rather, they care about a fully integrated system that, out of the box, makes their lives easier.

The overlooked mid-market — that as an aggregate spends billions — is looking for their IT infrastructure vendors to make their operations easier. They need simple configuration. They need excellent reporting. They need fabulous monitoring. They need full-stack integration. And they don’t have the time or ability to write these systems themselves. Consider the success of hyperconvergence: a critical part of the value proposition is the operational interface.

A historical case-in-point is SNMP. The mid-market isn’t creating their own network management stations from scratch simply because there’s a documented MIB. Rather, they buy an NMS from SolarWinds or one of the dozens of other players in the space. That said, might they write a script to access some particular OID containing a value of special interest to them? Absolutely. I’ve done that myself. But that’s a far cry from writing an NMS.
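That kind of one-off OID script really is small, which is the point. Here’s a sketch, assuming the net-snmp `snmpget` CLI is installed; the host, community string, and threshold are placeholders, and the OID shown (an ifInErrors instance) is just an example of a value you might care about.

```python
import subprocess

def get_oid(host: str, community: str, oid: str) -> str:
    """Fetch a single OID value using net-snmp's snmpget.
    -Oqv prints just the value, with no name or type prefix."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def over_threshold(raw_value: str, limit: int) -> bool:
    """Alert check for an integer-valued OID, e.g. an error counter."""
    return int(raw_value) > limit

# Hypothetical usage against a placeholder host and OID:
#   errors = get_oid("192.0.2.1", "public", "1.3.6.1.2.1.2.2.1.14.3")
#   if over_threshold(errors, 100):
#       ...  # send the alert
```

A dozen lines of glue like this is worlds away from writing an NMS, which is exactly the distinction the mid-market cares about.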

Vendors, I believe it’s time to start thinking about APIs and the increasingly converged data center in the same way we think about SNMP’s usefulness. The vast majority of your customers want open APIs, sure. But what they want (and need) even more is a turnkey system that leverages APIs without having to know or care about them.

Cars. Not assembly lines.

CCIE Recertification + Certification News 2015

I have been a CCIE since 2008. Every 2 years, I’ve gone through the process of re-certifying in the same way: I take the routing & switching written exam until I pass it. I have friends who like to pass a different CCIE written exam for the variety and knowledge expansion of studying another technical specialty, but I’ve always been too busy to be willing to make that work. Most of my IT career I’ve had at least one job as well as a side hustle. Studying something other than routing and switching just for the variety hasn’t been something I was willing to take on. Besides, routing and switching is where I’ve spent much of my time.

The last time I re-certified, it took me three attempts to pass the CCIE R&S written exam. While that exam is a challenge that many people fail the first time out, I felt like I was getting rusty on some fundamentals. Three attempts was not the end of the world, but the effort felt forced. I wanted a refresher.

Refreshments served from the firehose. Bon appetit.

Now that my next re-certification cycle is upon me, I opted to refresh by spending ten days with Narbik Kocharians, whose bootcamp was helpful in my lab attempt back in 2008. While I did many hours of the lab work that most of the students were eager to conquer in preparation for their lab exams, I focused chiefly on Narbik’s lectures. The big idea for me was not to bang around on the CLI, which I’m still adept at. Rather, I wanted to review the major CCIE R&S topics, Narbik-style. Yes, CLI is helpful, and I still do that sort of work. But sometimes you just need to recall the intricacies of how EIGRP variance works, review what the OSPF LSA types are, remember all the things about RIP you forgot because you rarely see it in real life, and contemplate NBMA behavior in a DMVPN cloud.

Narbik is a full-time CCIE instructor, and his company, Micronics Training, is part of the Cisco 360 program. Narbik has a close relationship with Cisco, and was one of three co-authors on the current CCIE R&S v5 exam guide published by CiscoPress. Narbik knows his material very well, and he hones his lectures constantly with new information, lab findings, IOS quirks, and real-world anecdotes. Plus, he wastes no words. As in, if you miss a paragraph or two because you decided to check Twitter, there’s a decent chance you’re lost until he changes to a new topic.

A Narbik lecture is not just drinking from a firehose; it’s drinking from a firehose that’s on full-blast, delivering relentlessly until Narbik is done. There are no lecture breaks. There is no repetition of big ideas. You listen. You digest. You write constantly. You get what diagrams you can. You type the CLI commands he dictates off the top of his head in real time, illustrating what he’s teaching. You focus utterly, or you miss out. And you’d better have done your studying before you show up in the class. If you think you can show up to this bootcamp cold and just “get it,” you’re kidding yourself badly. Most of the lectures will go over your head unless you are already well-grounded in the CCIE blueprint topics.

For me, a Narbik lecture is the best thing in the world. I was that kid in school who was bored after 10 minutes of class because the information came too slowly. I would “get it” quickly, and often read ahead in the book to predict what the teacher was about to go over next. It was a game to keep myself entertained because school just didn’t move fast enough. There is no “slow” with Narbik. If your mind is always in overdrive, jumping from thing to thing because nothing moves quickly enough to keep you engaged, Narbik is your instructor.

Over the 10-day bootcamp, I heard Narbik deliver the following lectures.

  1. DMVPN
  2. EIGRP
  3. OSPF
  4. MPLS
  5. L3VPN
  6. QoS
  7. BGP
  8. L2 Security
  9. RIPv2
  10. Multicast
  11. IPv6
  12. Redistribution

This isn’t everything on the CCIE written blueprint, but it is a sound list of core topics. Using Microsoft OneNote, I took thousands of words of notes, accompanied by hours of audio.

Was it worth it? For me, the refresher was invigorating and worthwhile. From a technical perspective, I needed it even more than I thought I did. Even if I wasn’t re-certifying my CCIE digits, sitting in that classroom was an excellent experience. Now, on to the fun job of diligent studying, augmenting my notes with supporting documents, and making it through the exam.


Me with Narbik in October 2015. I can’t explain the look on my face. It’s just weird.

CCIE written retake policy updated.

If you are engaged in the CCIE program, take note of the following from Cisco’s CCIE/CCDE written exam policy page.

Effective October 1, 2015, the following policy will no longer be in effect: Candidates may attempt any CCIE or CCDE written exam up to four times per rolling calendar year. Candidates cannot retake the same written exam more than four times per rolling calendar year regardless of passing or failing the exam.

This is encouraging to me, as (depressing though it is to contemplate) it’s entirely possible to fail that exam four times in a row. I’m not planning on that happening to me, but being limited to four attempts in a year was disconcerting if your CCIE status was at stake. Now, it appears you can keep throwing $400 at Cisco as many times as you care to, assuming each attempt comes at least 15 days after the last.

Other certification bits.

I don’t write about certifications very much these days, because I’ve lost interest in most of them. But here are a few other points about certs for those of you who are still compelled by them.

  1. Yes, I’ll be pursuing CCIE Emeritus status if/when Cisco makes that available to me sometime in 2018. I don’t spend much time at the Cisco CLI these days, working more with a multi-vendor range of technology. While many take their CCIEs further than I have, becoming double, triple, or more CCIEs, that doesn’t make sense for my specific career. I’ve already achieved what I personally wanted to with the CCIE program. There is no doubt the certification changed my life, but I’m heading places now that the CCIE can’t take me.
  2. The CCDE program is still interesting to me, but I find the focus on service provider and very large enterprise technologies a disadvantage for me. Lots of work to get through the study, and I lack sufficient motivation to make a go of it right now. I still believe it’s a great program. Maybe I’ll get back to it someday.
  3. The ONF has launched a couple of SDN related certifications. They don’t look overly difficult, at least at a glance. I’m thinking of giving the ONF Certified SDN Engineer exam a try completely cold to see what I’m in for. Then I’ll pursue a course of study to fill in my no doubt many weak areas. If I do try it, I’ll pass along what I can without betraying the integrity of the program.

Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks

Would I take Wireshark training?

From time to time, the Packet Pushers inbox receives a message related to career development. Here is one such set of questions, along with my answers, modified to improve clarity.

Q: I’d love to get your opinions on the relevance of Wireshark technical skills in today’s enterprise environments.

A: When you need a packet analysis tool, you need it. But you likely won’t use packet analysis every day. Wireshark’s relevance to you depends a lot on how often you find yourself resorting to packet analysis to resolve a problem.

In my experience, the root cause of most issues is a higher-level problem, not a packet-level one. Therefore, packet analysis is my tool of last resort.

When I have resorted to packet analysis, Wireshark has helped me resolve issues by…

  • Proving certain packets were or were not present, and on what network segments they were appearing.
  • Decoding packets to expose odd behavior — for example, how MS Lync performs SIP signaling.

Those are just two examples from my experience. And for what it’s worth, any decent packet capture tool can be helpful for these sorts of tasks.

  • At a data center site I supported for almost 4 years, we had a large WildPackets (now Savvius) install. I used it most often to troubleshoot complex issues with customers directly connected to our network over a WAN.
  • Tcpdump at a Linux CLI has often been sufficient by itself. Even so, I’d often import a tcpdump capture into Wireshark for further analysis if the hex dump on the screen wasn’t helping or if the data was moving too fast.

To summarize, if the buck stops with you when it comes to troubleshooting strange and bizarre application behavior, you’ll want to be able to use a packet capture & analysis tool effectively. Wireshark is ubiquitous; most network engineers use it. Wireshark has an active user and development community. Plus, there is a commercial variant through Riverbed if you care to go that route. Therefore, I view Wireshark as a safe packet analysis tool to spend time learning intimately.

Q: I am at a crossroads of having my employer pay for Wireshark training, but they require a one year commitment of employment in exchange for the training.

A: Personally, I don’t care what training is being offered — I’m not willing to indenture myself to get it. If my employer gives me the training for “free” as long as I stay a year, my attitude is that I must be willing to pay for the training myself if I choose to jump ship. And if I’m not willing, I won’t make the commitment to the employer.

Employers that lock employees into contracts for what are, overall, nominal expenses to them don’t have the employee’s best interests at heart. You’re just a piece of meat they are trying to keep captive.

Now, hiring and retaining staff is hard — good staff even harder. I get that. I have been a manager several times in my career and am a small business owner with employees this very moment. However, a training class that ultimately benefits the employer should be viewed as a cost of doing business. By indenturing you for a year, they’ve twisted the class into leverage to keep you around. That arrangement is mostly downside for you, and mostly upside for them.

The way I see it, such servitude is a trap unless you’re willing to buy yourself out.

Q: What is your experience in the enterprise environment with Wireshark troubleshooting? Is it even relevant anymore with all of the enterprise applications and taps deployed in today’s large network environments?

A: I have not taken a Wireshark class. But if my experience from almost 20 years ago taking an EtherPeek class is still relevant, you will learn far more in a packet-level troubleshooting class than simply how to use Wireshark. You’ll get down into the bowels of protocols, learn to read headers & flags, learn to predict things like TCP acknowledgement & sequence numbers, etc. That’s fundamental knowledge of nerdy protocol specifics that you can’t get in many other courses. If that sort of thing appeals to you, the class will be a great experience.
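
As a quick illustration of that seq/ack prediction idea, here’s a minimal Python sketch. It’s a hypothetical helper of my own devising — not anything from a particular course — showing the rule those classes drill into you: the receiver’s acknowledgment number is the sender’s sequence number plus the sequence space consumed.

```python
def expected_ack(seq: int, payload_len: int, syn: bool = False, fin: bool = False) -> int:
    """Return the ACK number the peer should send for this segment.

    SYN and FIN each consume one sequence number, even though they
    carry no payload bytes of their own.
    """
    consumed = payload_len + (1 if syn else 0) + (1 if fin else 0)
    return (seq + consumed) % 2**32  # the TCP sequence space wraps at 2^32

# A SYN with initial sequence number 1000 is ACKed with 1001.
print(expected_ack(1000, 0, syn=True))  # -> 1001
# A 500-byte data segment starting at seq 1001 is ACKed with 1501.
print(expected_ack(1001, 500))          # -> 1501
```

Work a few of these by hand against a live capture and the ACK column in Wireshark stops being noise and starts being a story.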

Of course, I presume you’ll also learn Wireshark as a tool and how to make the most of it. There’s lots of power there, many advanced features, a filtering language, the ability to write plugins, and so on.

Stepping back from Wireshark specifically, the question is really about packet-level network analysis. Is that relevant in a modern data center environment, considering other analysis tools that might be in play? My opinion is that packet analysis is now and always will be relevant, no matter how large an environment gets, and no matter what other network analysis tools might be deployed.

In my mind, packet analysis will forever be useful because devices in a network can change a packet in a variety of ways.

  • Packets can be filtered (dropped). If you don’t know where a packet is being lost in a long chain of devices, a packet analyzer can help. Filtering is common with IPS appliances, firewalls, and routers.
  • Packet payloads can be changed. If that payload is being changed in some way, understanding what exactly is being changed can help a troubleshooting effort. Payload munging is common with load balancers.
  • Packets can be simply forwarded…but more slowly than expected. Packet analyzers can help identify the source of a delay by making it easy to quantify inter-packet gaps. Capture before a given device, and the flow seems normal. Capture after that same device, and the conversation gets ugly.
  • Packets get encapsulated. Understanding which device encapsulated a packet in what sort of wrapper and where that encapsulated packet is headed can sometimes shed light on a complex problem, say where a packet is getting shoved into a GRE or VXLAN tunnel you weren’t expecting.
  • Packets can be translated. The packet analyzer can help you determine which NAT deity to sacrifice a chicken to.
  • Packets can be tagged. MPLS and 802.1q for starters.
  • Packet headers can be manipulated. You won’t believe some of the things that show up in the TCP options field. Or the DSCP values being set (or not set) in the ToS byte. Etc.

And so on. Other sorts of analysis tools tend to function at a higher level and aren’t as good at correlating complex data flows to detect problems. Admittedly, packet analysis tools provide a close-up, intimate view of data, and it can be hard to see the forest for the trees. But with training and experience, humans are often the best data correlators available. Therefore, my feeling remains that packet analysis might be the “last resort” tool in your toolbox, but as I said earlier in this post: when you need it, you need it.
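
To make the inter-packet gap point concrete, the classic libpcap file format is simple enough that you can pull packet timestamps out of a capture with nothing but the standard library. This is a minimal sketch, assuming a little-endian pcap file with microsecond timestamps (magic 0xa1b2c3d4) and the standard 24-byte global header; the two-packet capture at the bottom is fabricated for illustration.

```python
import struct

def inter_packet_gaps(data: bytes):
    """Yield the gap in seconds between consecutive packets in a classic pcap."""
    offset = 24  # skip the pcap global header
    prev = None
    while offset + 16 <= len(data):
        ts_sec, ts_usec, incl_len, _orig_len = struct.unpack_from("<IIII", data, offset)
        t = ts_sec + ts_usec / 1_000_000
        if prev is not None:
            yield t - prev
        prev = t
        offset += 16 + incl_len  # per-packet record header plus captured bytes

# Fabricate a tiny two-packet capture, 0.5 seconds apart, to demo the idea.
header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
pkt1 = struct.pack("<IIII", 100, 0, 4, 4) + b"AAAA"
pkt2 = struct.pack("<IIII", 100, 500_000, 4, 4) + b"BBBB"
print(list(inter_packet_gaps(header + pkt1 + pkt2)))  # -> [0.5]
```

Point the same function at captures taken before and after a suspect device and the delay it injects falls right out — though note that modern tools may write pcapng rather than classic pcap, so treat this strictly as a sketch.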

By the way, don’t let me limit Wireshark to being a troubleshooting tool. Troubleshooting might be the primary purpose of Wireshark for most engineers, but it’s also a learning tool.

  • Execute a random traffic capture with Wireshark, and challenge yourself to explain every flow you observe. It’s a bit like looking into the Matrix, and provides useful insights. (Hint: if you’ve never done this before, you’ll quickly learn what unknown unicast flooding is as well as just how much IPv6 you’ve deployed, knowingly or not.)
  • Wireshark is also useful for reconstructing network conversations, which, depending on many criteria, may not be ethical, compliant with corporate policy, or even legal. But it can be done.
  • Wireshark is also an interesting discovery tool. What hosts are really out there on the wire, and what is a baseline of normal conversation for them? I admit to whiling away hours down the rabbit hole, simply working through various conversations in packet traces, trying to comprehend what I was looking at.
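
If you want to try the “explain every flow” exercise programmatically, even a crude tally by EtherType surfaces the surprises. A hypothetical sketch — the two frames here are fabricated, not from a real capture:

```python
import struct
from collections import Counter

# A few common EtherTypes; anything else shows up as a raw hex value.
ETHERTYPES = {0x0800: "IPv4", 0x0806: "ARP", 0x86DD: "IPv6", 0x8100: "802.1Q"}

def tally_ethertypes(frames):
    """Count frames by EtherType (bytes 12-13 of an untagged Ethernet frame)."""
    tally = Counter()
    for frame in frames:
        (ethertype,) = struct.unpack_from("!H", frame, 12)  # network byte order
        tally[ETHERTYPES.get(ethertype, hex(ethertype))] += 1
    return tally

# Two fabricated frames: one IPv4, and the IPv6 you didn't know you deployed.
dst, src = b"\xff" * 6, b"\x02" * 6
frames = [dst + src + b"\x08\x00" + b"\x00" * 46,
          dst + src + b"\x86\xdd" + b"\x00" * 46]
print(tally_ethertypes(frames))  # -> Counter({'IPv4': 1, 'IPv6': 1})
```

Run something like this over a few minutes of real traffic and the unexpected protocols on your wire introduce themselves quickly.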

Q: Have you covered Wireshark on Packet Pushers at all?

A: Packet Pushers has produced some content about Wireshark over the years. Click through the link below. A show we recorded with Gerald Combs will turn up, as well as some community blogger articles. None of the Wireshark content you’ll find is terribly recent, but it’s still content you might find helpful.


A Few Points About VMware EVO SDDC Networking

A Packet Pushers listener who heard us chatting about VMware’s EVO SDDC solution raised a few concerns about the networking functionality in the current version of EVO SDDC. I was able to talk briefly with Krish Sivakumar, Director of Product Marketing for EVO SDDC, and Ven Immani, Senior Technical Marketing Engineer for EVO SDDC, at VMware to help clarify some of the issues.


EVO SDDC comes with switches running Cumulus Linux, the network operating system from Cumulus Networks. The switches & networking functionality are meant to be transparent to the customer. That is to say, the EVO SDDC solution comes with two ToR switches per rack, and racks are interconnected with spine switches, all of which are provided by VMware as a part of the EVO SDDC purchase.

EVO SDDC does not expect that the switches will be configured by hand. Rather, the expectation is that they will be configured automatically. EVO SDDC users configure the compute, storage, and networking as a whole solution using a “single pane of glass” UI. There is no assumption that the existing network team will be inheriting a new Cumulus Linux-based network to learn how to configure.

Further, there is an assumption that the ToR and spine switches purchased as a part of the EVO SDDC solution are dedicated to EVO SDDC. VMware is not anticipating that customers will use spare ports that might be available to uplink other data center hosts.

That said, there are concerns with integrating the leaf-spine network that arrives as a part of EVO SDDC with the rest of the network. In addition, there are concerns about the sorts of applications that can run on the EVO SDDC platform considering that Cumulus Linux is the NOS of choice.

Raised concerns & VMware’s responses

Concern one: as currently implemented, the EVO SDDC network is layer 2 only, not layer 3.

VMware’s response. As the product heads to market by the end of the year, it will support upstream L3 connectivity via BGP or OSPF in the ToR switches. The 40G spine layer is and will remain layer 2 only, as the spine’s job in the EVO SDDC environment is for east-west connectivity among EVO SDDC racks and not connectivity to the rest of the data center.

Customers who need to route between workloads in other parts of the data center would use their existing core. Also note that a common scenario for EVO SDDC customers will be inclusion of VMware NSX for network virtualization, meaning that workloads are assigned into their own VXLAN domain, and interdomain routing would happen as it does in NSX environments today, possibly via a gateway device.

I might dig into the nuts and bolts more deeply in a future post, but couldn’t get into too many specifics in our time-limited call today.

Concern two: there is no PIM support in the shipping version of Cumulus Linux. Therefore, how is IP multicast supported in EVO SDDC?

VMware’s response. Cumulus Linux supports IGMP snooping and L2 multicast. In other words, Cumulus Linux sees IGMP join requests, and forwards multicast traffic within the L2 domain to the requestors. This satisfies the requirement of VMware VSAN, which is part of the EVO SDDC solution. VSAN only needs L2 multicast, and not L3. As of today, VSAN nodes must exist in the same L2 domain.

However, if a customer application requires a true L3 multicast tree based on PIM, that is not handled by the EVO SDDC network. Check with VMware, understand your application requirements well, and do a technical review if you want to host a L3 multicast application requiring PIM on EVO SDDC.

Concern three: there doesn’t seem to be a network reference architecture for EVO SDDC. True or false?

VMware’s response. There is no reference network architecture documentation for EVO SDDC as yet. However, VMware is working actively on this. There are two families of documents coming. One will cover typical customer networks and explain how to integrate those networks with the EVO SDDC network. The other will involve virtual networking architectures using NSX, and will borrow heavily from NSX best practices that have been already established.

Separation of duties

A final consideration is that EVO SDDC heavily blurs the lines of responsibility between the networking and virtualization silos. My take is that the networking team should not have to configure the EVO SDDC network; let the virtualization team own it via the single pane of glass. By that, I don’t mean the virtualization team will actually have to configure the network. I mean that the network configuration will be a largely transparent element of the overall EVO SDDC solution.

Considering that the solution is supposed to be completely automated across compute, storage, and network tiers, let EVO SDDC do what it’s supposed to do, network folks. Assign IP blocks. Help integrate the edge of the solution with the core of the data center. Monitor utilization and other useful statistics. But don’t expect to have to get in there and manhandle switch port configurations, tweak protocols, etc. In theory, you don’t have to and shouldn’t even want to. Part of the EVO SDDC value proposition is that it will “just work.” (I heard you snicker. Stop it right now.)

If there’s an impractical side to that point of view, please tell me in the comments. Maybe it’s not realistic, although my gut says it is, much like virtual switches tend to be managed by virtualization folks. Provide guidance. Coordinate connectivity to the legacy network. Set expectations. But don’t get in there with a wrench to make it go. If I’m wrong on this separation-of-duties notion, tell me what I’m overlooking.


Interop 2016: Introducing The Future of Networking Track @interop

For the last several North American Interop conferences, I have been the Infrastructure track chair or co-chair. For Interop Las Vegas 2016, I will be doing something else. Greg Ferro and I are working together to create a new premium track titled The Future of Networking. In that track, Greg and I, along with several other speakers, will be discussing data center and wide area networking technologies and trends that we believe will be important over the next few years.

The point of the Future of Networking track is not to sit around and prognosticate from an ivory tower about what we think might happen. The reading of chicken entrails will be strictly limited — we promise. Rather, we will examine emerging networking technologies with genuine momentum behind them and look at their potential business and technical impact to organizations.

Put another way, we want the Future of Networking to address what you should care about as the endless parade of startups and new products clamor for your attention (and dollars). The idea is to understand what’s coming, think about it in the context of what your business needs, and make smart decisions right now in preparation. The decisions you make today will impact the decisions you are able to make tomorrow.

We’ll chat more about this track as it develops, providing registration information, etc. Stay tuned!

Note: if you’ve worked with me in the past on Infrastructure track content submission, I can put you in touch with the right folks for the upcoming Interop.


The FCC Might Make It Harder To Use Wifi Firmware Of Your Choice

There’s an ongoing issue in the wifi world where the FCC has proposed some new rules. The rules could effectively prevent using third-party firmware in a wireless device. Why? The FCC is finding radio interference in parts of the wifi spectrum where there should be none if the interfering devices were operating within the boundaries of US law.

In some of these cases, the interference could cause a serious safety issue, something the FCC has the power to help prevent. If vendors make it difficult or impossible for users to install third-party wireless operating systems, that prevents a third-party OS from enabling radio channels or power settings that might be illegal in the US.

I’m not all that interested in expressing a strong opinion about this, as I see both sides of the issue and find this one nuanced and a bit complicated. As a former board member of a small FM radio station, I both respect and appreciate the FCC’s role in managing the airwaves, no matter where in the spectrum the communications fall. That said, I tend to think that any ruling that would prevent the use of third party firmware is a significant loss for both the open source community and consumers.

No matter what my feelings are, I mostly want to raise awareness through this post, as the FCC is still open for comments from the general public on the topic, at least until 9-October-2015. This is the sort of topic that will raise the ire of many. No doubt some will want to exercise their right to share some useful thoughts with the FCC on how to phrase their proposed changes differently.

But first, you should read a bit more. Rich Brown is involved in the wireless world and open source community, and posted a thoughtful commentary here.

For more information on this issue, please read…


Google Plus Is Mouldering

A quick search for “Google Plus is dead” reveals a number of recent articles about the pending death of the social media platform. It’s not fair to say it’s dead as yet. But it’s certainly mouldering.

I took an informal survey on Twitter, LinkedIn, and Slack, asking folks if they were still using G+. Here is an anonymous compilation of those results.

I use G+ regularly. (6)

  1. I do. (Note, shared by a Google employee friend of mine.)
  2. I am still there…Maybe this is why I felt so lonely. 😕
  3. I use the communities almost daily; but it’s been there a while. Could easily move it to a different service.
  4. I do it for my blog and results are good.
  5. Yep.
  6. I do use G+ to about the same extent I use Facebook and Twitter. I prefer G+ over Facebook. Twitter is too different to compare.

G+ has residual value. (6)

  1. It’s definitely gone downhill but there appear to be some ardent users still. But then again, some people still use Friendfeed too! I get comments and +1’s from real people. And it’s worth adding your stuff there just so Google sees it!
  2. I was, sort of, for photography communities and when I was playing their game, Ingress. That’s all it’s good for as far as I’m concerned.
  3. Not for Facebook type stuff, but I have a group on there for network/IT stuff. I like the format better for that. And less chatter/clutter compared to Facebook.
  4. I use it twice a month, maybe.
  5. G+ is too good. It’s too attractive to marketing people. I just skim it for obviously amateur content, does that count as use?
  6. I use it to find how many fans I have in India.

G+ is dead to me, or never was alive at all. (32)

  1. I liked the idea at first, but I just wish the circles could be followed as well as assigned.
  2. I don’t see much happening there. But then, I never really put much time into it in the first place.
  3. Never got around to setting up an account.
  4. It was A.U.D.b.V. = Activated, Unused, De-activated by Vendor.  It’s a shame really because anyone wanting to hit Google trends needed to use this…now Google will have to actually add up other sites social trends.
  5. Setup my account and never logged in again.
  6. Nope, it had some nice features, but I never really saw much in the way of posts that I was interested in there, so eventually stopped checking it.
  7. Nope, I liked it better than Facebook, but none of my friends ever made the transition.
  8. Never did.
  9. Nope.
  10. Created an account one day and never went back.
  11. Oh yea…Forgot about G+…
  12. I tried to use it but didn’t see much engagement.
  13. I use it to share stuff, but only because of due diligence (as opposed to enthusiasm or pleasure).
  14. What’s G+? Can you remind me again?
  15. Nope. Gave up on G+ long ago.
  16. Nope. I used G+ last in 2012 to express my displeasure in their real name policy.
  17. I stopped using it about 2 months after it came out.
  18. Never used it – never really liked it.
  19. Nope, put a message on there to follow me on the Twitters years ago…
  20. Nope. Have some stale account out there, but never really got into it. One social media platform too many…
  21. May have actually “used it” twice.
  22. I Waved goodbye to Plus a long time ago! (Pretty sure this respondent capitalized that “W” on purpose.)
  23. I didn’t realize G+ was still live. Haven’t looked in over a year.
  24. No, it has been a wasteland for years.
  25. No, I actually like the feel/format just never used it.
  26. Never used it really.
  27. I have an account but I barely use it.
  28. Still?
  29. I do not really use G+ anymore. I wanted it to work. Just never could get into it.
  30. No. I had high hopes that it would be the social platform I used to interact with my professional network, but here I am on Twitter.
  31. I might have looked at it two or three times after I signed up.
  32. Funny you should mention it. I checked it out again yesterday because I’m so bored with Twitter. I wish G+ would have taken off.

I think we can see where this is going.



What Does It Mean When A Project Has Been Forked?

Open source projects that involve lots of folks sometimes run into conflicts. Should the project go in direction X, or direction Y? Is feature A more important, or feature B? And so on. Sometimes the concerns around an open source project are more pragmatic than pedantic. Should we, as a commercial entity, continue to use this open source project as is, or go in our own direction with it?

The keyword to look for in these circumstances is fork. If an open source project has been forked, the implication is often that someone has picked up their marbles and gone to play somewhere else. I don’t mean to say that all forks are bad, but the term does carry a certain tension. To quote Wikipedia,

…a project fork happens when developers take a copy of source code from one software package and start independent development on it, creating a distinct and separate piece of software. The term often implies not merely a development branch, but a split in the developer community, a form of schism.

I raise this issue as the good folks at Brocade want to be very sure that I (and you) understand that their Brocade SDN Controller is not a fork of the OpenDaylight Project. Lest I have led you astray in anything I have written — and I know I got it wrong once in the past — I want readers to be clear on this forking point.

For instance, I mentioned in this recent post,

So, what do I mean that the Brocade SDN Controller is “more or less” OpenDaylight? Well, ODL is a controller made up of several different projects. Brocade might not use every project that’s a part of the official ODL releases. And, it’s possible they’ll add some of their own projects that don’t end up as part of the ODL package. For example, in the 2.0 release announced on 15-September-2015, Brocade has deprecated the ODL-standard DLUX UI, replacing it with a custom Brocade-branded UI.

If you thought that meant “Brocade forked ODL,” it does not. One of my contacts at Brocade stated for maximum clarity,

Everything in our controller distro is from ODL (core article of faith), although not everything from ODL is in our distro. DLUX is one thing that is not in our distro with 2.0, although one could download and use that if desired. The Brocade UI is very explicitly technically separated from, though commercially bundled with, the controller distro (just like Topology Manager). Anything that is Brocade special sauce is developed separately from the core controller distro, because we have pledged to stay in synch with the Project—this is both a user expectation, and also the way we ensure that we can continue to leverage any and all aspects of the Project. Any proprietary extensions added to a distro will increasingly make that impossible over time. [Italics mine.]

I hope that dispels any confusion.

Now, lest this seem like a trivial matter, forking is one of those things to keep a close eye on as you investigate any of the several ODL-based controllers available today. If you go with an ODL-based controller, ideally it’s one that has not been forked; the key long-term issues are SDN application portability and product upkeep. Use a forked ODL-based controller, and you increase your risk of vendor lock-in. Use an un-forked ODL-based controller, and you minimize that risk.

Now stop giggling, and get on with your day. ;-)
