
Interview: Dr. Pat McCarthy Of The Giant Magellan Telescope

On the Citizens of Tech Podcast #43, we interviewed Dr. Patrick McCarthy of the Giant Magellan Telescope project, currently under construction in Chile.

The GMT is in a new class of “extremely large telescopes.” Featuring a custom glass formulation, seven asymmetric mirrors being polished in Arizona, and software that will correct in real-time for atmospheric distortion and physical alignment, the GMT will gather images too dim for us to have ever seen before.

Among the anticipated advances is the ability to see planets orbiting distant stars, allowing us to get that planet’s spectrographic signature. That data will help us find planets with the chemical signatures of life. We’ll also be able to look ever further back in time as we observe across light years, clarifying our understanding of the universe’s opening moments.

Pat was an outstanding spokesman for the GMT, clearly explaining the project’s worth to science, construction challenges, and relation to other extremely large telescope projects. He also helped us understand the pros and cons of terrestrial vs. space-based telescopes.

Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks

MacBook Battery Replacement Requires Admin Credentials?

Over the weekend, I investigated the possibility of Apple replacing the tired battery in my four-year-old rMBP13. Yes, they can do it. It’s $199 for that particular model. But they also require an admin-level username and password for the device. Here’s an excerpt from the chat session.

Apple support rep:

What is the Admin Name and password for your Mac?


Me:

Will not share. Definitely should not be required for a battery replacement.

Apple support rep:

It is required. When the Mac goes to the repair depot that is required. You can remove that information so there is just an automatic log in. And you can set it up again when you get it back. We do not ask for any information that is not required.


Me:

Okay, then we’re done here. Thanks very much for your help!

An automatic log in, while an improvement from a certain point of view, isn’t a fix. No, you don’t have to know the user/pass to access the system, but whoever logs in is still on the system with admin-level credentials. Anyone with admin-equivalent credentials can, with a minimum of effort, get into whatever part of the file system they might like, make changes to the system, etc.

No one should give this level of credentials to anyone, let alone to Apple over a chat session. Not even over a properly-encrypted-with-a-valid-cert chat session that makes me believe I was, in fact, speaking to an official Apple representative.

Battery replacement in a compact laptop chassis such as a modern MacBook is an arduous affair, which is why I’m happy to pay someone else to do it. But the price of admin equivalency, even temporarily, is a price too high. Whatever the technical reasons might be for this current requirement, Apple should do better. I suggest a service mode that could be used to verify that the replacement battery installation was successful. No doubt it’s not that simple. Nothing ever is.

I’ll try a meatspace Apple store and see if there’s a way I can get the replacement done without having to hand over the admin credentials.


Connecting Python To Slack For Testing And Development

The scripting language Python can retrieve information from or publish information into the messaging app Slack. This means you can write a program that puts info into Slack for you, or accepts your queries using Slack as the interface. This is useful if you spend a lot of time in Slack, as I do.

The hard work of integrating Slack and Python has been done already. Slack offers an API, and there are at least two open source Python libraries that make leveraging that API in your Python code a simple task. I chose slacker after a bit of googling, though it’s not a preference born of experience. The community seems to be behind slacker as opposed to Slack’s own python-slackclient, so I went that direction.


  1. I’ll assume you’ve got Python installed already. My environment is Ubuntu Server 16.04 with Python 2.7.12.
  2. Install the Python package manager pip, if you don’t already have it.
    sudo apt install python-pip
  3. Install the slacker python library.
    pip install slacker
  4. Generate a testing and dev token at the Slack API web site.
  5. The token will be everything required for authentication to your Slack group. Protect it like a password.

Armed with the token and slacker library, your Python installation is now Slack-capable.


I took this code right from the slacker github page to make sure things were working without having to read any documentation. I created a channel called #exp to run my test in.

from slacker import Slacker

# Replace abcd-etc. with your testing and dev token
slack = Slacker('abcd-*****-*****-*****-*****')

# Send a message to the #exp channel'#exp', 'Python was here.')

I ran the test script from the command line with the Python interpreter.

The result: the “Python was here.” message appeared in the #exp channel.
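For a sense of what slacker is doing under the hood: the library wraps Slack’s HTTPS Web API. The sketch below posts the same message using only the standard library (Python 3, unlike the Python 2.7 environment above). The helper names `build_post_request` and `post_message` are my own, and the Bearer-token header follows Slack’s current Web API documentation rather than anything slacker-specific; treat it as an illustration, not a drop-in replacement.

```python
import json
import os
import urllib.request

# Slack's Web API method for posting a message to a channel.
SLACK_API = "https://slack.com/api/chat.postMessage"

def build_post_request(token, channel, text):
    """Build the HTTPS request for a chat.postMessage call."""
    payload = json.dumps({"channel": channel, "text": text}).encode("utf-8")
    return urllib.request.Request(
        SLACK_API,
        data=payload,
        headers={
            "Content-Type": "application/json; charset=utf-8",
            "Authorization": "Bearer " + token,  # the token is the only credential
        },
    )

def post_message(token, channel, text):
    """Send the message and return Slack's decoded JSON reply."""
    with urllib.request.urlopen(build_post_request(token, channel, text)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Keep the token out of source code; it's password-equivalent.
    token = os.environ.get("SLACK_TOKEN")
    if token:
        print(post_message(token, "#exp", "Python was here."))
```

If the call succeeds, Slack replies with JSON containing `"ok": true`; a bad token comes back as `"ok": false` with an `error` field, which is a handy first thing to check when debugging.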



Chicagoans: TECHunplugged Is Coming October 27, 2016

TECHunplugged is a one-day event where end users, influencers and vendors come together to talk shop. At the Chicago event on October 27, 2016, I’ll be speaking on the following big idea.

How The Network Automation War Might Soon Be Won

Here’s the abstract I proposed to the TECHunplugged team.

Automation in the virtualization world is a long-established feature. A plethora of excellent tools exist to help stand up server infrastructure, operating systems, and applications. This has helped bring much of the IT stack together in a way that makes system deployment a repeatable, predictable task. By contrast, network automation is a struggling, emergent technology. Why is it that the automation of network provisioning has proven so challenging?

Ethan Banks, 20 year IT veteran and co-host of the Packet Pushers podcasts, will explain the network automation challenge from a practitioner’s point of view. He’ll also discuss recent advances in network automation tooling from both the open source and commercial software worlds. Network automation might feel rather behind other IT silos, but there’s significant progress that will change network operations sooner rather than later.

To set context, I’ll explain why automating the network is so hard.

  • No standard way to describe a desired outcome.
  • Proprietary interfaces.
  • Snowflake architectures.
  • Unpredictable ways of measuring results.
  • A surfeit of choice.

And then we’ll talk about what’s being done to enable network automation.

  • Intent.
  • Abstraction.
  • Telemetry.
  • OpenConfig.
  • The simplicity movement.
  • Vendors like Anuta, Apstra, and Glue.

If you’re in the Chicago area, register. You’ll hear me speak along with several other folks. I’ll also be at an “ask me anything” roundtable.


For Your Ears: Citizens of Tech Podcast 40

In this show, we get into what expiration dates on packaged food and drugs really mean. How should you react when the date expires? If you assume, “Throw it out to be safe,” you’d be wrong.

We also chat about dealing with password expiration policies. They must be super complex and changed frequently, right? Maybe not. Super complex and frequently changed means hard to remember, which studies show can lead to less security, not more.

IBM has manufactured an artificial neuron, which isn’t so interesting by itself. We’ve been here before. The interesting bit is the material used to behave like a neuronal membrane. A genuine advance.

Microsoft has announced a smaller Xbox One S, now with 4K capabilities. Just not gaming 4K capabilities.

Blackberry is on permanent deathwatch now, as they have begun the “all else has failed, so let’s litigate” phase of operations.

All that, plus our regular “Content I Like” and “Today I Learned” features.

Expiring Stochastic Passwords – Citizens of Tech 040


I’ll See You At Cisco Live 2016 Las Vegas

I will be at Cisco Live 2016 in Las Vegas. So far, my calendar has me scheduled to attend some Tech Field Day presentations, visit with vendors, hang out in the Social Media Hub, and host a CloudGenix SD-WAN mixer event (free food and drink for all, plus fellow nerds to network with, just register).

I’m just at CLUS on a social media pass, so I won’t be at all of the Cisco-specific events. I don’t think that pass gets me into everything, but at least I’ll be around.

If you’re a vendor who would like to brief me at CLUS, I’m happy to chat. Please schedule me. If you like the podcast, I’ll have Packet Pushers stickers for you to decorate your lanyard or laptop with.

At the end of the day, I just like hanging out with nerds, so I hope to see you there. Come up and say “hi.”


Complexity – My Friend, My Enemy

My Twitter blew up after I tweeted thusly.

Why did I tweet this?

Over my years of network engineering, I’ve learned that the fewer features you can implement while still achieving a business goal, the better. Why? Fewer features mean fewer things that can potentially go wrong. The less that goes wrong, the higher the network uptime. That’s a generalization we could pick at, but I believe it holds true overall.

I have an example of a lesson learned.

Going back almost 10 years, I was an architect at a site evaluating Cisco’s VSS. This was early days for VSS, and the code was not fully baked. Still, on a whiteboard it sounded like a good idea, and offered us a tidy way to uplink our dual-homed access switches to a VSS core without having to block uplinks. That was compelling, as we were still in the 1GbE uplink days, and link saturation was happening from time to time.

What is VSS, or any of the similar systems that create a virtual chassis or MLAG capability? It is a complex, distributed control plane, where two or more physically distinct systems maintain constant communication so that they can behave as a single physical system.

I remember reading a detailed Cisco technical whitepaper on VSS that explained just how the magic happened. It sounded…okay. There were dedicated links between the switches. There was some special tagging going on. There were processes to assign which switch would be the master. There were discussions about how to prevent and recover from split-brain. There were notes on what would happen during a single supervisor engine failure. And so on. It all seemed well-thought out and carefully considered. And no doubt it was.

Yet in our lab testing, VSS was a disaster. At that point (very early days, please remember), the code was so underdone as to hard-crash IOS and create a cascading failure between the two VSS members. One switch would crash hard and reboot. On reboot, it would cause the other switch to crash, and so on. I remember walking in to the lab racks after we’d stood up the initial VSS pair and looking at the console output. They had been cascade-crashing back and forth for hours, and none of us on the team had been doing any work. The VSS pair blew up all on its own, without even a test traffic load going through them.

This problem was so bad that we couldn’t get our testing done. We thanked Cisco kindly, and shipped the test sups back.

Complexity kills, but sometimes we need it.

Now, years have gone by since that VSS story. I know of many folks with successful VSS implementations. 1.0 code is always risky. I’m not suggesting that you shouldn’t use VSS. If it’s working for you, have at it.

However, I am suggesting that the complexity introduced by VSS created fragility in the switching system. From that, I extrapolate that any sort of complexity can introduce fragility. This is not at all a new idea. I’ve heard it discussed most eloquently by David Meyer, who’s studied the issue deeply.

For the network engineer, there is irony here. Networkers have hard problems to solve for businesses. Sometimes, those hard problems call for complex solutions. If we assume that a consistently performant, always available network tends to match well with business goals, then complexity is an enemy. Simplicity is better. Less to break. A smaller dependency tree. Easier troubleshooting when something does go wrong.

Yet often (and here’s the irony), complexity is a friend simply due to the nature of the problem we’re trying to solve. Thus, we find ourselves experimenting with immature VSS code as a potential solution to engineer away uplink bottlenecks.

Or adding VXLAN overlays with a centralized controller to meet a segmentation requirement.

Or distributing policy using BGP in the form of flowspec to improve DDoS mitigation capabilities.

Or stretching L2 between data centers, and then having to implement a DCI protocol plus maybe LISP to help with traffic trombones, so that we can put any IP anywhere without disturbing the application.

Or introducing complex device profiling and authentication techniques replete with guest networks, quarantine networks, and encapsulated traffic, because BYOD. And because security.

Or creating WAN edge configurations with DMVPN, PfR, and QoS, because we have to get the most from those slow, expensive private WAN links we can’t afford to upgrade.

Or…need I go on? You can tell your own story of the lurking network horror found in a carefully crafted configuration stanza created by your own Frankensteinian hand.

Simplicity must make a comeback.

As networkers, the complexity we’ve engineered into our networks is indeed partly due to business requirements. Complexity genuinely has a place in our world.

At the same time, we receive the word from on high that we must accomplish X. And within the familiar comfort of our silos, we think about just how to accomplish X. We read. We research. We talk to our vendors. We do some lab work. And finally, we recommend a solution to X that we think will work.

Also from our silos, we probably make that recommendation. We do it without talking to the app people, the dev team, the storage admins, or even our managers if we can avoid it. We take the problem as stated to be gospel. Using the hammers we are comfortable using, we solve problems.

And now we’re screwed. All of us.

We’ve built these complex monster networks with so many dependencies that change is hard. Cloud is coming, or maybe even here, and we don’t know how to map the network we’ve got to what we’ll need next. We’ve entrenched so many special features so deeply inside of our network infrastructures that we’ve locked ourselves in. Forget the vendors locking us in.

We did this to ourselves.

And we need to undo it. We need to go back to simple. The simpler, the better. We need to resist the temptation to hit problems with the hammers we’re used to. We need to leave our silos and start solving business problems as members of integrated IT teams. Our data center infrastructures work as cohesive, integrated application delivery systems. Why don’t we, as IT folks, follow that same integration model?

We need to be willing to professionally push back from the brink of complexity, and rally around the simpler solution. That might introduce conflict, yes. But done right, conflict will result in a better overall solution. Raising a cautionary note doesn’t make you a jerk, assuming you don’t act like one.

We need to get away from the network engineer performing complex voodoo to salvage a poorly conceived application architecture. We must be willing to demonstrate why the complex network solution is actually riskier than re-thinking the app. We won’t always prevail in those discussions, but we’ll never prevail if we don’t have the discussions at all.


As data center network designs have gone through the complexity roof, saner minds are pushing back, offering simpler solutions while still meeting business goals. Cumulus Networks’ recently announced “L3 to the host” design is, I feel, one example.

I believe this simplicity movement should continue. We all need to get to the point that we eschew nerd knobs in favor of the simplest possible solution every time. That doesn’t mean we don’t end up with a complex solution at times. Certainly, we will. But let’s do the hard work required up front to avoid complexity if we can.


Webinar – Challenges Delivering Apps The Modern Way

TL;DR: I’m hosting a webinar with Citrix about application deployment in the context of a modern data center — containers, NFV, etc. They are bringing nerds, and I am going to ask them questions. There’s a live demo at the end, or so they’ve promised me. You should register and attend. The event is soon: Wednesday, June 22, 2016.

For a while now, I’ve espoused the notion that the data center is an engine: a group of discrete, yet tightly integrated parts that serve a single purpose. That purpose is delivering applications.

In the past, delivering those applications has been via “cylinders of excellence.” In other words, technology silos. Each engineer with his or her specialty would grind away at their little bit of infrastructure provisioning, making it do what the application needed. Many weeks later, when everyone was done with their part and with a bit of luck, the application would work.

Hooray for us. We are infrastructure engineering. Only, that’s a terrible way to go about provisioning apps in an era where businesses want to stand up applications independent of an infrastructure engineer’s efforts.

The GIFEE movement (listen to Datanauts podcast #28 for an intro) is bringing automation, hybrid cloud, and — dare I include another buzzword — devops to bear on any environment willing to make the culture shift. But like any culture shift, we don’t know what we don’t know.

As GIFEE takes off, we’re finding challenges in the new approach to deploying applications. For example…

  • How, exactly, do you manage an explosion of containers?
  • Containers aren’t VMs, and you don’t access them via the networking services we’re used to. How do you handle that?
  • How do traditional load balancers fit into a GIFEE scheme? Or are they dead at this point?
  • Where does the human stop and automation start?
  • Why has modeling (think YAML) become so important in the modern data center?
  • How do you keep control over network performance, when services could be running in the public cloud across a high latency link?

I’m going to quiz Citrix engineers about all of this stuff. We’ve been working offline on the topics to cover. Even if you don’t give a hoot about Citrix, you should learn something, especially if you’re unfamiliar with containers and modern data center application delivery approaches. Register for this webinar here.


Should You Care About Cloud Native?

To hear vendors tell the story, every enterprise in the world will be running cloud native applications on hybrid cloud networks any second now. In fact, if you’re not already firing up those containers, your business is behind. I mean…gosh…you’re probably losing thousands of dollars each minute because you’re not agile enough. You’ll be doing massive layoffs before you’re done reading this article just to stay alive.


To understand the hype around cloud networks and cloud-native applications, you have to understand the point of them: deploy new code quickly by minimizing infrastructure engineering involvement in the deployment process. That’s a simplification, but I think it fairly characterizes the core value proposition of devops and cloud infrastructure.

If you’re a typical enterprise, note that deploying code in this manner does not mean upgrading your MS Windows Active Directory servers more quickly, adding new functionality to SharePoint in some cloudy way, patching Oracle servers, or any of those other things you do to maintain the legacy applications you bought from “Big Software” vendors.

In fact, if you’re not developing your own applications in-house, cloud’s primary use case for you is arguably SaaS.  Not IaaS. And you don’t need a cloud network to run SaaS. You just need a decent connection to your SaaS provider.

The SaaS experience is primarily about changing the application consumption model. With SaaS, you outsource an application and its infrastructure to a provider. Ordinarily, you’d have bought from Big Software and run the pre-packaged application on your own infrastructure. With SaaS, you no longer care about the application’s installation, maintenance, or infrastructure. You pay someone else a handsome monthly fee to care about that for you.

The IaaS experience is largely about changing the infrastructure consumption model. Obviously, IaaS is very much about infrastructure. Compute, storage, networking, security, and availability requirements are still considered. But all of those concepts have been abstracted from physical servers into resources that are consumed programmatically.

Clouds — IaaS — can be built privately, running on your own infrastructure. They can be consumed publicly using Azure, Google, AWS, and several others. Or they can be consumed as a hybrid of private and public clouds.

Here’s the thing. I do not believe that the SMB and mid-market making up the majority of IT shops in the world (probably you) are going to shift to cloud native applications soon. The way I see it, you don’t have the problems that the devops movement coupled with cloud is solving. In addition, you have neither the applications nor operations that map to this model especially well.

As a long-time infrastructure engineer (I was a server, storage, and backup engineer before I was a network engineer), I see a great deal of value in devops and the cloud model. It makes sense to me and pushes my nerd buttons. But I am not at all convinced that this does or even can make sense for traditional IT shops consuming Big Software applications. There is not yet a clear migration path for those who do not have their own development teams with applications to deploy.

For the average IT shop, the infrastructure play that makes more sense to me is hyperconvergence. Not cloud native.


Handling Criticism of Your Product

Over my years as a writer and podcaster, I’ve had a few vendors express their displeasure at something I said or did not say about their pet product. The fact is, sometimes I find babies ugly. That’s because sometimes…they are.

In fact, members of the IT community at large sometimes find babies ugly, and express those opinions in public. That’s how community works. We share knowledge, experience, and opinions. We agree. We disagree. We discuss. We speak through our microphones and keyboards, and it’s all intended to be for the greater good.

What To Do

Vendors, you are a part of that community. You’re not above it. You don’t control it. You’re simply a part of it, just like the rest of us. With that in mind, what do you do when someone in the IT community calls your baby ugly?

  1. Recognize that one member of the community doesn’t control the rest of the community. We have opinions. We share. We consider. And yes, we also influence each other as well as any audience we might have. But a single opinion shared doesn’t kill your product.
  2. A negative opinion shared publicly is a chance for you to take the information and react positively. Is the negative opinion valid? Is there an opportunity to improve the product? Is the critic willing to engage with you offline so that you can find out more details about their negative opinion? This is a chance for you to grow your product based on the input of someone who cared enough to talk about it in a public space! That’s gold you can’t mine any other way.
  3. Take the opportunity to patiently educate. Sometimes, media creators get it wrong. For example, I recently wrote a piece where I described a product, but made an irrelevant point along the way because of how the vendor was positioning their product. The vendor did not castigate me. Instead, they took the time to argue the relevance of my point, and convinced me that I’d gotten it wrong. Score a point for the vendor, as I’ll never look at that product in the same way and can now accurately describe it in future discussions with the community.

What Not To Do

Some vendor responses to public criticism are beyond what could be considered reasonable. I’ve heard several stories from Packet Pushers listeners and others in the IT community about how their public criticism, no matter how well-balanced or substantiated, ruined their relationship with a vendor. This should never be.

As a vendor, when you persecute a critic, it puts the rest of the community on notice that it’s safest to never talk about your products. Ever. And maybe to not use or recommend your products at all. Most humans will avoid confrontation. On the whole, aggressively going after a critic is a negative for your organization.

A defensive response is fine, and even expected. Rational, balanced dialogue and discussion are reasonable and even desirable. Going after someone’s reputation, threatening their livelihood, or making it impossible to do future business all because they made an observation about your ugly baby is unreasonable behavior.

Don’t be unreasonable.



Book Review: Deep Work by Cal Newport


Deep Work by Cal Newport is highly recommended if you are an information worker who is less productive than you wish you were. I recommend Deep Work even more highly if you feel you are productive, but are not producing the sort of work you desperately want to be.


More about Deep Work

I live in a state of distraction. Even working from home as a small business owner where I have no coworkers nearby, I am deluged with e-mail, Slack messages, iMessages, and other incoming data competing for my attention. This means that it’s easy to allow myself to flit from one thing to another. I’m constantly busy for many hours of the day, yet on some days struggle to accomplish anything I feel good about.

Deep Work uncovers how the information age impacts productivity. The book’s biggest idea is that truly deep work — the sort of work where you make mental breakthroughs and cognitive advances — only comes with prolonged periods of focus. Maintaining focus means getting rid of distractions.

Here’s the catch. Our smart phones, inbox notifiers, and related alarms have trained our brains to seek out the quick burst of chemical energy we receive when reading whatever the alert directed us to. Therefore, breaking away from a constantly distracted life is not as simple as deciding to resist temptation.


Deep Work takes on the unlearning process. The book cites many examples from research studies and notable individuals about becoming capable of extended focus, and consequently, deep work.

Deep Work’s Specific Impact on Me

Deep Work is a book that demands action of the reader, or else it’s a mere curiosity you’ll forget about as soon as you put it down. Here’s what I’ve done so far.

  1. Shut off almost all notifications of any kind. I’ve kept alerts on only for notifications from my family and for direct messages. My family is not chatty, so I’m not deluged. That said, Slack remains a disruption I don’t quite know how to handle (despite shutting down almost all of its notifications), as it is our company’s primary communications tool.
  2. Invested in anti-distraction software. I have found the urge to check Tweetdeck or LinkedIn almost overwhelming, particularly when I’m tired or finding the work I need to get done unappealing. To help break the addiction, I purchased Anti-Social. This allows me to build a list of sites I don’t want to be tempted by during times of focus. Anti-Social appears to have been re-branded since I purchased it, and is cross-platform.
  3. Invested even more energy in Wunderlist. I’ve been using Wunderlist for a while now, even before reading Deep Work. Now, I’m even more invested in it. Wunderlist defines what I’m supposed to be working on at any given time. When I feel uncertain about what to do next, Wunderlist brings me back into focus.
  4. Reading more and streaming less. I had slipped into a habit of watching 2+ hours of streaming on several nights of the week. I feel that habit was making it harder for me to concentrate during the day where I might want to read or research. I tend to watch nothing or perhaps an hour now, trading in streaming for reading. When I do stream, I mix in documentaries and other non-fiction with pure entertainment. My evidence is only anecdotal here, but I feel that my work day focus when I research or write is improving as a result.
  5. Keeping my phone in my pocket. When tempted to look at my phone, I don’t, at least not without a specific reason. I resist. This is part of the unlearning process mentioned in the book. Not picking up the phone means I’m not disrupting whatever it was that my brain was doing before it derped and said, “Hey! Check your phone for no good reason.”
  6. Deleted apps from my phone. For instance, Twitter is gone from my phone. These days, I only install it for conferences, and find it to be immediately addicting. Thus, when I’m in the airport and have said my last social media goodbyes, I delete Twitter again. And just generally, I try to keep distracting apps off my phone.

What now?

Going forward, I have more changes to make. To reinforce the big ideas, I would like to read Deep Work again, not a huge challenge as it’s an easy-to-read 263 pages. I want to read the book this second time with a more critical eye to how I’ve applied the principles thus far, and see what I can do better.

I have goals. I need to be incredibly productive for a concentrated number of hours per week. Most days, I can do my job effectively in an 8+ hour day, presuming I’m focused during that time. That allows me to write, respond to my business e-mails, take briefings from vendors, plan and record podcasts, organize conference and community events, do lab work and other research, and build presentations. Add to that whatever else I’m called upon to accomplish as a small business owner.

Allowing myself to be distracted means that I don’t accomplish as much as I need to in that 8+ hour day, and thus my day and week may drag on. As much as I enjoy technology, I’d rather be trail running or walking along a beautiful ridgeline on a long distance hike. Only if my work is done do I have that luxury of time in the great outdoors.

However, Deep Work is not merely a reboot of the classic Getting Things Done, which I also own. I want to think more deeply through the issues facing enterprise IT. Technology has no lack of complex implementation issues that present themselves once you get past a vendor’s Pinocchio architecture and turn it into a real boy. But to see the issues, deep thought is required, at least for me. I want to get that sort of thinking done as a regular habit. I want those breakthroughs. I want that insight. I want that depth.

I am reading The Phoenix Project right now, and the latest CCDE Study Guide book is up right after that. Then I’ll tackle Deep Work once again.

Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks

Brief Me At Interop?

I’ll be at Interop Las Vegas 2016 along with the rest of the Packet Pushers team. I’ll be busy with our Future of Networking Summit on Monday and Tuesday, but would like to schedule vendor briefings on Wednesday and Thursday. My dance card is not yet full.

Want to brief me about your product or otherwise have a chat? Send an e-mail to while there’s still room on my calendar. I look forward to seeing you at Interop!

Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks

Living With The iPhone 6S+ 128GB

The Apple iPhone 6S+ 128GB is Apple’s current huge flagship phone. If you want dimensions, weight, screen specs, etc., please google them up. I add no value by regurgitating them here. Rather, I want to focus on my user experience, now that I’ve had the thing for several months.

How’s that huge form factor working out?

Fine. Yeah, it’s big — about as big as a checkbook. But it still fits in my pants pocket. I don’t use a case unless I’m hiking in the woods; in my opinion, a case makes the phone too bulky for everyday carry.

I find it hard to operate one-handed. My thumb can’t reach across or up the screen to get at certain things, and if I try, I often hit something I didn’t mean to. Hilarity ensues, usually accompanied by me sighing heavily.

Update. A few people have mentioned the “reachability” feature to me, where you lightly double-tap (not press) the home key, and the screen slides down about halfway. Yes, I’m aware of it, but I left it out initially because I feel it has limited uses. Items at the top of the screen (north-south) do indeed become easier to reach, but reaching across the screen (east-west) is still hard. Plus, you lose the visibility of half of whatever was on the screen, as that content slides off the bottom. No big deal in some screens, but annoying in others. Reachability is a good idea, but it doesn’t change the fact that this is a big phone.

Don’t forget that to use this phone one-handed, you have to balance the monster in your hand while pressing things with your thumb. This is not an easy task, made all the more peril-fraught considering how much the stupid thing costs. You don’t want to drop it just because you’re stabbing at an icon in a hurry.

Battery life. Tell me the truth.

It’s fine. I get a solid day out of it under almost any circumstance, even hiking in the woods with the GPS running and cell signal marginal, causing the phone to poll often. Even flying cross-country and watching Amazon Prime videos offline doesn’t have me reaching for a power brick. Guesstimate? I get 12+ hours of heavy use out of the device after 5 months or so.

Radio reception report: LTE, WiFi, Bluetooth.

Cell reception and LTE are fine most of the time, but there are moments where my wife’s 5S has 3 bars of LTE, while my 6S+ is limping along with a bar or two of 3G. We’ve seen the opposite scenario, however, so I don’t have a strong opinion here.

Wifi is fine. The phone has an 802.11ac chipset, and connects to either of the two Apple AirPort Extreme APs in my home with no drama, dropped connections, throughput challenges, or other weirdness. In general, I find that the wifi performance on my phone matches the experience I’m having with my other devices, such as a MacBook Pro. Therefore, I consider it unremarkable. It just works.

I use Bluetooth on the phone most often to pair with audio headunits in my cars. Pairing always works. Signal is always strong. Music and podcasts are always listenable, even at volume. I also have an earpiece that I pair now and then. Again, it just works.

Is the software reliable?

I am running iOS 9.2.1 as of this writing, and have no complaints. Several earlier versions in the iOS 9 family upset me greatly. At the moment, Apple seems to have things sorted. I pray they don’t alter the deal further.

I don’t use many of Apple’s built-in apps. For instance, the Podcasts app is irredeemably broken. Many of the rest they force on me aren’t interesting.

What do I use it for?

Here’s how I use my phone.

  • As a phone, but only occasionally. The iPhone voice app interface is familiar and predictable on those occasions I need it.
  • As a chat client. I run the iMessages, Slack, and Google Hangouts apps.
  • To consume media. I stream video via Amazon Prime, or watch it offline. I stream music via Spotify, or listen to offline copies. I listen to podcasts with Overcast.
  • To read e-mail & compose short responses. I gave up on Apple Mail, and am currently favoring Microsoft Outlook. Outlook lets me program swipe functions, which means I can fly through my inbox as most of the e-mail I get is boring crap sent by people who, by inference, hate me.
  • To read newsfeeds. I use the Feedly app, and consume almost all of my feeds via the phone, most often before going to sleep or getting up in the morning.
  • To read books. I use the Kindle app to consume books. I don’t do much of this on the phone, preferring my Kindle. However, a cat chewed on my Kindle Voyage and cracked the touch screen, and thus I’ve been using the iPhone again. It’s…an okay experience, I guess. The 6S+ screen is big enough to work as a Kindle device without wishing I was dead five pages in.
  • For wilderness navigation. I use the MotionX-GPS app for my outdoors jaunts. With the 6S+ screen being as large as it is, MotionX is a great tool. I’m surprisingly happy with this aspect of the phone, as it’s worked out better than expected. At least until the phone gets too cold (around 20F or so), at which time it shuts down.
  • As a nightstand clock. I use the unlocked version of the Nite Time app for this, and rest the phone on a stand in landscape mode.
  • As external biomemory. I track most of my life in Wunderlist, frequently when I’m out and about with my phone.

Those are the big things. Yes, I have many other apps I use on my iPhone, but they aren’t nearly as key to my iPhone experience.

One comment about apps on the 6S+: some apps merely scale to fit the screen, which makes them look weird — fonts too big, mostly. Apps written for the 6S+ will scale properly, and might also support a landscape view. However, landscape view on the 6S+ is less useful than you’d think it would be for many applications. The screen is big, but not so big that a split pane view makes sense. At least, it doesn’t really work for me.

Do you miss your iPad Mini?

No. For me, the iPhone 6S+ is good enough to do the things I used the iPad Mini for previously. Not a perfect replacement, mind you. But good enough.

Can you type better on the bigger phone?

Not even slightly.

What accessories did you buy?

Highly recommended accessories I use every day.

Do you really use 128GB of storage?

Yes. I shoot video and take a lot of pictures. I also store a lot of video and audio content offline since I travel by jet often enough to care. While the bump from 16GB to 128GB was a ridiculous $200, I’m happy I have the capacity.


Will Public Cloud Make Us Prisoners Of Pricing?

A Packet Pushers listener wrote in with the following theory about public cloud eating all the private data centers of the world. Here’s an excerpt.

I think the more interesting division is the Morlock-Eloi division that is rising through cloud computing. Business people as Eloi can pick from a menu and decide what they need. The Morlocks in the cloud do all the technical work and make things happen. Once the Morlocks have the Eloi in complete cloud lock-in, they will raise prices. Those of us who are Morlocks will have to switch to providers to find work that is interesting. Only the companies that have the funds to afford their own Morlocks will avoid the price gouging that is to come…

What do you think?


I think the formula is a bit more complex. I believe market pressures exist in the unsettled cloud market that will keep prices under control for the foreseeable future.

Public Clouds Competing For Your Business

I think the Morlocks vs. Eloi comparison is apt. The more consumable we make IT, the easier it will be for the technically ignorant Eloi to use it successfully, but cluelessly. This will put the Morlocks in a position of power. And yes, that could mean that people working in enterprise IT today might find the most interesting job opportunities working for big cloud providers. 

However, I think there’s more in play here than the knowledge haves vs. have-nots. Public cloud is a competitive marketplace, and not a monopoly. Therefore, market forces are at work. As long as there are several public clouds to choose from, there will always be price competition.

That said, cloud is still young, and the competitive landscape isn’t in its final form yet. For example, there are the recent exits by Verizon and HP. And then there are reportedly immature players like SoftLayer. AWS is the strongest public cloud provider today, but the Google and Microsoft Azure offerings are catching up rapidly. Both of these have built-in markets to go after — lots of existing customer relationships they could turn into cloud consumers. Therefore, I believe pricing will remain part of the equation for the long-term.

Private Cloud Becoming Viable (?)

Another monkey wrench in the “public cloud is going to charge whatever they want” notion is OpenStack. OpenStack is hard to implement now, and that’s an adoption barrier for the mass market. But eventually, OpenStack will mature. When it does indeed grow up and become predictably easy to set up, reliable to operate, and stable in its codebase, I believe that will be a big factor in helping the Eloi decide to maybe keep their clouds private.

And don’t count out VMware. Sure, running a private cloud on VMware is hard on the budget. But VMware has valid reasons to keep their private cloud product set salable. A VMware private cloud keeps their enormous customer base in the fold, and generates revenue for its own sake. If VMware gets enough market pressure, I think they’ll figure out how to sell private cloud at less than extortionate prices.

The Talent Pool Will Grow

A big problem right now is that there is no one predictable way to build a private cloud — and built into that statement is an assumption that private cloud is desirable for all organizations. I’m not sure it is yet. But because there is no one standard way to go about the brave new IT, talent with the experience and wisdom to guide an organization is hard to come by — at least for right now.

For the sake of argument, let’s assume that private cloud does become a generally accepted approach to IT. Having a generally accepted approach will reduce the barrier to private cloud adoption. If we get to technology stability — i.e., OpenStack or VMware becomes a standardized way to build private cloud — then we end up with products to train the masses on. In that case, we’d have an accessible talent pool to build out private clouds. As a result, we would yet again have downward pressure on public cloud pricing.

Public Cloud Still Has Architectural Issues

And let’s not forget that public cloud still has both latency and security challenges with answers that cost money. Latency is addressable with enough money thrown at the problem (as well as carefully considered design), but that money adds to the cost of consuming public cloud. Security issues are similar — solvable with money perhaps, but that eats into the public cloud value proposition.

That means that the decision to move to public cloud is, ultimately, complicated. I don’t think the decision will ever be so easy or obvious that everyone will just do it. Thus, public cloud pricing will always be under pressure. And even for those organizations fully committed to public cloud, there will be other cloud alternatives.

I suppose a follow-up post would be a consideration of cloud lock-in. Would it be simply too hard to move to another cloud, and might that cause some price gouging? Could be. Of course, you could always build greenfield and migrate. Perhaps migrate is the new upgrade.


So, You Want To Be A Manager

As a younger engineer, I was frequently frustrated by co-workers or managers who had control, but lacked my technical ability. At that time, I equated knowledge with responsibility. I felt that if I knew intimately how something worked, I should be the one in charge. I should have control. From some of the frustrated e-mails we get at Packet Pushers, I know that others of you identify with this.

And so it was as a young man that I aspired to be a manager. Management looked like control to me. After all, I worked for a manager. That manager told me what to do, and I did it. That’s a simplification of the relationship, but at the root of it, there it was.

I thought that as I acquired technical expertise in operating systems, security, and networking, I should be the one holding the reins. Since I knew how the systems actually worked, and even understood how they worked together, I should be the one telling everyone else what to be doing.

That’s logical, perhaps. But it’s naive. Management is not engineering. Management is not technical leadership, at least not by default. Management is a skill all its own that, like anything else, must be learned. A good IT manager…

  • is experienced with people.
  • understands how businesses operate.
  • can translate business needs to technical requirements.
  • communicates those technical requirements to engineering.

That’s really what you’re signing up for when you want to be a manager. Managing people, especially hard-headed technical people, is an extraordinarily challenging job. Being a rock star in the data center doesn’t make you a rock star in the office.

Let’s say you choose not to salute my cautionary flag. You believe in your heart of hearts that if you were in charge, things would be better. Maybe you’re right, but consider this. If you are granted managerial responsibility, you are going to have to keep doing the engineering job you’ve always done.

Each time I’ve been a manager with direct reports, I’ve still had to perform engineering duties. And that’s true whether I, in my ignorance, pushed to be a manager, or whether the manager title was hung on me against my will. Doing both is no fun. Engineers think management is no big deal, and trivialize the workload. Don’t make this mistake.

If influence is really what you’re after, you don’t want the manager role. You want a technical leadership role. A technical lead with no direct reports allows you to be the excellent engineer you’ve trained so hard to be, while avoiding the burden of business meetings, budgets, reviews, executive interaction, and (to some degree) project management.

A technical lead role means that you can focus on design, engineering collaboration, and research. You can recommend for and against certain strategies to your manager, who then deals with the business end of things…like getting the solution paid for.

At this point, I feel a disturbance in the Force, as if a thousand readers are all saying, “But if I don’t become a manager, I’ll never get paid more money!” Depending on your employer, that might be true. But folks, it’s a trap. If money is your only reason for accepting a management proposition, it’s the wrong reason. You’ll be unhappy, and the extra money won’t make up the difference.

If you take on a manager role successfully, you’ll focus on management. Again, don’t confuse IT management with engineering. Yes, if you were previously an engineer, that knowledge and experience will come through as a manager. Doing both at the same time is, at best, difficult. Be careful what you wish for.

This piece was originally written for Human Infrastructure Magazine, a Packet Pushers publication. Subscribe to HIM here, and receive it in your inbox every couple of weeks or so.


Should Monitoring Systems Also Perform Mitigation?

In a recent presentation, I was introduced to Netscout’s TruView monitoring system that includes the Pulse hardware appliance. The Pulse is a little guy that you plug into your network, where it phones home to the cloud. When it comes online, you’ll see it in your TruView console, and can configure it to do what you like. The purpose of a Pulse is to run network monitoring tests, such as transactional HTTP or VoIP, from specific remote locations, and report the test results centrally. In this way, you can tell when certain outlying sites under your care and feeding are underperforming.

As far as remote network performance monitoring systems go, TruView is similar to NetBeez, ThousandEyes, and doubtless some others. Each of these solutions has its pros and cons. They are useful. They are necessary. They do their jobs well, not unexpectedly for a market that’s got a lot of years behind it. We need monitoring, yes — even in the age of SDN. But I believe monitoring could eventually evolve and couple itself with SDN to become something more powerful.

Historically, monitoring solutions have been very good at alerting you when something has gone awry. Shiny red lights and sundry messages can tell us when a transaction time is too high, an interface is dropping too many packets, database commits are taking too long, or a WAN link’s jitter just went south. That information is wonderful, but doesn’t resolve the issue. A course of action is required.

Perhaps the future of monitoring is not in the gathering of information, but in the actions taken on the information. This is where the more interesting bits of software defined infrastructure come into play. Software defined infrastructure is admittedly immature, lacking in standards, and fraught with vendor contention. But I believe that for monitoring solutions to have long-term viability, they will need to have mitigation engines that can react to certain infrastructure problems. I’m presuming (perhaps laughably so) that we’ll have some modicum of software-defined interfaces eventually. But let’s say that happens, such that it becomes possible for developers to write monitoring solutions with mitigation engines for software defined infrastructure. Isn’t that a logical progression?

To make my point here, some solutions already do this sort of thing. Consider SD-WAN. If WAN links were my only consideration, would I need a separate transactional monitoring system to tell me that a given link fell far below the quality required for a voice call? Not in an SD-WAN world. I would have configured a policy such that my voice call would have been routed across a link capable of meeting a voice SLA. SD-WAN does both monitoring (maybe not transactional, but monitoring all the same) and takes action if required. The transactional monitoring supplied by a standalone system is — well, not uninteresting — but less interesting at that point.

Admittedly, this turns monitoring tools into something else entirely. They go from being polling engines and stats collectors into policy and configuration engines with a great deal of logic and complexity required, especially if they are to work on disparate network topologies. But as the network becomes more programmatically accessible, monitoring seems like mere table stakes – the bare minimum required to have a viable tool. Reconfiguring the network to maintain a predefined SLA seems like a logical long-term goal. Don’t just tell me about the problem. Fix it.
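To make the idea concrete, here is a minimal sketch of a monitor-and-mitigate loop. Everything in it is hypothetical — the metric names, thresholds, and mitigation actions are illustrative stand-ins, not any real product’s API — but it shows the shape of a tool that fixes a breach instead of merely alerting on it.

```python
# Hypothetical sketch: a monitoring loop that invokes a mitigation
# action when a metric breaches its SLA, rather than just alerting.
# All names (metrics, probes, mitigations) are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    metric: str                   # what we measure, e.g. "wan1_jitter_ms"
    threshold: float              # SLA limit for the metric
    probe: Callable[[], float]    # returns the current measurement
    mitigate: Callable[[], None]  # action taken on an SLA breach

def run_checks(policies: list[Policy]) -> list[str]:
    """Poll each metric; on an SLA breach, act on it, then report."""
    actions = []
    for p in policies:
        value = p.probe()
        if value > p.threshold:
            p.mitigate()          # e.g. shift voice calls to a healthier link
            actions.append(f"{p.metric}: {value} > {p.threshold}, mitigated")
    return actions

# Toy usage: jitter on wan1 breaches a 30 ms voice SLA, so the
# mitigation moves voice traffic to wan2 — no human in the loop.
events = []
policies = [
    Policy("wan1_jitter_ms", 30.0,
           probe=lambda: 45.0,    # pretend measurement
           mitigate=lambda: events.append("voice -> wan2")),
]
print(run_checks(policies))
print(events)
```

The interesting (and hard) part in real life is, of course, the mitigation callable: on a multi-vendor network it has to become a full policy and configuration engine, which is exactly the complexity described above.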

As an aside, SD-WAN is perhaps an unfair role model to set out here. SD-WAN has a limited problem scope. An SD-WAN forwarder only has to monitor the virtual links between it and other forwarders in the network, and make a forwarding decision across those links. In addition, the action set is limited. Forwarding across a large network with complex attributes made up of any number of vendors’ gear is a somewhat different scope than the one the SD-WAN vendors are solving. Still. There’s a startup idea here for someone.
