The hipster view of data loss seems to be a shrug of the shoulders these days. As a society, we give up financial data to people that steal it so often that we barely care. We shrug our shoulders, sigh at the inconvenience, set up a new account or two, and move on. Yes, sometimes the damage is more serious than unauthorized charges on our Visa cards, but that’s rare enough to avoid societal outrage. Rather, we’ve simply gotten used to the breaches.
A recurring trend in security briefings I’ve taken over the last year is that breaches are assumed on the part of enterprises as well. If you don’t assume your infrastructure has been breached, you’re ignorant, and probably willfully so. Ostrich, meet sand.
A weird response my brain had to this was to ponder: if we’ve lost the war, why are we still fighting? Any security practitioner is rightly smirking and firing up the truck they are about to drive through that logic. Yea and verily, there doth be a hole. And we’ll get there. But first, let’s consider points in favor of this argument.
Call the wah-mbulance! Security doesn’t help!!
Properly securing an IT infrastructure is challenging.
Securing access to an application’s data requires multiple systems, coordination across silos, and IT architects who understand the application delivery system end to end. A complete solution integrates all security systems together, and is well understood by one or more security practitioners who grasp the vulnerabilities and mitigation strategies.
This sort of holistic system understanding is hard to come by on a number of levels.
Security infrastructure is expensive in capex, opex, and human infrastructure.
Many security companies charge dearly for their products, which tend to be expensive to acquire and are consequently expensive to own. Some security vendors prey on the fears of their customers to extract extraordinary sums for the protection they purport to offer.
Once security gear is obtained, properly managing that security infrastructure requires well-trained and capable people. A capable professional with the time to properly configure the infrastructure, maintain that configuration as the business changes, and interpret the output of that infrastructure brings actual value to the security infrastructure purchase.
The company that expends capex and opex on security infrastructure but skimps on the people needed to design a security architecture misses the point entirely.
Security complicates application usage.
Let’s face it. Passwords are annoying. 2FA, more so. CAPTCHA is a nuisance. Hard tokens? Forget about it. Being denied access to a file you know you should have access to? Irritating. What do you mean I don’t have sufficient rights to perform this operation? What do you mean that my code can’t write to this directory or database? And on and on goes the list of nuisances we experience in the name of security.
Practically speaking, this means that some folks (often executives) make poor security choices in the name of convenience.
Security practitioners are the “no” people in the room.
Related to the issue of convenience outlined in the previous section, security practitioners are the folks in the design room no one wants to hear from. They present annoying facts that shine a harsh light on an otherwise rosy-colored view of an application delivery infrastructure.
No one wants to hear “no.” And so it is that security practitioners are shushed as paranoid, having ridiculous concerns, or making unreasonable demands.
There is no meaningful penalty for being hacked.
If it doesn’t hurt to get hacked, then why bother being responsible? Consider a security breach as an acceptable risk, and move ahead with such risk built into the budget.
But of course, I’m being ridiculous.
The presumption of breach does not mean that the practice of security is pointless. Levees still play their role even if they break from time to time. Rather, the presumption of breach is an evolution in the security thought process, acknowledging the security posture reality of many organizations. Detecting a presumed breach is another brick in the wall of defense-in-depth.
By the way, the real problem isn’t in detection technology. That we have in a myriad of products. The real problem is inspecting all traffic in our data centers. Security hardware devices tend to be positioned at network perimeters, leaving traffic contained within the data center perimeter uninspected. The data center core becomes a massive zone of trust. There’s a simple reason for this: inspecting traffic at the data center core has been too difficult. Too much data traveling too quickly. Or if the data center traffic was inspected, that capability came delivered in the form of big iron at a very high cost.
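To make the perimeter problem concrete, here’s a toy sketch of why an edge device never sees east-west traffic. The subnet and addresses are hypothetical, and this is an illustration of the concept, not any vendor’s implementation: a perimeter firewall only inspects flows where exactly one endpoint sits outside the data center.

```python
# Toy illustration: a perimeter device only sees flows that cross the
# data center boundary. Host-to-host traffic inside that boundary never
# reaches it, which is how the "massive zone of trust" forms.
# The 10.0.0.0/8 boundary and all addresses here are made up.

import ipaddress

DATA_CENTER = ipaddress.ip_network("10.0.0.0/8")

def crosses_perimeter(src: str, dst: str) -> bool:
    """True only if exactly one end of the flow is outside the data center."""
    inside_src = ipaddress.ip_address(src) in DATA_CENTER
    inside_dst = ipaddress.ip_address(dst) in DATA_CENTER
    return inside_src != inside_dst

# North-south flow to an external host: the perimeter firewall sees it.
print(crosses_perimeter("10.1.1.5", "203.0.113.10"))  # True

# East-west flow between two internal hosts: never hits the perimeter,
# so it goes uninspected.
print(crosses_perimeter("10.1.1.5", "10.2.2.9"))      # False
```

The fix, as the approaches below show, is to move inspection points inside the boundary rather than to build a bigger edge box.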
However, a number of changes in how security is being done are addressing this challenge.
1. Distributed firewalls. In virtual environments, small, virtual firewalls that protect individual business applications or units reduce the amount of traffic that must be inspected by a single device. VMware NSX offers a centrally managed distributed firewall solution that operates like this.
2. Service chaining. Several vendors offer traffic steering, allowing traffic to be redirected to a firewall or other device for inspection. Guardicore demonstrated this at a Tech Field Day event in an HP environment leveraging OpenFlow. A Cisco ACI network allows operators to construct a service graph between endpoint groups. Many other solutions in this space exist, as service chaining grows in importance.
3. Visibility fabric. Gigamon and others make visibility fabrics with powerful enough hardware to copy traffic from anywhere in the data center to one or more tools that can inspect that traffic. Gigamon discussed this at length at Networking Field Day 10, including their modular HC2 box that has non-blocking ports up to 40Gbps and inline pass-through capability. Big Switch Networks’ Big Monitoring Fabric also offers a scale-out visibility fabric architecture.
4. Deep packet inspection at scale. x86 horsepower is significant enough that centralized appliances can scale to a reasonable volume of traffic affordably. Light Cyber works in this way, although effective throughput of an individual DPI engine needs to be considered in the evaluation process. But the idea here is to buy a distributed array of DPI engines, peel off the interesting traffic from each inspection point, and send it into a central engine for final analysis and breach detection.
5. Distributed agents. Another way to inspect data throughout the data center is to distribute an agent to shim into a hypervisor switch or end system, shipping the interesting results to a central engine for analysis. I’ve seen at least one system that operates like this, but can’t talk about it yet. Pesky embargoes.
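Several of these approaches, particularly distributed DPI and distributed agents, share a common pattern: apply a cheap filter at each local inspection point, then ship only the interesting traffic to a central engine for expensive analysis. Here’s a minimal sketch of that pattern under stated assumptions. The port list, volume threshold, and all class and function names are hypothetical, chosen only to illustrate the local-filter/central-analysis split, and are not any vendor’s API.

```python
# Sketch of the distributed-inspection pattern: local sensors peel off
# "interesting" east-west flows and forward them to a central engine.
# Filters, thresholds, and names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Flow:
    src: str
    dst: str
    dst_port: int
    bytes_sent: int

# Cheap local filters: ports favored for lateral movement, plus a
# byte-count threshold that might indicate exfiltration staging.
INTERESTING_PORTS = {22, 445, 3389}
VOLUME_THRESHOLD = 50_000_000  # bytes

def is_interesting(flow: Flow) -> bool:
    return flow.dst_port in INTERESTING_PORTS or flow.bytes_sent > VOLUME_THRESHOLD

@dataclass
class CentralEngine:
    received: list = field(default_factory=list)

    def ingest(self, flow: Flow) -> None:
        # A real engine would correlate flows across sensors and over
        # time to flag a breach; here we simply collect them.
        self.received.append(flow)

def sensor(flows, engine: CentralEngine) -> None:
    """A local inspection point: forward only what passes the filter."""
    for flow in flows:
        if is_interesting(flow):
            engine.ingest(flow)

if __name__ == "__main__":
    engine = CentralEngine()
    observed = [
        Flow("10.1.1.5", "10.1.2.9", 443, 12_000),       # routine, dropped
        Flow("10.1.1.5", "10.1.2.9", 445, 80_000),       # SMB east-west, kept
        Flow("10.1.3.7", "10.1.4.2", 8080, 90_000_000),  # high volume, kept
    ]
    sensor(observed, engine)
    print(len(engine.received))  # 2
```

The design point is scale: each sensor discards the bulk of traffic locally, so the central engine only has to handle the small fraction worth deeper inspection.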
This list is to whet your appetite for companies that could help control, detect, remediate, or otherwise reduce the risk of internal security breaches. This is not meant to be a comprehensive list, but rather a representative list of companies that have briefed me recently. If you have a novel security product addressing the east-west traffic problem in the context of breach presumption and would like to brief me, please get on my calendar.
Note that VMware, Cisco, Big Switch Networks, and Light Cyber have sponsored the Packet Pushers podcast, of which I am a co-founder.