ITWorld has an article: “Apps to stop data breaches are too complicated to use”, which itself is a rehash of this article on The Reg. The article makes 2 (obvious to us anyway) points:
1. Security software is too damned complicated to use. No shit.
2. “…the tendency of customers to not use even the security products they’ve already bought.” I think many of these tools don’t get used because they’re complicated, require experts to feed and love them and review logs constantly, and when they get in the way the business gets pissed. They cost money directly, they cost operational money, they cost CPU cycles, they cost frustration from users…
(I’m trying desperately, futilely to avoid the assertion in the second article: “…needs to change so that the technology can be up and running in hours rather than months…” Trying to meet that sort of goal is ludicrous…)
Strangely, the article finishes with this odd moment:
Security systems, intrusion protection, for example, are often left in passive mode, which logs unauthorized attempts at penetration, but doesn’t identify or actively block attackers from making another try.
“It’s a mature market – please turn it on,” Vecchi told TheReg.
I’m not going to deny or accept that these are mature markets, but I will say most *businesses* aren’t mature enough to just turn security shit on. There are 2 very common results when you “turn on” technologies to do active blocking or whatever you have in mind.
a. It blocks shit you wanted to allow. This pisses off users, gets your manager in trouble, and requires experts to configure the tools and anticipate problem points, or extra time to figure it out (with the danger of some nitwit essentially doing an “allow all” setting).
b. It doesn’t get in the way, but doesn’t block much of anything by default. I imagine far too many orgs leave it this way thinking they’re safe, when in fact it’s only blocking the absolute most obvious worm traffic and port probes (31337). In order to get it better tuned, you need experts who know what to look for and block.
The ideal is a state right between those two outcomes: you tune security to butt right up against the point where you’re negatively impacting people, while still providing protection. Unless you’re a perfect security god, you will bounce back and forth between those two states.
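To make the two failure modes concrete, here’s a minimal sketch of the knob you’re actually turning when you flip an IPS from passive (alert-only) to active (blocking) mode. It’s purely hypothetical and not modeled on any particular product; the rules, ports, and allowlist are invented for illustration. The point is that the only thing separating outcome (a) from outcome (b) is how well that rule set and its exceptions are tuned.

```python
# Hypothetical illustration of passive (alert-only) vs. active (blocking) IPS
# behavior. Rules, ports, and the allowlist are invented for this example.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("ips-sketch")

# An untuned rule set either blocks too much (outcome a) or only catches the
# absolute most obvious stuff, like 31337 probes (outcome b).
SUSPICIOUS_PORTS = {31337, 12345}   # the "obvious worm traffic" defaults
ALLOWLIST = {("10.0.0.5", 31337)}   # an exception an expert had to add by hand


def handle_packet(src_ip: str, dst_port: int, mode: str = "alert") -> bool:
    """Return True if the packet is allowed through."""
    suspicious = (
        dst_port in SUSPICIOUS_PORTS and (src_ip, dst_port) not in ALLOWLIST
    )

    if not suspicious:
        return True

    if mode == "alert":
        # Passive mode: log it, let it through, hope someone reads the log.
        log.warning("suspicious %s -> port %d (logged only)", src_ip, dst_port)
        return True

    # Blocking mode: actually drop it -- and piss off a user if the rule is wrong.
    log.error("blocked %s -> port %d", src_ip, dst_port)
    return False


if __name__ == "__main__":
    handle_packet("192.0.2.10", 31337, mode="alert")  # logged, still allowed
    handle_packet("192.0.2.10", 31337, mode="block")  # dropped
    handle_packet("10.0.0.5", 31337, mode="block")    # allowlisted exception
```

All the tuning work lives in those two sets, and that’s exactly where the experts and the bouncing come in: too few exceptions and you’re in outcome (a), too few rules and you’re in outcome (b).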
Business doesn’t like that. They want to create a project with a definite start and finish, implement a product, leave it alone, and have it never get in the way of legitimate business.
This is bound to fail. It’s the same concept as a security checkpoint or guard at a door: it’s intended to *slightly* get in the way when something or someone is suspicious, and it does so forever. This is why I have yet to buy into “security as enabler.” Security is designed to get in the way, even security implemented to meet a requirement so you can continue doing business: the requirement is the part that delivers the security, and it gets in the way.
There are companies that “get” security, but I guarantee they are also companies filled with employees who can tell plenty of stories about how security gets in their way on a daily basis, whether justified or not. That’s how it is, and business hates that. Even something “simple” like encryption on all laptops is a pain in the ass to support.
To dive into a tangent at the end of this post, let me posit that security tool-makers are just plain doing it wrong. They too often want to make monolithic suites of tools that cover every base and every customer and every use case and every sort of organization. This creates tools that have tons of features that any single org will never ever have a chance in hell of using. This creates bloat, performance issues, overwhelmed staff, and mistakes. It leaves open lots of little holes; chinks in the armor. I’d liken it to expecting a baseball player to perform exceptionally at every position and in every situation. It’s not going to happen. Vendors need to offer solid training as part of their standard sale (not an extra tacked on that always gets declined by the buyer).
It starts with staff, and staff start with smaller, scalpel-like tools. Only when staff and companies are “security-mature” will they get any below-the-surface value out of larger security tools.
Maybe over the long haul we’ll all (security and ops) get used to these huge tools in a way that we can start to really use them properly. Oh, wait, but these vendors keep buying shit and shoving it in there and releasing new versions that change crap that didn’t need changing. And IT in general is changing so fast (new OSes, new tech, new languages, new solutions) that these tools can’t keep up while also remaining useful. So…in my opinion, still doing it wrong. The difference between real useful security tools and crappy monolithic security tools, as a kneejerk thought: good tools don’t change their core or even their interface much at all; they just continue to tack on new stuff inside (snort?). Bad tools keep changing the interface and expanding core uses, essentially resetting analyst knowledge on every yearly release.
Picked this article up via Securosis.