why security pros fail – seven problems

Another old CSOOnline article link I’ve had sitting around is, “Why security pros fail (and what to do about it).” Per usual, here are bullet points and my reactions. Yes, this starts out juicy and hot.

Problem #1: Security Is Thought of as a Disabler – Yes, a touchy subject. When you talk to your local law enforcement, do you think they give a shit whether they’re an enabler or getting in the way of criminals? I’ll give a hint: they don’t get evaluated on their customer service report cards. Basically, I hate the lie we tell ourselves about being enablers. We *do* get in the way. Deal with it.

That’s not to say we should just say no, proudly and fiercely, and I think the author would ultimately agree with me. We should be involved in business decisions and give guidance as necessary. This is as much an operations or leadership issue as a security one, though.

This is one place compliance is a good thing: We can point to requirements and use them to say no to things. You want to go to the cloud and that provider doesn’t use SSL or other controls to protect data-in-motion? Our requirements say no.

Yes, talking about enabler vs getting in the way is a touchy subject with me. We ultimately need to deal with the fact that security gets in the way by definition. And move on.

Problem #2: Security Offers Only One Solution – I like this bullet point, and it’s a great approach. As security people, we need to give the low-down on what a perfect situation may require, including the risks. But we should also add a dose of practicality and realism to our discussion. Yes, we could segment the shit out of the network, but we know that’s costly in many ways, so here’s what we’d realistically like to see…

Problem #3: Not Enough Humble Pie – Ok, another touchy subject is that of railing against FUD. In a way, railing against FUD *is* FUD, when you really sit down and get philosophical about it. This is another topic we have to accept and move the fuck on about. Yes, some people/vendors do take this to extremes, but feel free to let them; sometimes we’re expecting them to, since maybe we didn’t know about a particular threat until now. This does underline the need to inject practicality into discussions, though. Sadly, this good bullet point forgot its place and shouldn’t have injected the FUD distraction.

Problem #4: Believing the Customer Is Clueless – I don’t actually get this bullet point at all and it probably requires context on his sources and their experiences and what they’re specifically talking about. There are many times where a customer *is* clueless; why else would they bring in outside help? And just because they opt to not listen to certain suggestions, doesn’t mean everyone is failing and dumb; just because you told me not to bet on RED for this spin, doesn’t mean I am stupid if I do anyway. That’s part of the Big Gamble in security.

Problem #5: Personal Cyber Ethics: Are You An Insider Threat? – Not sure I get this bullet point either; it sounds like a source had a personal situation with it. Every insider faces ethics temptations. We also should define what a security pro is before getting too far into this discussion. Does this include professional consultants or full-disclosure anonymous security “researchers?” I do believe there is a certain level of being above certain restrictions at work that “normal” users are subjected to. But that is true of any technical or administrative or leadership position. I’m not saying they should be exempt from everything, but this bullet point discussion itself is a bad slippery slope. (A CSO shouldn’t have much more access than any other C-level anyway…)

Problem #6: Career Burnout – This is a great problem to bring up, but the handling of it in this bullet is trash. Security is, to be honest, a high-stress IT job for a variety of reasons (you’ll never win, you always have to educate, you’ll never get exactly what you want, you need to be an expert in many things…). No discussion about this should exclude the idea that maybe the career is not for you, if you’re feeling excessively burnt out. And figure out and pursue what makes you happy.

Problem #7: Career Perspective Stuck in a Box – I like this item, but part of it doesn’t sit well with me. I think we again have to define security pro: are we talking a middle-manager-like policymaker or someone in the trenches? That will dictate a huge difference in making efforts in the 5 preferred skill areas (attitude, relationship, equipping, leadership, technical). I suppose this is in CSOOnline and thus more about CSOs…in which case, I agree.

It might sound like I have an issue with this article, but I really don’t. I like the discussion and bullets, and am just being extra contrarian today.

10 tips for successful pen testing programs

A 2010 article on CSOOnline goes over, “Penetration tests: 10 tips for a successful program.” I’ve had this in my “to-read” hopper for way too long. The author goes over 10 tips on getting started with penetration testing in your organization.

Penetration Test Tip 1: Define Your Goals – Unlike the author, I think the reality *is* that some goals are just to tick a compliance check box. Nonetheless, this bullet point should also include discussion on managing the expectations of a pentest. Are you looking for a 2-day blitz, a vuln scan, or a deep dive into custom application/software testing?

Penetration Test Tip 2: Follow the data – I do agree with this, but sometimes a pentest is more than just focusing on the data, and rather focusing on access. For instance, if I can attack a system and get admin rights, and then domain admin rights, it really doesn’t matter where your secret data is. I have access to it. But otherwise, yes, this bullet is valid.

Penetration Test Tip 3: Talk to the Business Owners – Can’t really argue with this. Take inventory, get an understanding of software, and align with business, pretty much sums up this bullet with popular buzzphrases. Ok, 2 best practices and a buzzphrase.

Penetration Test Tip 4: Test Against the Risk – When it comes to pentesting, I’m a bit more annoyed when people limit scope based on various factors; in this case, data/application value. A development server with no real data is still a risk if I can get into it, drop a keylogger/priv escalation on it, fuck it up enough that an admin logs in to take a look, and then scrape their creds/hash. In this bullet, I like how the author basically illustrates my point from Tip 1: you *can* start out with compliance checklist matching, and then expand from there as you truly value the security.

The rest of the bullet points pretty much stand on their own, and are good. I would add somewhere that pen-testing is an iterative process where you go through rounds of testing, adjust as needed, expand scopes, and dig further. Basically your normal OODA loop.

recent discussion points about security aligning to business

Someone knew what they were doing when they put Nickerson, Ranum, and Hutton on a panel together; strong statements without punches pulled are common from those three (things that need to be said). And I’m not surprised subsequent coverage is getting some mileage.

Rafal Los posted about the talk, and pretty much makes statements one can’t easily argue with (at least *I* can’t since I agree). Also, Alan Shimmel jumped in with, Until You Walk A Mile In Those Shoes. (Note, all of the above links are excellent reads, and I can’t wait to see some video from that panel as I’m sure it’s chock full of points I can agree with.)

First, a bit of care should be taken before giving blanket statements that person X sucks and should be fired, based on one brief interaction or single example. You can even take a person who is doing well but has some self-image issues, and suddenly he’ll give up because he was just told he sucks, when in fact he doesn’t. I agree with this sentiment if one takes at *least* a cursory look at their body of work, their business security posture, and their value to the business. But then we also need to look and see if maybe the *business* just wants a puppet security guy/team/initiative in the first place… I think Phil makes that point in the comments to Rafal’s post. For all I know, we might collectively be doing a bang-up job of security, all things considered. Sure, that 1 server didn’t get patched, but maybe 6 months ago none of them were being patched…

The hot example of Nickerson asking an attendee for their company’s mission statement is a pretty slick bit of trickery. How many people in any given audience will be able to, out of the blue, recite or otherwise explain their company mission statement? Not many. And how many of those mission statements are going to be shit? Enough of them. My company’s mission statement is posted next to me. I could probably fit a daily hour of Doom/Quake-playing into the goals of the mission statement. Anyway, as Phil somewhat mentioned in response to Rafal, I think a CxO should keep mission statements in mind, but other lower peons probably don’t need them quite so close at hand; they should trust that their management is giving good enough direction in their own requests and projects handed down. But if that was a CSO who is normally unperturbed about being put on the spot, one could certainly slap his hand for not knowing the mission statement.

The question came out of railing against security by compliance or security by securing “everything” and such topics. The problem there is probably twofold:

1) Corporate networks are slowly built, and only “recently” has compliance been a driver. Sadly, most networks are probably way too flat. This means if compliance mandate A wants server 23 and all its peers secured to a certain level, then everything in that network has to be as well. Segregation for security purposes is well behind the curve, which compliance exposes by way of the economics of satisfying its needs. Scope, scope, scope…and big scope costs big money by way of resources and time.

2) Business-to-business (B2B) relationships are most easily answered by simple questions such as, “Are you in compliance with A?” That’s far better than a vague and silly 23-page security questionnaire filled in by someone the security and IT teams don’t even know. Businesses are pressured for these easy answers from the top down and sideways, rather than each organization trying to explain its own security approach. Think of it this way. If someone asks you if you use perimeter firewalls, and you don’t, you’re going to have to spend a good amount of time talking about border router ACLs and various other technologies (and maybe even defend their value over and over) rather than spending 2 seconds to answer, “Yes.”

I like what Alan Shimmel had to say in, Until You Walk A Mile In Those Shoes. We have a lot of security breakers who rail against security defenders, but don’t really *get* the experience of being a long-term defender. (disclaimer: I’m not directly referring to anyone mentioned above, at all.) Security is already a losing proposition where it *will* fail someday and it *will* fail to be comprehensive. That’s just how it is. And that’s even before economics and business questions and human questions are brought up which prevent certain things from being accomplished or introduce new issues. At least I am enthused that many breakers these days are admitting their job is easy compared to the defenders.

stand in the gap

Gunnar Peterson has a great post: “Who Manages App Gateways? Who Indeed? Yo La Tengo – Call in Security DevOps”. I’m going to dive into the 2 basic problems Gunnar has touched on, and also move a bit further overall. (Warning: I use slightly different terms than Gunnar, so App Gateway is analogous to WAF to me.)

1. The basic question is: Who manages the app gateways? Or, who manages the Web-App Firewall? Netops plugs the hardware in and makes sure it talks on the right networks in the right directions. Sysops makes sure it talks in a way that it can interoperate with the systems it needs to and gets monitored for health. Then what? In my opinion, this is as far as most organizations that check the box for, “We have a WAF!” go. They set it up, get it successfully in the middle, and then no one has the chops to actually tune and configure the thing beyond crossing their fingers, turning on the defaults, and breathing easy when it doesn’t break anything (all the while, it’s not really doing anything except catching the barest, most ridiculous attacks, like a 1000-character URL request).

I do think Gunnar trivializes this just a bit by comparing these middle-of-the-ground tasks to Texas leaguers; this assumes that either side alone could solve the easy problem/catch the ball. I’d suggest that neither side *can* usually easily solve the problem, and it actually requires someone in the middle, or someone on one side with many skills. It might be more like two outfielders playing so far away from each other that a ball hit fast right between them requires a huge burst of inhuman speed for either of them to actually get under it.

The point is, there are these gaps that security cares about, but traditional IT in anything but the smallest (and largest?) shops is not staffed to fill.

2. Security pros need to be cross-functional experts in a lot of shit. Gunnar totally covered this, so I really don’t need to, but it’s pretty much truth, despite my bias. I tend to illustrate this with the idea that coders are first taught how to copy one string to another. Then they’re taught more modern tricks on how to copy one string to another; the same simple concept, but with a few more tricks added on. Later on, they add the next layer of how to *securely* copy one string to another (if they’re lucky). Getting that far requires extensive knowledge and even experience. And that’s not even getting into being able to understand technologies outside the main one, such as DNS, server configurations, in-transit encryption, firewalls+in-transit communication needs.

Security is expected to understand code and the network and the server in such a way that they can configure very advanced technologies such as a WAF. And it’s not just understanding general code, but understanding the *specific apps* being protected.
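To make that gap concrete, here’s a minimal sketch in Python (not any particular WAF product; the parameter names and thresholds are made up) of the difference between an untuned, default-style rule and an app-aware one. The length check is the kind of “barest, most ridiculous” catch a default config gives you; the second check only works if someone actually knows the specific app well enough to whitelist what its parameters should look like.

```python
import re
from urllib.parse import urlparse, parse_qs

MAX_URL_LEN = 1000  # the kind of blunt default an untuned WAF ships with

# App-aware rules require knowing the *specific* application: which parameters
# exist and what legitimate values look like. These names/patterns are hypothetical.
PARAM_WHITELIST = {
    "account_id": re.compile(r"^\d{1,10}$"),
    "report":     re.compile(r"^[a-z_]{1,32}$"),
}

def default_rule(url: str) -> bool:
    """The untuned check: flags only absurdly long requests."""
    return len(url) > MAX_URL_LEN

def app_aware_rule(url: str) -> bool:
    """Flags any query parameter that doesn't match what this app expects."""
    params = parse_qs(urlparse(url).query)
    for name, values in params.items():
        pattern = PARAM_WHITELIST.get(name)
        if pattern is None:
            return True  # unknown parameter for this app
        if any(not pattern.match(v) for v in values):
            return True  # value outside the expected shape
    return False

if __name__ == "__main__":
    attack = "/report?account_id=1 OR 1=1--"
    print(default_rule(attack))    # False -- sails right past the default
    print(app_aware_rule(attack))  # True  -- caught only because we know the app
```

That second function is exactly the part no one in netops or sysops alone is positioned to write, which is the whole point.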

I personally measure my opinion of developers based on how well they understand those last few things. If they repeatedly just don’t get it, they’re just coders to me. (They might be really good, but they’re one-trick ponies.) But if they understand DNS, server configuration, and firewalls, at least to the general extent of knowing what matters to them, I deeply appreciate it. Those are your potential rock stars. Likewise, I adore sysadmins who understand code and can do cross-functional things like deep-dive into code traces on the servers and tackle memory leaks.

Or either of the above who have security skills and knowledge!

3. I think this lack of cross-functional understanding is another pressure on why the “cloud” (when it is just external hosting rebranded) has gained momentum. Developers have long made overtures to get administrative access to the servers that run their code, and hopefully sysadmins have rebuffed those overtures and done the server work on behalf of the developers. Have you ever walked up to a server that for the last 2 years has been adminned by a developer? Chances are pretty damn good that it looks about as good as a server adminned by a 14-year-old. That’s not a dig, really. If I tried to add code to your app, I’d fuck shit up pretty awful as well, and my code would be heinous. (This could be twisted around to say neither side will get better without being allowed to get experience with the tasks, but that sort of cross-functional short-term inefficiency does not fit anything beyond small business.)

In steps hosting and the cloud, where developers can drive those projects because they involve code, which netadmins/sysadmins don’t quite understand, and that might result in devs having administrative duties on these new servers. Which sounds great short term…

mubix reveals secrets on beating the red team

Mubix has posted up a great set of slides on how to win the CCDC, from the perspective of a red teamer. All of it makes sense, and it’s great to get it down in a nice, concise preso, especially the tips on team composition and duties. From a non-competition side, this makes a great exercise in quick response, or a security strategy-on-steroids for someone heading into a new org or responding to attacks-in-progress.

As usual, a few items to highlight:

1. You’re not important enough to drop an 0-day on. Most likely your company isn’t either, but at least at a competition like this, there’s no upside to revealing otherwise unknown exploits. Sure, red teamers are going to be ninjas in some tools and definitely have some of their own scripts and pre-packaged payloads, backdoors, and obfuscation tools, but none of that is so super secret that you don’t have a chance to anticipate or prepare for it. As Mubix says later on, know the red team tools.

2. Monitor firewall outbound connections. It’s one (wasteful) thing to look at the noise being bounced off the outside interface on your firewalls, but seeing bad things outbound is a huge giveaway. You’re not doing detection if you’re not somehow monitoring outbound (allows and denies). (There’s a rough sketch of this after tip 3 below.)

3. You still gotta know your own tools and systems. As Mubix repeatedly mentions, you need to know your baselines on the systems (normal operational users, processes, files) and be familiar with the systems and how to work quickly on them, configure them, troubleshoot, interpret the logs, and automate little pieces here and there with your own scripting skills. (Also sketched below.)
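To put tip 2 in concrete terms, here’s a minimal sketch of outbound log review in Python, assuming a made-up, pipe-delimited firewall log format (direction|action|src|dst|port), since every firewall logs differently. The parsing is throwaway; the point is that outbound denies, and allows to ports that have no business leaving your network, should bubble up to a human instead of rotting in a log file.

```python
import sys

# Hypothetical log format: direction|action|src|dst|dst_port
# e.g. "outbound|deny|10.1.2.34|203.0.113.7|6667"
WATCH_PORTS = {6667, 4444, 31337}   # example "shouldn't be leaving the network" ports

def review_outbound(log_lines):
    """Yield outbound entries worth a second look: denies, and allows to odd ports."""
    for line in log_lines:
        try:
            direction, action, src, dst, port = line.strip().split("|")
        except ValueError:
            continue  # skip lines that don't fit the assumed format
        if direction != "outbound":
            continue
        if action == "deny" or int(port) in WATCH_PORTS:
            yield f"{action.upper():5s} {src} -> {dst}:{port}"

if __name__ == "__main__":
    # e.g. python review_outbound.py < firewall.log
    for finding in review_outbound(sys.stdin):
        print(finding)
```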
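And to put tip 3 in the same sketch form: snapshot what “normal” looks like while the box is known-good, then diff against it when things get weird. The baseline file path and the use of ps here are assumptions for illustration; the concept is the diff, not the specific commands.

```python
import json
import os
import subprocess

BASELINE_FILE = "process_baseline.json"  # hypothetical path

def running_processes():
    """Return the set of process command names currently running (Unix-ish ps)."""
    out = subprocess.run(["ps", "-eo", "comm="], capture_output=True, text=True)
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

def save_baseline():
    """Record the known-good state; run this while the box is still trusted."""
    with open(BASELINE_FILE, "w") as f:
        json.dump(sorted(running_processes()), f)

def diff_against_baseline():
    """Return anything running now that wasn't in the baseline."""
    with open(BASELINE_FILE) as f:
        baseline = set(json.load(f))
    return running_processes() - baseline

if __name__ == "__main__":
    if not os.path.exists(BASELINE_FILE):
        save_baseline()
        print("baseline saved")
    else:
        for proc in sorted(diff_against_baseline()):
            print("not in baseline:", proc)
```

Same trick works for local users, listening ports, scheduled tasks, whatever you care to baseline.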

lessons from another wordpress breach

Hoff also has two posts about a recent incident his blog suffered: Why Steeling Your Security Is Less Stainless and More Irony…, A Funny Thing Happened On My Way To Malware Removal….

This is that perfect example where sharing information helps people. You get an idea of what failed, what mistakes were made, what human behaviors help or don’t help, how an attack actually worked, etc.

Normally I would bullet through some of the points, but there’s nothing terribly new here, and Hoff’s posts are worth the time to read.

the winning losing security debate

I saw the opening salvo on Twitter that caused the blog post, “You Know What’s Dead? Security…” from Chris Hoff, and he ended up penning a really good read.

I don’t think it is worth much to talk about “winning” or “losing,” ultimately. Security and insecurity are eternally linked to each other. This is maybe the first time where I like Hoff’s blog name: Rational Survivability. It’s really about surviving in an acceptable state, or rather, simply not losing. But there’s no real win going on, and it might be too much to expect a win at any time.

I do think Hoff got a little sidetracked in the commentary on the security industry. I’ll agree, in part, that the security industry isn’t making solutions that are aligned properly. But I’ll go on to say I’m not sure how a “product” of any type will ever truly be aligned enough to feel good about. These are just tools, and none will magically make someone think we’re winning, in whatever context of the word or feeling. If anything, the security industry has a problem in trying to make its tools sound like they solve the world… There’s also a certain bit of irony stuck in there somewhere, given that Hoff typically pens about “cloud” stuff… 🙂

If I may dig further, I also dislike the thought of “innovation” in security. Security is a reactionary concept. It reacts to innovations by attackers, or innovations in the things security is securing, for instance new technology or assets. That may not always happen in practice, but then again some activity just doesn’t end up being rational.

the secrecy game

Adam over at the New School of Information Security posted a good read: “How’s that secrecy working out?”. We have a lot of people all happy about how the government is talking about information sharing. The elephant in the corner is: When you involve the government, they want you to share with them, but they won’t share with others and you can’t share with others either. Essentially, it doesn’t do any of us in the private sector any good; in fact it makes things worse, since it ties even more hands and causes people to pussy-foot around issues and details.

…because we (writ broadly) prevent ourselves from learning. We intentionally block feedback loops from developing.

One of the better posts I’ve read this year.

learning from irresponsible disclosure

Gotta link to Robert Graham again over at ErrataSec for the piece: “The Ruby/GitHub hack: translated”. There’s too many good points to pass it up.

1. This is a great example of irresponsible disclosure in action. By attacking GitHub, not only is GitHub now less vulnerable, but more people (hopefully developers and security auditors) are aware of this problem. Sure, more awareness of the problem may mean more people use it against vulnerable sites, but the flaw was already in those sites. The vuln was already present; the risk has just gone up a bit…

2. The problem is inherent in the feature set that makes Ruby on Rails a boon to developers. Pretty much a great example of a design flaw that has benefits, but also has risk. Usability vs security. (A rough sketch of the underlying idea follows this list.)

3. It also means a flaw in one tool affects everything/everyone that uses that tool. GitHub was hacked as a reaction to Ruby on Rails rejecting the bug, GitHub’s choice to use that platform, and their lack of securing (understanding?) a hole.

4. Putting the onus on site owners to blacklist and even understand the issues is probably not the right way to do things. I guess it’s a way to go, but it certainly makes me make a disgusted face.
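As referenced in point 2: the underlying issue was Rails-style mass assignment, where whatever parameters a client sends get applied wholesale to a model. Graham’s post covers the actual Rails mechanics; this is just a rough Python analogue with made-up field names to show the shape of the flaw and the boring whitelist fix.

```python
class User:
    def __init__(self):
        self.name = ""
        self.email = ""
        self.is_admin = False     # an attacker wants to flip this...
        self.public_keys = []     # ...or inject into this, as in the GitHub incident

def update_user_unsafe(user, params):
    """Mass assignment: trust every key the client sent. This is the bug."""
    for key, value in params.items():
        setattr(user, key, value)

def update_user_safe(user, params, allowed=("name", "email")):
    """Whitelist which attributes a request is actually allowed to touch."""
    for key in allowed:
        if key in params:
            setattr(user, key, params[key])

if __name__ == "__main__":
    evil = {"name": "attacker", "is_admin": True, "public_keys": ["attacker-key"]}

    u = User()
    update_user_unsafe(u, evil)
    print(u.is_admin, u.public_keys)   # True ['attacker-key'] -- owned

    v = User()
    update_user_safe(v, evil)
    print(v.is_admin, v.public_keys)   # False [] -- extra params ignored
```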

staying anonymous online is still hard

Robert Graham has a nice addition to the discussion about Sabu/Lulzsec: “Notes on Sabu arrest”. Maintaining anonymity online is hard. In the (increasingly distant) past I ruminated about staying anonymous online (1, 2, 3, 4). It’s hard, takes a lot of work, and you need to maintain absolute vigilance so you don’t screw up even once. I should really update and add to that series, especially in light of smartphones, GPS, web tracking giants…

self-preservation in the criminal underworld

Via the LiquidMatrix article, “Sabu Rats Out Lulzsec”, I got over to the article on FOX, “EXCLUSIVE: Infamous international hacking group LulzSec brought down by own leader”.

This succinctly illustrates one of the biggest tools that anyone has against criminals: the lack of trust amongst criminals, and their fear of justice system punishments. Clearly this requires there to be laws, and the pursuit of them, but ultimately a criminal always needs to be looking over their shoulder and staying distrustful of their peers.

Even the increased level of anonymity the Internet provides is not always enough to keep social* criminals safe.

* Or those that can’t stand on their own. In other words, if you have stolen goods, you still need to sell them to someone/somewhere, or have contacts to do SpecializedTaskB or whathaveyou.

this is why they don’t make you read privacy terms

It’s nice to see useful articles about digital security and privacy starting to grace major media these days. I especially liked this one found around the front page of CNN.com this evening: How to prepare for Google’s privacy changes. I like the steps it shares at the bottom. And I really like this statement:

Google points out that the products won’t be collecting any more data about users than they were before. And, in fairness, the company has gone out of its way to prominently announce the product across all of its platforms for weeks.

In other words: “You mindless sheep, finally you’re going to get pissed about privacy issues that were already flippin’ there!”

As I like to say, if your business model suffers when you have to reveal it to your customers, because of how they react, you need to sit back and do some soul-searching. Be up front about it and let consumers decide if it’s worth it. Don’t just try to see what you can get away with, whether intentionally or through feigned naivety.