a saved comment on security ethics in a questionable situation

Slashdot ran an email from a senior security engineer lamenting his company’s ethics in security auditing. Dan Morrill posted about it, which was my first exposure to it. I posted a comment on his blog, and he sort of lightly guilted me into posting it on my own blog here. Honestly, I had some points in it that I kinda didn’t want to lose to the ether, so I’m saving them here for myself.

So read Slashdot first, then Dan, and then my post will make more sense. I will concede that audits really are a bit about negotiating your security value, but I think that negotiation needs to be documented: Risk A, mitigating factors B, accepting C…

I know it’s a cop-out, but I would look for work elsewhere. It’s not only a cop-out, but also a bit of a cynical approach. But once you start down this road of fudging the risks and numbers, where do you stop? Where do you re-find that enthusiasm for an industry you’re helping to game? What if your name gets attached to the next big incident? What if the exec who got you to bend throws your name out to others looking for the same leeway? Integrity is maybe our most important attribute in security.

I know strong-arming (or outright lying!) happens; it always will. I think the only way to prevent it is to have a very mature, regulated industry much like insurance or the SEC/accounting/financial space.

Of course, this also means we need to remove or greatly reduce subjective measures and focus on objective ones. Those are the ones we hate: checkboxes and linear values. Those suck to figure out, especially when every single org’s IT systems are different. I just don’t think that will happen for decades, if ever. Unlike the car industry or even the accounting disciplines, “IT” is just too big and broad and has too many solutions to regulate.

This leads to one of my biggest fears with PCI. Eventually it will be something negotiated, and the ASVs will be the ones taking the gambles. Lowest price wins for a rubber-stamp PCI compliance. Roll the dice that while we rake in the money, our clients don’t get pwned…good old economics and greed at work.

This also penalizes the many people who are honest, up front, and deal with risk ratings in a positive fashion. Sure, they may get bad scores, but that means there is room for measurable improvement. There are honest officers and people in this space. But there are also those who readily lie, deceive, and roll the dice on security, and those are the ones who will drive deeper regulation and scrutiny.

I’m confused by the post itself: I’m not sure whether his company is being strong-armed or doing the strong-arming.

If his company is being strong-armed, then any risk negotiation should be documented. “We rated item #45 as highly important. Client (name here) documented that other circumstances (listed here) mitigate this rating down to a Medium.”
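To make that concrete, here is a minimal sketch of what such a documented risk-negotiation record could look like. This is purely illustrative; the field names, ratings, and `RiskNegotiation` class are my own assumptions, not from PCI, any audit standard, or the original post.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskNegotiation:
    item: str                      # finding identifier, e.g. "item #45"
    original_rating: str           # auditor's initial rating
    negotiated_rating: str         # rating after negotiation with the client
    mitigating_factors: list[str]  # circumstances the client documented
    accepted_by: str               # named person accepting the residual risk
    accepted_on: date              # sign-off date

# Mirrors the example above: item #45 negotiated from High down to Medium.
# The mitigating factors here are hypothetical placeholders.
entry = RiskNegotiation(
    item="item #45",
    original_rating="High",
    negotiated_rating="Medium",
    mitigating_factors=[
        "compensating control X documented by the client",
        "limited exposure of the affected system",
    ],
    accepted_by="Client security officer (name here)",
    accepted_on=date.today(),
)
```

The point isn’t the format; it’s that the original rating, the negotiated rating, the claimed mitigations, and a named sign-off all end up on paper.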

If his company is doing the strong-arming, you might want to just let senior mgmt do their thing. Ideally, if shit hits the fan, it is the seniors who should be taking the accountability, not others, especially if they’ve been involved in the decision-making process.

With this line of thinking, there is another thing: the geek factor. As a geek, I tend to know about and inflate the value of very geeky issues. It is often up to senior mgmt or the business side to make decisions on the risks. Sometimes the decision is made to accept the risk. This can mean not fixing a hole because the cost is too great, even if there is movie-plot potential for a big issue. It might be worth sitting back, taking some time, and reflecting on the big picture a little more. Are these strong-arm tactics covering up truly important things? Or are they simply offending our geek ethic?

One could also ask: what is the proper measure of security? It is always a scale between usability and security, and in the words of the poster, there will always be some point on that scale that involves accepting some risk in order to keep one’s job. The alternative is to be so strict about security that you could only get away with it in a three-letter agency or a contractor thereof!

Ok, after all of that, if the guy wants to keep his job (or not, I guess) and still blow the whistle on such bad practices, I’ll have to put on my less-white hat and give some tips.

It sucks to do, but sometimes you do have to skip the chain of command and disclose information to someone above the problem source. I’d only do this after carefully considering the situation and making sure I have an out. Even an anonymous report to a board of directors is better than silently drowning with the rest of the ship.

If there is a bug or vulnerability in an app or web app, get it reported through your internal reporting mechanisms, like a bug or ticket system. Get it documented. The worst they can do is delete it, at which point you might want to weigh disclosing it publicly somehow… (of course, by that time, they’ll likely know it was you no matter how anonymous you make it).

If the company is big enough and the issues simple enough, you might get away with publishing anonymously in something like 2600, The Consumerist, or a third-party blog. Sadly, when trying to get people to understand technical risk, it can be difficult to be precise, understandable, and concise all at once. If the guy belongs to some industry organizations (InfraGard, ISACA, etc.), leaning on some trusted (or NDA-bound) peers can also be helpful.