compliance-tested vs field-assessed

Bejtlich has posted a really nice beginning (furtherance?) to the discussion of digital monoculture vs heteroculture (or control-compliance vs field-assessed). I don’t have strong feelings either way, but the discussion itself is incredibly interesting to think about. There are pros and cons to each side, and I’d be willing to bet several important factors will dictate the value either approach brings: organizational size, the need to prove a compliance level (gov’t, defense, or just large and public?), and the quality of both internal IT and internal security staff.

While I’ve previously not enjoyed the approach the Jericho Forum has employed to back its vision of the perimeter-less organization, it does help that position to think of an organization as a heteroculture that uses field-assessed measurements for its security efforts. My opinion has typically been that perimeter-less security (as horrible a term as that is, since there is always a perimeter no matter what scope you lay out) and defensible endpoints are things you can only do when you go all-in, which is rare. Much of our security industry still adopts an approach like that only at the barest of levels, which causes it to make no sense.

That’s not to say you can’t have a middle ground in the actual discussion on Bejtlich’s post. I only bring up the Jericho position because going to the extreme on field-assessed heterogeneous environments fits nicely with their world view. I probably fall into the bucket that says good measures of both approaches will bring the most value.

I’ll never be surprised that Bejtlich falls on the “field-assessed” side of this discussion. In fact, I think most trench-friendly security techs will be sympathetic to that side because it deals a bit more in fact, reality, and specifics. Compliance is really made to be friendly to non-techs, both on the assessment side and in the consumption of the reports. It’s also the side I tend to be more friendly to.

why shodan is scary and not scary at once

I haven’t mentioned SHODAN because I seem to see most everyone else mention it. Robert Graham at ErrataSec has a great, quick post about the site and why it is scary. It really is scary. Think about all that noise from scans you get on your border. Those are people randomly spending hours, days, weeks, months trying to find hosts to attack. SHODAN can change those months of scanning into a search query that takes seconds.

Google hacks already leverage the power of these searches. If a piece of forum software has a hole in it, use Google to search for every known instance of that version.

If you run Server XYZ and tomorrow a remote vulnerability is found, attackers can now find it in seconds.

Now, while this is scary, there is a caveat: this shouldn’t really change your security stance as the host! Yes, attackers can find you faster. But they could find you previously anyway because you’re hosting remotely accessible servers. This doesn’t make your web server any more vulnerable. But it should influence your time-to-patch and your vigilance in keeping abreast of breaking issues.
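The reason SHODAN can collapse months of scanning into one query is that servers volunteer identifying banners to anyone who connects, and SHODAN simply indexes them. As a minimal sketch of the idea (the banner string below is hypothetical, and the parsing function is my own illustration, not anything SHODAN publishes):

```python
import re

def parse_server_banner(banner: str) -> dict:
    """Pull product and version out of an HTTP Server header line.

    Banner-indexing services work because this string is handed out
    freely in every response; no exploit is needed to collect it.
    """
    match = re.search(r"^Server:\s*([^/\s]+)(?:/(\S+))?", banner, re.MULTILINE)
    if not match:
        return {}
    return {"product": match.group(1), "version": match.group(2)}

# A hypothetical banner captured from a single HTTP response:
banner = "HTTP/1.1 200 OK\r\nServer: Apache/2.2.8\r\nContent-Type: text/html\r\n"
print(parse_server_banner(banner))  # {'product': 'Apache', 'version': '2.2.8'}
```

Index millions of those and a query for a vulnerable version string returns targets in seconds, which is the entire point.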

The rest of what Robert says stands firm, though. Attackers salivate over something like this.

a lesson from meeting pci

At work we’re continuing to chip away at dealing with PCI requirements. There are lots of lessons to be learned from such a project. One of the more painful ones: it is relatively easy to say (and even convince an auditor!) that you meet each bullet requirement, but it is difficult to have effective security without improving your staff. There are a number of bullets that involve logging, reviews, and monitoring…things that are driving the SIEM/SIM and other industries. But these are also things that security geeks realize really need analysts behind the dashboards and GUIs. Otherwise these products only skim off the very slim top x% of issues, the very easy ones to detect, and miss a hell of a lot else.
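To make the “skimming the easy x%” point concrete, here is a toy sketch of the kind of signature rule a turnkey monitoring product ships with. The log lines and the rule are entirely hypothetical, invented for illustration:

```python
# Hypothetical log lines; only the first contains an obvious failure keyword.
logs = [
    "sshd: Failed password for root from 203.0.113.5",
    "sshd: Accepted password for jsmith from 203.0.113.9",
    "sshd: Accepted password for jsmith from 198.51.100.77",  # odd source, same account
    "app: user jsmith exported 40,000 customer records",      # no failure keyword at all
]

def naive_rule(line: str) -> bool:
    """A turnkey-style signature: alert only on loud, obvious failures."""
    return "Failed" in line or "denied" in line.lower()

alerts = [line for line in logs if naive_rule(line)]
print(alerts)  # only the single noisy failed login fires
```

The noisy failed login fires an alert, while the successful logins from unusual sources and the bulk data export sail straight through. Catching those takes a human analyst who knows what “normal” looks like, which is exactly what the bullet-point compliance view glosses over.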

infosec management layers illustration

Rybolov has a great graphic depicting the layers in information security management. It’s a great graphic to keep in mind, especially the concept that each layer only knows about the layers right next to it. This causes breakdowns the farther up or down you get, even in private business, which may only care about layers 1-4.

If this graphic makes enough sense that you want to learn more, watch Michael’s Dojosec presentation (the first vid).

the blame game of 2010 has already begun

Mogull over at Securosis points out an article on a lawsuit against a POS vendor and implementor for passing on insecure systems that violated PCI. Or something to that effect.

Either way, this is a Big Deal. This is something I’ve been patiently waiting for over the last couple years as PCI has gained traction.

I’m a little early, but I believe 2010 will be the year The Security Blame Game becomes further legitimized as a business model. In other words, we’ve long had a quiet blame game when it comes to security, but as more disclosure becomes required and more cost is moved from party to party, the quiet blame game is going to get very public, very annoying, and very costly.

Which is especially scary because security is not a state or achievement. You’ll end up with impossible contracts and a bigger gulf between what people think is secure and what is actually in place. And it will be shoved deeper into the shadows when possible. And compliance will continue to be questioned despite the improvements and exposure it can provide.

Here are some other observations I expect to hear more about in 2010:

  • more exposure of stupid configurations, implementations, and builds of “secure” systems
  • industry needs to clean out the security charlatans, and cost/lawsuits have to do it
  • more pressure to do security “correctly” which is far more costly than most realize

And one sales pitch I *hope* dies:

“Turnkey” security tools whose vendors brag that you just turn them on, let them loose (sometimes with one-time tuning), and you’re secure; no staff, no extra business process, no ongoing costs other than licensing. Bullshit. Every security technology needs analysts at the dashboards, at the very least. Hell, even in plain old IT operations, far too many issues and incidents are found by third parties or by accident while looking at something else. It’s an epidemic (and an indirect product of economics) that will not go away on its own.

I really hope the idea of security as a process continues to be foremost, and the idea that something is “secure” begins to die. I doubt the latter will ever happen; it has persisted for decades so far in computing, and longer in the realms of security in general. I’m not saying we need to solve security; in fact, we need to solve our perception of it, so that we don’t ever ask or expect to “solve” security…