do you need a software security group?

Gary McGraw has written an article explaining why you need a software security group if you want to have a software security initiative. Saw this fly past on Twitter via Jeremiah Grossman.

This is a great scenario that I’m sure happens everywhere (yes, everywhere):

…if you go to the development organization and ask about software security they will immediately refer you to the network security people. “We have a department for that security stuff,” they say while ushering you out the door. But when you make your way to the network security department and ask them about software security, they say something like, “Yeah, those developers really should produce code that doesn’t suck. Those guys ruin our perfectly configured network with their broken applications.”

He also throws this gem in:

To make perfectly clear that this is a management issue, ask yourself who gets fired when your firm experiences a software security problem? Who gets rewarded when your firm does a good job with software security? If the answer is “nobody,” then you have a real problem.

The only downside to this set of data and its underlying conclusion is exposed in the last few paragraphs of the article: the companies McGraw deals with are probably pretty large. In other words, they have the opportunity for real security groups. I’d wager the vast majority of firms don’t have that freedom (or budget), and have to make do with, at best, part-time security. Sadly, this comes either in the form of small, annual security tests after the real work is done, or as some unlucky bloke’s duties shared with his day-to-day tasks and projects. And we all know which half of those duties he will be pressured to do first. (Let alone the arguably expert-level amount of knowledge one needs…)

Even deeper than this being a management issue, this is an issue of how important security is. If it is important, then maybe management will do it properly. If it is not important, then you had better just hope you are a big enough org to create a dedicated group for it.

thoughts on the verizon 2009 dbir supplement

A Verizon 2009 DBIR supplement has been released and I’ve finally looked it over. If you don’t have much time, just browse the intro, and then read the Case Example sections (pg 7-21) for each attack type and the Conclusion section (pg 22). While there are still questions on specifics (aren’t there always?), this is far more information than we had before!

A few things pop into mind as I read down the case examples. Egress controls. Policy and administrative access. Noticing strange behavior. Termination policy (That guy must have been laughing at his former employer’s IT staff every time he connected!). Endpoint integrity (and policy). Default passwords (ouch; vendors install your shit better, too!). Endpoint security. SQL injection, network segmentation, egress flow monitoring. SQL injection and egress monitoring (impressed that this org even bothered to parse logs for SQLi, although obviously not an always-on practice with months-old findings). Vendor/provider security and notification (#9 is really interesting, and a very hard one to combat/monitor). Proper authentication and help desk training (another team in IT that is evaluated by customer service [i.e. doing what the customer asks] as opposed to doing the right thing; often not paid enough to care, to be honest). Web app flaw. Physical theft and storing of card information (does anyone do regular rounds just to make sure nothing is stolen or added to things like this?). Log/authentication attempt monitoring and web app flaws (another ouch, but another one that so few really do). Endpoint security (RAM scraping…wow!). The phishing attack is interesting, as it involves training and pretty good log watching.
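As an aside on that log-parsing point: even a crude, scheduled pass over web access logs will surface the noisier SQL injection attempts, which beats finding them months later. Here’s a minimal sketch (Python; the signature patterns and the access.log path are my own hypothetical illustration, nothing from the report) of what an always-on check could look like:

```python
import re

# Crude, hypothetical SQLi signatures; real attacks vary wildly (encodings, comments, blind techniques).
SQLI_PATTERNS = re.compile(
    r"(union(?:\s|\+|%20)+select|'\s*or\s*'?1'?\s*=\s*'?1|;--|%27|sleep\s*\()",
    re.IGNORECASE,
)

def suspicious_requests(access_log_path="access.log"):
    """Yield web access log lines whose request strings match the crude signatures above."""
    with open(access_log_path, errors="replace") as log:
        for line in log:
            if SQLI_PATTERNS.search(line):
                yield line.rstrip()

if __name__ == "__main__":
    for hit in suspicious_requests():
        print(hit)
```

It won’t catch anything clever, but it turns “we eventually noticed” into “we noticed this week.”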

I really like the Conclusion section, especially the bulleted lessons it stresses.

In looking at these specific cases, it really pops out that we have a mix of “easy” and “advanced” security concepts here. Egress and deeper log analysis I consider more advanced topics because they take time and staff to really tackle properly. Other attacks should be easy, but are hard to get people to fix, like SQL injection or application flaws. To most business units, if it appears to work, then the project and capitalization are over. Evaluating, fixing, and testing something that doesn’t immediately appear to affect the bottom line gets pushed off.

on training your system administrators

John Strand has a great, quick blog post over at pauldotcom.com on tapping and training your system administrators as a security asset.

There are actually quite a few similarities and strengths that system (and network) administrators have that parallel or complement those of security professionals. Disclaimer: I *am* biased, since being a sys/network administrator is my major function in my day job. (And yes, I’m looking to get deeper into security in a more full-time role.) Here are a few notes I’d throw down.

1. I’ve said it many times before, but I believe if you truly want the real story about security in an organization, you get as low down into the trenches as you can get. This usually means your IT staff. That often means your desktop support staff, admins, and even app coders; but most probably admins. (I knock the desktop guys [I used to and still pretend to be one] only because they tend to be evaluated on their customer service. They don’t get rewarded for properly configuring a host-based firewall, but they do get rewarded for disabling it and allowing Joe Blow to get back to work.) Incidentally, if you want to know the health of a company, I’d also say your admins are a good source; they touch and know a lot!

2. Administrators are used to being at the end of the shaft, even more so than desktop specialists or coders. More often than not, the admins are the glue keeping all the disparate parts working together and covering for other IT sections that aren’t really up to snuff. They also tend to absorb things like poor applications or bad business process decisions that impact IT. They, along with the desktop dudes, are likely to see firsthand the people who violate policy. The bottom line usually ends with the administrators, and they’re used to getting it from management and from other IT persons. If anyone in the company is used to putting on the brakes, talking policy and standards, and getting people to line up with them, it is your admins.

3. If you want to roll out a security initiative or project of any type, more than likely you’ll need an admin to give you access, gather logs, set up servers, configure the network for your visibility, provide documentation and diagrams, or be next to you during an incident response. Basically, get to know them and get on their side (and they on yours). They’re also the ones who will make or break your policies, even more so than the desktop dudes, especially if they need to understand and follow the steps on, say, how to harden a server. (Many desktop workers aspire to be systems workers, so there is also a tendency to look to the admins for leadership, formally or informally; compare paygrades if you doubt this.)

4. They also want to make sure things are done right. Admins are usually time-sensitive and risk-averse: they don’t want things to break, they don’t want to be blamed when things break that they didn’t cause, and they want to troubleshoot intelligently. Even the worst admins tend to have the beginnings of these habits. It’s just a result of a ball rolling downhill.

5. That “A” in CIA is a shared role with the admins. It is also their duty to monitor and maintain availability for the masses.

6. Everyone learns about least privilege and separation of duties, but out here in the real world, business IT is run by admins with godlike access. That’s just how it is. If you think otherwise, you’re not thinking of all the ways they can end up pwning you. This means they absolutely need to have a mind for security if you expect to get anywhere. If the admins don’t do it, then you’re stuck with a top-down approach, which doesn’t always work. Even if bottom-up approaches get mired in budget constraints and buy-in, you can still do a lot to combat insecurity by having security-minded admins. Think about code: to secure code, you need to bake security into its creation with skilled coders and low-level policy. Same thing with systems and admins.

7. It is my really quick, knee-jerk opinion that every real IT security pro needs some practical, hands-on, systems or network or desktop administrator experience. This helps immensely on various levels. This isn’t always true, which is why I always keep this opinion short!

those mysterious undetected data breaches

Wired continues reporting on the Albert “I hacked Hannaford/TJX/more” Gonzalez saga. Lines like this are what piss me off:

By identifying intrusions that “had not yet been detected,” his lawyer wrote, Gonzalez helped the companies institute protective measures to secure their data and prevent future breaches.

If true, I’m waiting patiently for those companies to disclose their breaches. And a big thumbs up to those firms living in ignorance.

Also, I would hope that someday even more information comes out on how these recent huge breaches occurred. It really ties our hands to hear that a huge breach occurred but have the details that would help everyone else avoid repeating the same mistakes pushed under the rug and sealed.

web hacking lawsuit against minnesota public radio

Read an interesting story this morning about a lawsuit from a Texas company accusing a Minnesota Public Radio reporter of hacking into their web system. Read the full article to get a good idea of what all went down, especially the last 3 paragraphs, which I feel really get to the heart of this somewhat complex issue. Here’s the last quote from the CEO of the Texas company whose website had the weaknesses:

“… in our contract, we had 60 days to fix any problem. But there was still an unauthorized intrusion, and that was wrong.”

If you ask me, you had completely dumb weaknesses in your site. Just because you offer 60 days to fix something doesn’t mean you get a free pass on even the most asinine security issues. They fucked up.

I’m not a judge, but my kneejerk reaction to this lawsuit would be to have the Texas company thank the reporter for reporting the weaknesses in their web presence; a service tendered for free. The reporter should learn that it isn’t such an easy thing to just twiddle with a website and call it good. She was stepping into murky waters and should exercise more caution in the future, but at least it does not sound like she had malicious or self-serving intent. And the general public and every employee and customer of that company should thank the reporter for exposing an issue that likely would not have been fixed otherwise.

I would hope this doesn’t even progress past the prosecutor.

my mini-rant on diggnation

I’ve been a very big fan and loyal watcher of Diggnation since about episode 4 (I got into it from the broken). This is somewhat strange since I don’t use Digg.com and haven’t beyond trying it out for a few days. Hard to believe Diggnation has been around since mid-2005. I adore the guys and their work, and I have caught myself thinking of them as friends due to the intimate nature of their podcast. I’ve even had the one email I sent in read during an episode. But in the past year I’ve grown less and less loyal and find that I enjoy far fewer shows per month. Where the early years had gut-bustingly amazing shows every week, many of the shows from the past year are utterly forgettable, even 10 minutes after watching them. So here’s my mini-rant on why Diggnation has been disappointing lately. (Yes, I understand it has been 5 years and the guys likely aren’t as fresh as they used to be…so also take this as a wistful, nostalgic rant rather than something I’m demanding and angry about.)

1. Not enough drinking. Alex and Kevin (and others) are at their most entertaining when they are drinking beer. I have not done any research (although I’m sure some other geek online has), but I’d be willing to bet you can absolutely judge the entertainment factor of the show by what they announce they’re drinking. If it is beer and they’ve obviously had a couple already, thumbs up! If it is tea for no real good reason (like being sick), then the show is almost always bland. Also, whatever the hell happened to drinking interesting beers? Years ago they had interesting stuff every single week. Now they have interesting stuff maybe once every 3 months. But it is still hard to tell someone to drink more; their drinking less is obviously healthier. I’m just saying, I’d bet 95% of their best moments in 4 years of Diggnation have been alcohol-induced. They don’t need to be trashed, but they definitely kick it up a notch after a few brews.

2. Horrible preparation. This falls heavily on Kevin far more than on Alex, but it is insulting and annoying when either of them is obviously seeing an article for the first time and trying to fake his way through it (and often failing). What a waste of time for the audience. Even if the story is interesting, chances are one of them is getting it wrong because he hasn’t read the story in advance or had any thoughts on it yet. I’m not asking for a script or a word-for-word read-through, but at least have a reason to bring the story up! In the early years, the guys always had reasons for the stories, absolutely without fail. This drove good thoughts on the stories, even if the discussion was short. Today, not so much. It is no surprise that the stories with the least time and least audience reaction are the ones they didn’t even know they were covering. Really, they often just phone the segments in these days, which is disappointing because I adore both of the guys and their work.

3. Fewer interesting tech stories. Your audience is tech. I don’t care if it is a robot that smoke-screens the Japanese Prime Minister or zombie dogs. That’s geek and tech! Read on for a reason why there may be fewer interesting tech stories in some episodes…

4. Taping more than one episode at a time. OK, I don’t mind this *too* much; I mean, one of them has to travel every week to do the show! But seriously, your audience is not dumb. We know when your stories are a week stale (and the “next best” stories from the previous week!). We know when you’ve just changed shirts. We know when you’re taping two episodes at once. That’s OK, but don’t freakin’ try to fake that it’s the next week. Today’s episode 233 is the worst offender in quite some time (Alex has the same t-shirt on…and purposely refers to his beer as being the same as “last week’s”). We’d truly understand if you were up front about taping two episodes at once. But don’t let that water down how much you drink or mean you have to take less interesting stories for that week, such as the “tier 2” stories. Someone should at the very least be saving up interesting stories on your off week! I just really hate feeling like I’m being had when recently it has been very obvious they’re double-taping or worse. Even just 2-4 hours a week can cover 2 weeks’ worth of stories.

5. Too much time for sponsors, or poorly placed sponsor copy. I understand sponsors make the world turn, but everyone will understand if you breeze through the sponsor copy and get on with being interesting. It *is* actually interesting when you have something to say about the sponsor (like some of the games done previously, or even the Zune), but we’re a savvy audience: we know when you’re being a bit fake about the enthusiasm and just padding it out, and we know when you really believe what you’re saying and like/use the product. But still, don’t let sponsor copy dominate or steal time from emails/convo. I’d really like to see 2 or 3 more stories after the sponsors, rather than the sponsors being the start of the homestretch. It always makes the last story seem trivial, quickly covered, and quickly moved on from.

6. Less interesting emails. For guys who’ve admitted in the past to having a full Gmail account, we can’t seem to get very interesting emails read on the show anymore. I suspect they’re being picked out at the last minute. I don’t care if an intern does it, but get someone to spend an hour perusing emails and picking out something convo-worthy. Or a hot pic. 🙂 And spend some damn time on the topic rather than saying thanks and that’s it.

7. No Thanksgiving episode? I think this might illustrate that enthusiasm in *making* Diggnation has diminished. No one wanted to do something cute for the Thanksgiving episode anymore? Or at least a best-of reel for the year? Those have always been keep-worthy in the past. It saddened me to not see one this year.

In the end, it really just feels like they’re less interested in doing Diggnation than they used to be. At least they still have the enthusiasm to meet up, be friends, and hang out with beers and good geek conversation. I’ve not stopped watching, nor do I want to stop watching. I just feel less enthused myself when every new episode comes out, and I often think that my continuing to watch is as much habit as genuine interest anymore.

coming to terms with data loss prevention

I’ve recently been digging deeper into network DLP (part of a PCI initiative in an organization). I’ve long narrowed my eyes at the concept of the DLP space, but I feel a lot better about it lately. Let me briefly explain…keeping in mind I am discussing network DLP and not endpoint DLP.

My original thinking about DLP has always centered on how many limitations it has. It is fun to make a list of all the ways you can circumvent the functions DLP provides. This always made me sigh in exasperation about DLP (data loss prevention!) as a security tool to prevent data loss. It’s not that I wanted DLP to be infallible, but I thought it was silly that even simple things could defeat it, like HTTPS/SSL or a password-zipped file.

But now I believe DLP is not supposed to strictly be a security tool. DLP would be better labeled a Sensitive Business Process Identifier. What DLP really does is identify and alert on business processes sending XYZ data over channels they should not be using (and confirm the channels they should). Instead of stopping a malicious individual from exfiltrating information, DLP really wants to act like the bumpers in the gutters of a bowling lane: make sure valid business processes aren’t using poor channels to move data, and somewhat log/assure when data is moved properly. My old thinking involved malicious activity; my new thinking involves business-valid activity. That’s a big difference!
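To make that concrete, the detection half of network DLP is mostly pattern matching plus sanity checks over reassembled cleartext traffic. Here’s a minimal sketch (Python; the regex and function names are my own illustration, not any vendor’s actual engine) of how card numbers might get flagged:

```python
import re

# Candidate PANs: 13-16 digits, allowing spaces or dashes as separators.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits):
    """Standard Luhn checksum, used to weed out random digit strings."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(payload):
    """Return substrings of a cleartext payload that look like valid card numbers."""
    hits = []
    for match in PAN_CANDIDATE.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(match.group())
    return hits

print(find_pans("order notes: card 4111 1111 1111 1111, exp 12/12"))  # flags the test PAN
```

Which is exactly the point: this catches a well-meaning business process emailing a spreadsheet of card numbers in the clear, and does nothing against the person who password-zips the same spreadsheet first.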

Does this satisfy PCI checks? Technically, I guess so. But does it offer much assurance that data is not being exfiltrated from the network? No. Does DLP make a security geek feel good? Not when considered alone. When considered as an advanced item in a mature security posture, then perhaps it is merely OK.

A very valuable side benefit of DLP’s approach is that it drives identifying where sensitive data resides and transits. That alone is almost worth the cost of DLP to the many companies that have no idea where this stuff sits or moves.

trying to agree on risk, or at least fixing what you can

I wanted to just point out two interesting articles that go well together.

First, Rich over at Securosis talks about security possibilities and probabilities. The gist is that some security vulnerabilities are possible, but may not be very probable at all. This argument could delve into the practicality of security with regard to budget and time. Just because an attack is possible doesn’t mean you *have* to fix it if it is extreme, specific, and not likely to ever happen. (It is unfortunate that Rich used Mac malware as an example, as that is a pretty passionately inflaming topic…then again, maybe the original topic was Mac malware and it turned into this blog post…I don’t know, but it shouldn’t detract from the point above.) One other analogy I have heard was a government official taking a tour of a data center and remarking about how a spy could crawl over the cage walls or under the raised floor. He was serious. Yes, it might be possible, but hopefully other factors mitigate it, depending on how much value you place on what is being protected.

Second, Chris at the Veracode blog talks about the wall that sometimes (often!) appears where developers want proof of an exploit rather than simply fixing the possible bugs presented to them. Do you spend the time to prove an exploit and then fix it, or just fix it? What if it really is esoteric?

I guess my point in highlighting these two other posts is to illustrate the dance that security has to perform, between actually fixing issues and valuing those issues against other factors. Rich and Chris are both making risk decisions, but their audience may not agree with those valuations. Very rarely do we seem to be able to agree on such valuations in this industry. That’s not a knock; it’s simply a fact of life if you ask me.

This leads into one of my few arguments against an audit model like accounting has. Those work because accounting only has a finite number of ways to do things. IT has an infinite number of ways to solve its problems. Of course, that causes even more consternation and argument…

meeting security questions with more questions

I find it interesting that so many security questions are addressed by asking more questions. What level of logging should I have? Well, that depends, what are you protecting? Do you have staff to watch logs? Budget to buy something to watch logs? What do you expect from logs? And so on… One answer usually just doesn’t fit every situation.

I can actually bring this back to my old discussion on security religions. Some people believe in absolute solutions that are secure and cover every situation. Others believe in incremental security, where you may have to layer protections to cover all your bases, and maybe no single layer is all that secure in itself.

This takes on a new dimension when you talk about scope. Are you talking about security on a macroscopic scale (e.g. national, global, Internet-wide) or a microscopic one (e.g. a single organization, a home office)? Scope can have even more implications, such as budget, coverage, and so on, but macro vs micro is the best start.

I often engage in security discussions that can lead to heated argument if the concepts of security religion (of the participants) and scope (of the discussion) are not addressed up front. Participants can become violently argumentative when they’re simply talking about different things (global DNS security vs your SMB DNS presence). Hence, security discussions or questions, to me, almost always begin with more questions: questions designed to fit the scope and religion, while also answering other necessary questions that eventually lead down the path of an informal risk valuation…

hacks moving beyond credit, accounts, identity?

I’m sure you’ve heard about the hacked climate emails. Think about it for a moment. This could be a very public incident that sparks new thinking on the value of hacking. There is value, and it’s not just in stealing money directly from bank accounts, but rather in more niche situations. Go to a climate conference, sniff the wireless networks, harvest, and sell or release for personal gain.

Which also makes you wonder where WikiLeaks came across the 9/11 text messages also recently released… Were they stored somewhere that got hacked, or did someone pick them up while eavesdropping on cell transmissions?

compliance-tested vs field-assessed

Bejtlich has posted a really nice beginning (furtherance?) to the discussion of digital monoculture vs heteroculture (or control-compliance vs field-assessed). I don’t really have strong feelings on either side, but the discussion itself is incredibly interesting to think about. I think there are pros and cons to either side, and I’d be willing to bet various important factors will dictate the value either approach brings: things like organizational size, the need to prove a compliance level (gov’t, defense, or just large and public?), and the quality of both internal IT and internal security staff.

While I’ve previously not enjoyed the approach the Jericho Forum has employed to back their vision of the perimeter-less organization, it does help that position to think of an organization as a heteroculture and to use field-assessed measurements for security efforts. Typically my opinion is that perimeter-less security (as horrible a term as that is, since there is always a perimeter no matter what scope you lay out) and defensible endpoints are something you can only do when you go all-in, which is rare. Still, much of our security industry only goes into an approach like that on the barest of levels, which causes it to make no sense.

That’s not to say you can’t have a middle ground on the actual discussion in Bejtlich’s post. I only bring up the Jericho position because going to the extreme on field-assessed, heterogeneous environments fits nicely with their world view. I probably fall into the bucket that says good measures of both approaches will bring the most value.

I’ll never be surprised that Bejtlich falls on the “field-assessed” side of this discussion. In fact, I think most trench-friendly security techs will be sympathetic to that side because it deals a bit more in fact and reality and specifics. Compliance is really made to be friendly to non-techs, both on the assessment side and in the consumption of the reports. It’s the side I tend to be more friendly to, as well.

why shodan is scary and not scary at once

I haven’t mentioned SHODAN because I seem to see most everyone else mention it. Robert Graham at ErrataSec has a great, quick post about the site and why it is scary. It really is scary. Think about all that noise from scans you get on your border. Those are people randomly spending hours, days, weeks, months trying to find hosts to attack. SHODAN can change those months of scanning into a search query that takes seconds.
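For context, what SHODAN indexes is largely just the banner a service volunteers when you connect to it. Here’s a minimal sketch (Python; the hostname and port are placeholders) of the single banner grab that SHODAN has effectively pre-computed for the whole Internet:

```python
import socket

def grab_banner(host, port=21, timeout=3.0):
    """Connect to a service and return whatever it volunteers first (FTP/SMTP-style banner)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace")
        except socket.timeout:
            return ""

# Example (placeholder host): banners often reveal vendor and version strings worth searching for later.
print(grab_banner("ftp.example.com"))
```

Do that for every routable address and every interesting port, store the results, and “months of scanning” really does become a single search query.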

Google hacks already leverage the power of these searches. If a forum software has a hole in it, use Google to search for every known instance of that version.

If you run Server XYZ and tomorrow a remote vulnerability is found, attackers can now find it in seconds.

Now, while this is scary, there is a caveat: this shouldn’t really change your security stance as the host! Yes, attackers can find you faster. But they could find you previously anyway, because you’re hosting remotely accessible servers. This doesn’t make your web server any more vulnerable. But it should influence your time-to-patch and your vigilance in keeping abreast of breaking issues.

The rest of what Robert says stands firm, though. Attackers salivate over something like this.

a lesson from meeting pci

At work we’re continuing to chip away at PCI requirements. There are lots of lessons to be learned from such a project. One of the more painful ones: it is relatively easy to say (and even convince an auditor!) that you meet each bullet requirement, but it is difficult to have effective security without improving your staff. There are a number of bullets that involve logging, reviews, and monitoring…things that are driving the SIEM/SIM and other industries. But these are also things that security geeks realize really need analysts behind the dashboards and GUIs. Otherwise these products only skim off the very slim top x% of the issues, the very easy ones to detect. And miss a hell of a lot else.

infosec management layers illustration

Rybolov has a great graphic depicting the layers in information security management. It is a great graphic to keep in mind, especially the concept that each layer only knows about the layers right next to it. This causes breakdowns the farther up or down you get, even in private business, which may only care about layers 1-4.

If this graphic makes enough sense that you want to learn more, watch Michael’s Dojosec presentation (the first vid).

the blame game of 2010 has already begun

Mogull over at Securosis points out an article on a lawsuit against a POS vendor and implementor for passing on insecure systems that violated PCI. Or something to that effect.

Either way, this is a Big Deal. This is something I’ve been patiently waiting for over the last couple of years as PCI has gained traction.

I’m a little early, but I believe 2010 will be the year that The Security Blame Game becomes further legitimized as a business model. In other words, I feel that we’ve long had a quiet blame game when it comes to security, but as more becomes required to disclose and more cost is moved around from party to party, the quiet blame game is going to get very public, very annoying, and very costly.

Which is especially scary because security is not a state or achievement. You’ll end up with impossible contracts and a bigger gulf between what people think is secure and what is actually in place. And it will be shoved deeper into the shadows when possible. And compliance will continue to be questioned despite the improvements and exposure it can provide.

Here are some other observations I expect to hear more about in 2010:

  • more exposure of stupid configurations, implementations, and builds of “secure” systems
  • the industry needs to clean out the security charlatans, and costs/lawsuits may have to do it
  • more pressure to do security “correctly” which is far more costly than most realize

And one thing I *hope* happens more:

The death of the “turnkey” pitch: security tools whose vendors brag that you just turn them on and let them loose (sometimes with one-time tuning) and you’re secure, with no staff, no extra business process, and no ongoing costs other than licensing. Bullshit. Every security technology needs analysts at the dashboards, at the very least. Hell, even in plain old IT operations, far too many issues and incidents are found by third parties or by accident while looking at something else. It’s an epidemic (and an indirect product of economics) that will not go away anytime soon. I really hope the idea of security as a process continues to be foremost, and the idea that something is “secure” begins to die. I doubt the latter will ever happen, as it has persisted for decades so far in computing, and longer in the realms of security in general. I’m not saying we need to solve security; in fact, I want to say we need to solve our perception of it, so that we don’t ever actually ask or expect to “solve” security…