doing infosec right from shmoocon

“The reality is that there’s a lot of fame in doing one little tiny thing [as a security offense researcher] and somehow being a hero for it. There’s not a lot of fame in slogging through the shit, day in, day out, and *not* making the news. And when you’re a defender, the goal is to not make the news.” – Myrcurial, Shmoocon 2012.

This quote comes from a great presentation called Doing InfoSec Right from Shmoocon 2012, which itself is chock full of truths. Call me a fanboy, but in my catching up on videos/presos this past month, I’ve caught several talks from James Arlen, and I gotta say the man rocks. (I was already a Potter fan, so I don’t need to declare that.)

Doing InfoSec Right Pt 1 and Doing InfoSec Right Pt 2

Here are some bullet points that are whole-evening discussions in themselves:

– It’s hard to get experience in defense, and the tools lag behind. This topic is important, but it should be prefaced with some role definitions. There is a place for offensive-minded security defenders, but you should also have admins, developers, QA, and service desk folks who are operators first but are baking security in. These two general roles can easily be separate career paths. This came up later as red team guys vs blue team guys.

– Lack of innovation in defense. While I broadly agree with this, it’s hard to agree too strongly when no one offers ideas on what constitutes innovation. *What* should we be innovating? I might even buy that we *don’t* need innovation, we just need more emphasis on security and better efficiencies (which modern mega-suite tools fail to deliver).

– Lack of sharing in defense / lack of cons and presentations with defense.

– We have all these awesome tools, but no one knows how to use them right, nor has the time.

– Knowledge of analysts vs knowledge of the tools. This should be a bigger discussion, because I could argue either way. We do need smarter tools, but I also believe we need the talent to fill in the cracks and play with the packets when needed.

– The people with heavy experience are the ones who are “above” the roles that are in the trenches. This also feeds into the smarter tools/dumber analysts discussion.

– The burning pain behind you.

– Offensive side is very good at sharing; defensive is not.

– Junkyard Wars analogy for defensive guys: time constraints and limited resources. I think this is an interesting analogy to inject into the above bullet about having better tools and using them better. Or better yet, about having smart tools so we can have dumber analysts.

– Forensics vs defense. I just wanted to plop this down, since this is an interesting discussion point that was brought up twice quite briefly.

– No evidence of what works or doesn’t work.

I think there is mileage to be had in injecting the idea that business, IT, and thus security are infinitely varied across all the orgs, businesses, and people out there. This might help explain why our solutions aren’t as sexy as attacks against system A, etc. Things like patching may or may not work in general, but it certainly didn’t work for those who got pwned due to a missing patch. It’s too late to make that point more succinctly and understandably…

managing security is like… (quick thoughts)

Back in high school I spent some extracurricular time building a river model as a Biology project: basically a huge sandbox that could be elevated, with a water pump to circulate/recycle water through it and simulate the effects of a river. To me, security is like building that sandbox and planning it all out, but once you turn that water on, it goes damn well wherever it wants to go, taking paths of least resistance through routes you didn’t even know were possible. (Hence the usefulness of the model!)

People do the same thing in business and technology. Security puts down measures (roadblocks, direction signs, suggestions, speed bumps, rules…), but people who want to do things a certain way will do them that way. The classic example is our current situation of using ports 80/443 as universal tunneling ports. Security is blocking ports? Use the ones they do open. Ops limits email attachments, or security filters CC/PII out of emailed attachments? Use your personal email account. Security/HR has web filtering? Use your smart phone and the guest wireless network to do your personal stuff.

And on, and on. It’s not so much a security problem as it is just a path of least resistance problem. Like driving through a parking lot outside the lines and risking not seeing the car to your side, rather than driving within the ‘rules.’

Which is funny, since we should also value creativity, outside-the-box thinking, innovation, and doing things new ways. Which is easily at odds with such rules.

This is why I’ve come to like Hoff’s blog name: Rational Survivability. If you/your business continue to survive, that’s really the goal, isn’t it? Every now and then, I think he knows what he’s talking about.

some dangers in logging and mssp adoption

Jack Daniel has a blog post about logging and MSSPs, “Wait, what? Someone has to look at those logs?”. He essentially makes one point that I (and others) have made for years about Managed Security Service Providers:

…in spite of some MSSP’s theoretical threat intelligence and perspective advantages, they simply do not understand the businesses they serve well enough to provide enough value to justify their expense.

Looking at an MSSP to do something you don’t already do is one thing, but to replace an internal process (or something you *can* do internally) with an MSSP needs to have the risks weighed out. Too often an MSSP is looked at just to save money or just because the internal team isn’t perfect (an expectation that is bad to have).

An MSSP will have dedicated people with a certain level of expertise and efficiency in monitoring your (and many other client) logs. But…

– you’re just one of many clients (probably)
– they won’t know what’s really important to you vs what’s a throwaway system
– they will either require elevated rights into your systems to troubleshoot/assess
– or they will be so far removed that they burden your team more than normal with all sorts of pings/tickets on things to look at
– the only events they’ll reliably catch are the absolutely most painfully obvious issues, like an IDS or AV screaming about something. Anything subtle or normal-but-bad, like a terminated employee VPNing in the day after they were terminated or a local system account in the DMZ suddenly trying to connect internally, is going to be missed in the noise (see the sketch after this list).
– they’re not going to act on your custom/strange logs
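To make that “normal-but-bad” point concrete, here is a minimal sketch of the kind of correlation an internal team can do but an MSSP usually can’t, because it requires knowing who was just terminated. All of the file names and log formats are assumptions for illustration (an HR export of terminations, plus a simplified VPN auth log); real logs will need real parsing.

```python
import csv
from datetime import datetime

# Hypothetical inputs; formats are assumptions for illustration only.
TERMS_CSV = "terminated.csv"   # columns: username,term_date (YYYY-MM-DD)
VPN_LOG = "vpn_auth.log"       # lines like: 2012-02-03 08:15:02 LOGIN jdoe 203.0.113.7

def load_terminations(path):
    """Map username -> termination date from the HR export."""
    with open(path, newline="") as f:
        return {row["username"]: datetime.strptime(row["term_date"], "%Y-%m-%d")
                for row in csv.DictReader(f)}

def flag_post_term_logins(terms, log_path):
    """Flag VPN logins that happen on or after the user's termination date."""
    hits = []
    with open(log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 4 or parts[2] != "LOGIN":
                continue
            when = datetime.strptime(parts[0], "%Y-%m-%d")
            user = parts[3]
            # The whole trick is context only the internal team has.
            if user in terms and when >= terms[user]:
                hits.append(line.strip())
    return hits

if __name__ == "__main__":
    for hit in flag_post_term_logins(load_terminations(TERMS_CSV), VPN_LOG):
        print("ALERT: post-termination VPN login:", hit)
```

Nothing in there is clever; the value is entirely in the HR context, which is exactly what an outsourced analyst doesn’t have.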

And I can pretty much guarantee that the MSSP will raise false positives and will miss true positives. Just like an internal team. But at least an internal team can learn; the business will probably just scream at the MSSP and either leverage SLA credits or sever the relationship and start the whole bloody thing over with someone else.
One last thing: Having looked at SIEMs/logs for a while now (sort of a part-time duty in my current job), I’m pretty convinced they’re best used to improve knowledge of the environment and to support operations. But for eventing on security issues? They’re only as good as the logs you gather, and the only real benefit is sucking in IDS/AV/mail gateway logs and raising events on those (things that can already raise their own events anyway); a sort of meta-security tool. Or super-custom things you put in, like special filters on your web server logs, or whathaveyou (something like the sketch below). Still, good luck getting that all to gel properly without full-time staff.
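As a for-instance on those super-custom filters, here is a minimal sketch that watches an Apache-style access log for a few app-specific tells. The patterns and paths are made-up assumptions; the value comes entirely from someone who knows the app deciding what is abnormal for it.

```python
import re

# Assumes Apache combined log format; adjust the regex for your format.
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)[^"]*" (\d{3})')

# Hypothetical patterns; only useful because someone who knows the app wrote them.
SUSPICIOUS = [
    re.compile(r"/admin/", re.I),                   # admin paths hit externally
    re.compile(r"\.\./"),                           # path traversal attempts
    re.compile(r"(union\s+select|<script)", re.I),  # crude SQLi/XSS tells
]

def scan(path):
    with open(path) as f:
        for line in f:
            m = LINE_RE.match(line)
            if not m:
                continue
            ip, method, url, status = m.groups()
            if any(p.search(url) for p in SUSPICIOUS):
                print(f"EVENT: {ip} {method} {url} -> {status}")

if __name__ == "__main__":
    scan("access.log")  # hypothetical log path
```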
That said, watching your logs is still one of the best things you can do, but it must be combined with other things such as regular inventory and various vulnerability/change detection (like watching for new local admin accounts or new AD accounts; a sketch of that follows)…the list is endless on what can be useful.
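On the change-detection bit, here is a minimal sketch that baselines local Administrators group membership on Windows by shelling out to `net localgroup`. The baseline file name is hypothetical and the parsing is deliberately crude, since the command’s output varies by OS version.

```python
import subprocess
from pathlib import Path

BASELINE = Path("admins_baseline.txt")  # hypothetical baseline file

def current_admins():
    """Crudely parse `net localgroup Administrators` into a set of names."""
    out = subprocess.run(
        ["net", "localgroup", "Administrators"],
        capture_output=True, text=True, check=True
    ).stdout.splitlines()
    names, in_members = set(), False
    for line in out:
        if line.startswith("----"):          # member names start after the dashes
            in_members = True
            continue
        if in_members and line.strip() and not line.startswith("The command"):
            names.add(line.strip())
    return names

if __name__ == "__main__":
    now = current_admins()
    if BASELINE.exists():
        before = set(BASELINE.read_text().splitlines())
        for added in sorted(now - before):
            print("ALERT: new local admin:", added)
    BASELINE.write_text("\n".join(sorted(now)))
```

Schedule it, or feed the output into the SIEM; either way it’s the kind of cheap, environment-specific check that rounds out log watching.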

the briefest of glances into large corp sec teams

Got passed this excellent quick article from the Seattle Times, “Buddies have ‘awesome’ job trying to crack Boeing security”. The article talks about two of Boeing’s security staff (would knowing who they even are be a security problem?) and, a bit more broadly, about security staff hiring practices in today’s digital landscape.

…said last year at a conference that her company’s most impressive cybersecurity hires have come from outside of traditional recruiting outlets.

why security pros fail – seven problems

Another old CSOOnline article link I’ve had sitting around is, “Why security pros fail (and what to do about it).” Per usual, here are bullet points and my reactions. Yes, this starts out juicy and hot.

Problem #1: Security Is Thought of as a Disabler – Yes, a touchy subject. When you talk to your local law enforcement, do you think they give a shit whether they’re an enabler or getting in the way of criminals? I’ll give a hint: they don’t get evaluated on their customer service report cards. Basically, I hate the lie we tell ourselves about being enablers. We *do* get in the way. Deal with it.

That’s not to say we should say no and say it proudly and fiercely, and I think the author would ultimately agree with me. We should be involved in business decisions and give guidance as necessary. This is as much an operations or leadership issue as security, though.

This is one place compliance is a good thing: We can point to requirements and use them to say no to things. You want to go to the cloud and that provider doesn’t use SSL or other controls to protect data-in-motion? Our requirements say no.

Yes, talking about enabler vs getting in the way is a touchy subject with me. We ultimately need to deal with the fact that security gets in the way by definition. And move on.

Problem #2: Security Offers Only One Solution – I like this bullet point, and it’s a great approach. As security people, we need to give the low-down on what a perfect situation may require, including the risks. But we should also give a dose of practicality and realism in our discussion. Yes, we could segment the shit out of the network, but we know that’s costly in many ways, so here’s what we’d realistically like to see…

Problem #3: Not Enough Humble Pie – Ok, another touchy subject is that of railing against FUD. In a way, railing against FUD *is* FUD, when you really sit down and get philosophical about it. This is another topic we have to accept and move the fuck on about. Yes, some people/vendors do take this to extremes, but please feel free to let them; sometimes we’re expecting them to, since maybe we didn’t know about a particular threat until now. This does underline the need to inject practicality into discussions, though. Sadly, this good bullet point forgot its place and shouldn’t have injected the FUD distraction.

Problem #4: Believing the Customer Is Clueless – I don’t actually get this bullet point at all, and it probably requires context on his sources, their experiences, and what they’re specifically talking about. There are many times when a customer *is* clueless; why else would they bring in outside help? And just because they opt not to listen to certain suggestions doesn’t mean everyone is failing and dumb; just because you told me not to bet on RED for this spin doesn’t mean I am stupid if I do anyway. That’s part of the Big Gamble in security.

Problem 5: Personal Cyber Ethics: Are You An Insider Threat? – Not sure I get this bullet point either, and it sounds like a source had a personal situation with it. Every insider has ethics temptations. We also should define what a security pro is before getting too far into this discussion. Does this include professional consultants or full-disclosure anonymous security “researchers?” I do believe security pros legitimately sit above certain restrictions at work that “normal” users are subjected to. But that is true of any technical or administrative or leadership position. I’m not saying they should be exempt from everything, but this bullet point discussion itself is a bad slippery slope. (A CSO shouldn’t have much more access than any other C-level anyway…)

Problem 6: Career Burnout – This is a great problem to bring up, but the handling of it in this bullet is trash. Security is a high-stress IT job, to be honest, for a variety of reasons (you’ll never win, you always have to educate, you’ll never get exactly what you want, you need to be an expert in many things…). No discussion about this should exclude the idea that maybe the career is not for you, if you’re feeling excessively burnt out. Figure out and pursue what makes you happy.

Problem 7: Career Perspective Stuck in a Box – I like this item, but part of it doesn’t sit well with me. I think we again have to define security pro: are we talking a middle-manager-like policymaker or someone in the trenches? That will dictate a huge difference in making efforts in the 5 preferred skills areas (attitude, relationship, equipping, leadership, technical). I suppose this is in CSOOnline and thus more about CSOs…in which case, I agree.

It might sound like I have an issue with this article, but I really don’t. I like the discussion and bullets, and am just being extra contrarian today.

10 tips for successful pen testing programs

A 2010 article on CSOOnline goes over, “Penetration tests: 10 tips for a successful program.” I’ve had this in my “to-read” hopper for way too long. The author goes over 10 tips on getting started with penetration testing in your organization.

Penetration Test Tip 1: Define Your Goals – Unlike the author, I think the reality *is* that some goals are just to tick a compliance check box. Nonetheless, this bullet point should also include discussion on managing the expectations of a pentest. Are you looking for a 2-day blitz, a vuln scan, or deep dive into custom application/software testing?

Penetration Test Tip 2: Follow the Data – I do agree with this, but sometimes a pentest is less about focusing on the data and more about focusing on access. For instance, if I can attack a system and get admin rights, and then domain admin rights, it really doesn’t matter where your secret data is. I have access to it. But otherwise, yes, this bullet is valid.

Penetration Test Tip 3: Talk to the Business Owners – Can’t really argue with this. Take inventory, get an understanding of software, and align with business, pretty much sums up this bullet with popular buzzphrases. Ok, 2 best practices and a buzzphrase.

Penetration Test Tip 4: Test Against the Risk – When it comes to pentesting, I’m a bit more annoyed when people limit scope based on various factors; in this case, data/application value. A development server with no real data is still a risk if I can get into it, drop a keylogger/priv escalation on it, fuck it up enough to get an admin to log in, and then scrape their creds/hash. In this bullet, I like how the author basically illustrates my point from tip #1: you *can* start out with compliance checklist matching, and then expand from there as you truly value the security.

The rest of the bullet points pretty much stand on their own, and are good. I would add somewhere that pen-testing is an iterative process where you go through rounds of testing, adjust as needed, expand scopes, and dig further. Basically your normal OODA loop.

recent discussion points about security aligning to business

Someone knew what they were doing when they put Nickerson, Ranum, and Hutton on a panel together; strong statements without punches pulled are common from those three (things that need to be said). And I’m not surprised subsequent coverage is getting some mileage.

Rafal Los posted about the talk, and pretty much makes statements one can’t easily argue with (at least *I* can’t, since I agree). Also, Alan Shimel jumped in with, Until You Walk A Mile In Those Shoes. (Note, all of the above links are excellent reads, and I can’t wait to see some video from that panel, as I’m sure it’s chock full of points I can agree with.)

First, some care should be taken with blanket statements that person X sucks and should be fired based on one brief interaction or a single example. You can even take a person who is doing well but has some self-image issues, and suddenly he’ll give up because he was just told he sucks, when in fact he doesn’t. Such a judgment is only fair if one takes at *least* a cursory look at their body of work, the business security posture, and their value to the business. But then we also need to look and see if maybe the *business* just wants a puppet security guy/team/initiative in the first place… I think Phil makes that point in the comments to Rafal’s post. For all I know, we might collectively be doing a bang-up job of security, all things considered. Sure, that 1 server didn’t get patched, but maybe 6 months ago none of them were being patched…

The hot example of Nickerson asking an attendee for his company’s mission statement is a pretty slick bit of trickery. How many people in any given audience will be able to, out of the blue, recite or otherwise explain their company mission statement? Not many. And how many of those mission statements are going to be shit? Enough of them. My company’s mission statement is posted next to me. I could probably fit a daily hour of Doom/Quake-playing into the goals of the mission statement. Anyway, as Phil somewhat mentioned in response to Rafal, I think a CxO should keep mission statements in mind, but other lower peons probably don’t need it quite so close at hand; they should trust that their management is giving good enough direction in their own requests and projects handed down. But if that was a CSO who is normally unperturbed about being put on the spot, one could certainly slap his hand for not knowing the mission statement.

The question came out of railing against security by compliance or security by securing “everything” and such topics. The problem there is probably twofold:

1) Corporate networks are slowly built, and only “recently” has compliance been a driver. Sadly, most networks are probably way too flat. This means if compliance mandate A wants server 23 and all its peers secured to a certain level, then everything in that network has to be as well. Segregation for security purposes is well behind the curve, which compliance exposes by way of the economics of satisfying its needs. Scope, scope, scope…and big scope costs big money by way of resources and time.

2) Business-to-business (B2B) relationships are most easily answered by simple questions such as, “Are you in compliance with A?” That’s far better than a vague and silly 23-page security questionnaire filled in by someone the security and IT teams don’t even know, or each organization trying to explain its security approach from scratch. The businesses are pressured for these easy answers from the top down and sideways. Think of it this way: if someone asks you whether you use perimeter firewalls, and you don’t, you’re going to have to spend a good amount of time talking about border router ACLs and various other technologies (and maybe even defend their value over and over) rather than spending 2 seconds to answer, “Yes.”

I like what Alan Shimel had to say in, Until You Walk A Mile In Those Shoes. We have a lot of security breakers who rail against security defenders, but don’t really *get* the experience of being a long-term defender. (Disclaimer: I’m not directly referring to anyone mentioned above, at all.) Security is already a losing proposition where it *will* fail someday and it *will* fail to be comprehensive. That’s just how it is. And that’s even before the economic, business, and human questions are brought up which prevent certain things from being accomplished or introduce new issues. At least I am enthused that many breakers these days are admitting their job is easy compared to the defenders’.

stand in the gap

Gunnar Peterson has a great post: “Who Manages App Gateways? Who Indeed? Yo La Tengo – Call in Security DevOps”. I’m going to dive into the 2 basic problems Gunnar has touched on, and also move a bit further overall. (Warning: I use slightly different terms than Gunnar, so App Gateway is analogous to WAF to me.)

1. The basic question is: Who manages the app gateways? Or, who manages the web-app firewall? Netops plugs the hardware in and makes sure it talks on the right networks in the right directions. Sysops makes sure it talks in a way that it can interoperate with the systems it needs to, and gets monitored for health. Then what? In my opinion, this is as far as most organizations who check the box for, “We have a WAF!” go. They set it up, get it successfully in the middle, and then no one has the chops to actually tune and configure the thing beyond crossing their fingers, turning on the defaults, and breathing easy when it doesn’t break anything (all the while, it’s not really stopping anything except the barest, most ridiculous attacks, like a 1000-character URL request; see the sketch below).
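To illustrate how thin those defaults are, here is a hypothetical toy version of the kind of rule an untuned WAF effectively runs, written as WSGI middleware: block cartoonish abuse, wave everything else through. Everything in it is a made-up assumption for illustration.

```python
# A toy "WAF rule" as WSGI middleware: this is roughly the level of protection
# a default-configured appliance gives you until someone tunes it.
MAX_URL_LEN = 1000  # hypothetical threshold

class ToyWAF:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        url = environ.get("PATH_INFO", "") + "?" + environ.get("QUERY_STRING", "")
        if len(url) > MAX_URL_LEN:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Blocked: absurdly long URL\n"]
        # Everything subtle and app-specific sails right through here.
        return self.app(environ, start_response)
```

Tuning past that point means encoding knowledge of the specific app (this parameter is always a 10-digit ID, that field is never HTML), which is exactly the cross-functional gap being described.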

I do think Gunnar trivializes this just a bit by comparing these middle-ground tasks to Texas leaguers; this assumes that either side alone could solve the easy problem/catch the ball. I’d suggest that neither side usually *can* easily solve the problem; it actually requires someone in the middle, or someone on one side with many skills. It might be more like two outfielders playing so far away from each other that a ball hit fast right between them requires a huge burst of inhuman speed for either of them to actually get under it.

The point is, there are these gaps that security cares about, but traditional IT in anything but the smallest (and largest?) shops is not staffed to fill.

2. Security needs to be cross-functional experts in a lot of shit. Gunnar totally covered this, so I really don’t need to, but it’s pretty much truth, despite my bias. I tend to illustrate this with the idea that coders are first taught how to copy one string to another. Then they’re taught more modern tricks on how to copy one string to another; the same simple concept, but with a few more tricks added on. Later on, they add the next layer of how to *securely* copy one string to another (if they’re lucky). Getting that far requires extensive knowledge and even experience. And that’s not even getting into being able to understand technologies outside the main one, such as DNS, server configurations, in-transit encryption, and firewalls plus in-transit communication needs.

Security is expected to understand code and the network and the server in such a way that they can configure very advanced technologies such as a WAF. And it’s not just understanding general code, but understanding the *specific apps* being protected.

I personally measure my opinion of developers based on how well they understand those last few things. If they repeatedly just don’t get it, they’re just coders to me. (They might be really good, but they’re one-trick ponies.) But if they understand DNS, server configuration, and firewalls, at least to the general extent of knowing what matters to them, I deeply appreciate it. Those are your potential rock stars. Likewise, I adore sysadmins who understand code and can do cross-functional things like deep-dive into code traces on the servers and tackle memory leaks.

Or either of the above who have security skills and knowledge!

3. I think this lack of cross-functional understanding is another pressure on why the “cloud” (when it is just external hosting rebranded) has gained momentum. Developers have long made overtures to get administrative access to the servers that run their code, and hopefully sysadmins have rebuffed those overtures and done the server work on behalf of the developers. Have you ever walked up to a server that for the last 2 years has been adminned by a developer? Chances are pretty damn good that it looks about as good as a server adminned by a 14-year-old. That’s not a dig, really. If I tried to add code to your app, I’d fuck shit up pretty awful as well, and my code would be heinous. (This could be twisted around to say neither side will get better without being allowed to get experience with the tasks, but that sort of cross-functional short-term inefficiency does not fit anything beyond small business.)

Enter hosting and the cloud, where developers can drive those projects because it involves code, which netadmins/sysadmins don’t quite understand, and which might result in devs having administrative duties on these new servers. Which sounds great, short term…

mubix reveals secrets on beating the red team

Mubix has posted up a great set of slides on how to win the CCDC, from the perspective of a red teamer. All of it makes sense, and it’s great to get it down in a nice, concise preso, especially the tips on team composition and duties. Outside the competition, this makes a great exercise in quick response, or a security strategy-on-steroids for someone heading into a new org or responding to attacks-in-progress.

As usual, a few items to highlight:

1. You’re not important enough to drop an 0-day on. Most likely your company isn’t either, but at least at a competition like this, there’s no upside to revealing otherwise unknown exploits. Sure, red teamers are going to be ninjas in some tools and definitely have some of their own scripts and pre-packaged payloads, backdoors, and obfuscation tools, but none of that is so super secret that you don’t have a chance to anticipate or prepare for it. As Mubix says later on, know the red team tools.

2. Monitor firewall outbound connections. It’s one (wasteful) thing to look at the noise being bounced off the outside interface on your firewalls, but seeing bad things outbound is a huge giveaway. You’re not doing detection if you’re not somehow monitoring outbound (allows and denies).
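Here is a minimal sketch of what that outbound monitoring might look like against iptables-style log lines. The internal range, expected ports, and log format are all assumptions for illustration; tune them to your environment.

```python
import re

# Assumes iptables-style log lines, e.g.:
# Feb  3 08:15:02 fw kernel: OUT-LOG: IN=eth1 OUT=eth0 SRC=10.0.5.12 DST=198.51.100.9 PROTO=TCP DPT=4444
FIELD_RE = re.compile(r"(SRC|DST|DPT)=(\S+)")

INTERNAL_PREFIX = "10.0."            # hypothetical internal address range
EXPECTED_DPTS = {25, 53, 80, 443}    # ports we expect to see outbound

def scan(path):
    with open(path) as f:
        for line in f:
            fields = dict(FIELD_RE.findall(line))
            src, dst = fields.get("SRC", ""), fields.get("DST", "")
            dpt = int(fields.get("DPT") or 0)
            # An internal host talking out on an unexpected port is worth a look,
            # whether the firewall allowed it or denied it.
            if src.startswith(INTERNAL_PREFIX) and not dst.startswith(INTERNAL_PREFIX) \
                    and dpt not in EXPECTED_DPTS:
                print("REVIEW outbound:", line.strip())

if __name__ == "__main__":
    scan("firewall.log")  # hypothetical log path
```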

3. You still gotta know your own tools and systems. As Mubix repeatedly mentions, you need to know your baselines on the systems (normal operational users, processes, files) and be familiar with the systems and how to work quickly on them, configure them, troubleshoot, interpret the logs, and automate little pieces here and there with your own scripting skills.
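On knowing baselines, even something as simple as diffing the running process list against a known-good snapshot goes a long way under competition time pressure. A minimal sketch, assuming a Unix-ish box with `ps` and a hypothetical baseline file:

```python
import subprocess
from pathlib import Path

BASELINE = Path("process_baseline.txt")  # hypothetical known-good snapshot

def running():
    """Set of process command names via plain `ps` (Unix-ish assumption)."""
    out = subprocess.run(["ps", "-eo", "comm="], capture_output=True, text=True)
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

if __name__ == "__main__":
    now = running()
    if BASELINE.exists():
        before = set(BASELINE.read_text().splitlines())
        for proc in sorted(now - before):
            print("NEW PROCESS since baseline:", proc)
    else:
        BASELINE.write_text("\n".join(sorted(now)))
        print("Baseline captured:", len(now), "process names")
```

The same pattern (snapshot, diff, eyeball the delta) works for users, services, scheduled tasks, listening ports, and so on.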

lessons from another wordpress breach

Hoff also has two posts about a recent incident his blog suffered: “Why Steeling Your Security Is Less Stainless and More Irony…” and “A Funny Thing Happened On My Way To Malware Removal….”

This is that perfect example where sharing information helps people. You get an idea of what failed, what mistakes were made, what human behaviors help or don’t help, how an attack actually worked, etc.

Normally I would bullet through some of the points, but there’s nothing terribly new here, and Hoff’s posts are worth the time to read.