wafs require a wide skill set

This is going to be preaching to the choir, but I don’t get to link enough to Hoff these days (my head is not up in the cloud, unless you count my 90% virtualized environment), and he wrote an easy-to-agree-with post about WAFs:

How many infrastructure security folks do you know that are experts in protecting, monitoring and managing MBeans, JMS/JMX messaging and APIs? More specifically, how many shops do you know that have WAFs deployed (in-line, actively protecting applications not passively monitoring) that did not in some way blow up every app they sit in front of as well as add potentially significant performance degradation due to SSL/TLS termination?

I’d add to this even further. What team should be involved in the WAFs? Developers. But which team gets this duty shouldered onto them, because it’s so easy to tie a WAF onto the chokepoint appliance in the network? The network team. Maybe even the web systems team. Which team has no idea how to tune a WAF? All of the above. Why? Because a WAF spans layers that none of those teams are wholly familiar with, from data to app to protocols to network. Maybe QA should be the answer, but I don’t get the feeling that many shops value their QA that way…

Of course, Hoff’s last statement is really a business culture issue, and unfortunately that messy reality is the only way anyone has a chance to learn how to get any value out of a WAF. Sure, put it in a test environment, but how many developers, infrastructure, or security guys have the time to interrupt the others as they bounce around in the WAF? I don’t know about all sysadmins out there, but even “test” environments for developers are “production” in my eyes. If I make a change and bring a test server down for an hour, I get developer managers giving me the hairy eyeball…

(Side note: Want to gain some real good experience? Adopt a real, honest, working WAF, get permission to implement and toy with it, and go to town. Figure out what kills apps, figure out what attacks look like to the app you’re protecting, and figure out how to block them. Sure, this might mean a whole “sub”-project in learning attacks on web apps, but all of this is great practice to gain some experience in an area I think is still sorely lacking in real experts beyond pointing WebInspect at a URL. You can do this at home, but I feel like you’re bounded by your knowledge and use-cases, and not all the things that happen in day-to-day dev/qa/network/internet traffic. But if you *do* do it at home, start with all those vulnerable-web-apps-in-a-box setups and learn the attacks. Then put a WAF in front and block said attacks…hmm maybe I should do that again!)
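To make that tinkering concrete, here’s a minimal sketch of the workflow I mean. It’s Python only as a stand-in for whatever WAF you actually adopt (ModSecurity or otherwise), and the rule names, patterns, and traffic are all made up. The process is the point: run rules in log-only mode against real traffic, review what would have been blocked (including any legit requests that would have broken), and only then flip on blocking.

```python
import re

# Hypothetical mini-WAF. Rule names, patterns, and traffic are illustrative;
# real WAF rule languages (e.g. ModSecurity's) are far richer than this.
RULES = [
    ("sqli-basic", re.compile(r"(?i)union\s+select|'\s*or\s+1\s*=\s*1")),
    ("xss-basic",  re.compile(r"(?i)<script\b|javascript:")),
    ("traversal",  re.compile(r"\.\./")),
]

def inspect(request_uri, blocking=False):
    """Return (action, rule). Start in log-only mode (blocking=False),
    review the hits against known-good traffic, then enable blocking."""
    for name, pattern in RULES:
        if pattern.search(request_uri):
            return ("BLOCK" if blocking else "LOG-ONLY", name)
    return ("PASS", None)

# Tuning pass: replay captured traffic in log-only mode and see what trips.
sample_traffic = [
    "/products?id=42",                      # legit
    "/products?id=42' OR 1=1 --",           # classic SQLi probe
    "/search?q=<script>alert(1)</script>",  # classic XSS probe
    "/reports?file=../../etc/passwd",       # path traversal probe
]
for uri in sample_traffic:
    print(inspect(uri), uri)
```

If the log-only pass flags traffic your app legitimately generates, that’s a rule to tune before it ever gets the chance to blow up the app in blocking mode.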

re: truth and disinformation

The venerable Rothman has started a fire (see what I did there?) talking about truth and (dis)information.

I normally don’t dwell on hacker or criminal or even hacktivist groups very much (I prefer to keep my head down), but I’m not surprised at all about the current state of affairs that Rothman speaks of, for two very broad reasons.

First, there’s distrust amongst criminals. Let’s be clear: there are two broad types of hacker groups, those that break laws and those that are really just nuisances. Sort of like home burglars vs. train car graffiti vandals. Once you start breaking laws, you get into a whole new game where you are collectively wanted people with penalties over your heads. And not everyone will have the same fortitude and acceptance of those risks. This makes actual criminal groups very unstable and distrustful of each other. You never know when someone is LEO, or has been caught and made a deal to be an informant, or if you’ll just plain overstay your welcome and become another loose end to tie off. You also never know when you’ll be screwed in some way or other.

I would venture to say once a group breaks laws, they’ve crossed that grey ethical line, and escalating from there isn’t so hard. Somewhat like breaking into your first business or beating that first person to a bloody pulp; doing it a second time is far easier. As is escalating. It only takes one splinter group (cell? wut?) or even person to escalate things for the whole group, which means even more distrust.

Second, it would be folly for law enforcement and even governments to *not* have their undercover fingers in these sorts of groups, for a variety of reasons: sow discord, find criminals, discover incidents no one’s reported. But also to gain insight into how these groups work, what their tech and methods are, and to gain assets. The latter goes for local agencies as well as foreign ones, as they attempt to gain talent and bodies and knowledge. Even people like Brian Krebs are involved as observers…

I would even argue that large corporations may have some interest in keeping their noses in the underworld like this, on a purely observational, non-active level. Then again, I doubt many orgs even get that far, as securing their own networks and people is tough enough. (I still have these moments where I think of the worlds painted in Back to the Future 2 or the Shadowrun universe, where corporations are a dominant force, and they are quite involved in the underworld.)

Now, do I think that someone like NATO has informants at all levels? I’d guess not directly; maybe by proxy, through cooperation with member-nation agencies, and even then counting those agencies’ tenuous informants.

What I do guess is there are plenty of less-skilled persons in these hacker groups that make for great headlines when they get pinched en masse because they’re kids sitting at home making poor security decisions and being traced easily. The more popular they are, the more hangers-on there will be, and collectively the less safe they’ll be.

embrace the value, any value, you can find

(I’ve sat on this for half a day, but wanted to post it since there isn’t enough blog-to-blog talk these days. I may be wrong, I may have good points. Some people learn to shut their mouth over the years, but others of us are learning to actually speak up! So, if nothing else, this is therapy for me! Oh, and I wouldn’t even bring this post up if I didn’t respect Jay and his blog and his thoughts.)

Go read the post “Yay! We Have Value Now!” by Jay Jacobs. Then look back over here. Let’s put on our diving suit and…err…dive in!

I really should say I hate the idea of saying, “I told ya so.” It’s insulting, demeaning, whiny, and so on. But that doesn’t mean I like the situation that leads to thinking, “I told ya so,” and then quietly effecting some change without saying as much. Truly, we will never get anywhere if we don’t get business leaders to say, “We were wrong,” or “We need guidance.” Those amount to the same thing as, “I told ya so,” but a little more positive, if you ask me. But if leaders aren’t ever going to admit this, then we’re not going to get a chance to be better, so I’d say let ’em fall over.

Besides, you could do security action _____, and I’ll always be able to someday say, “I told ya so!” It’s totally an attitude thing. Moving on! 🙂

(Aside: I think the whole topic of ‘secure enough’ or ‘there is no state of security’ and such is akin to that old Usenet idea (Godwin’s law) of every argument devolving into a Hitler analogy at some point. I find there is not much we can discuss in security without eventually hitting that point, either implied or explicit.)

RE: Problem #1: It assumes there is some golden level of “secure enough” that everyone should aspire to. – I personally don’t see that assumption. Pointing out LulzSec popping other people simply means those businesses had some deficiencies and they were attacked by some criminals. Criminals out for a good time and laughs. Some of those guys certainly have some skills, but lord help those companies who may have had even more talented and insidious attackers finding and leveraging those weaknesses first. (Then again, maybe a business will prefer to be attacked by their competitors as opposed to hooligans out for laughs?) Still, I’m not sure what Jay means when he turns this into a loss of credibility.

I’d propose there are two broad types of security engagements. The first is where a business leader wants reasonable security advice. This almost always calls for some metrics, defensibility, economics, and business process/culture consideration. In other words: what should we, as this particular business, be doing to be secure? The second is where a business leader wants to know what security improvements could be made. Some of them might end up not being reasonable, but that’s for the business leader to decide. The latter probably won’t ever lose credibility in the face of public digital hacks. But they will walk out of board rooms rejected on a more regular basis.

RE: Problem #2: Implies that security people know the business better than the business leaders. – Business leaders are talented; I’ll make no effort to dispel that blanket belief. However, if you want to know the real value in a business or how the business is doing, you talk to the accountants. If you want to know business process, most likely you’ll talk to the IT teams. If you want to know the digital risks, you talk to the security teams. The trend here is that while business leaders (at the risk of not defining “business leaders”) can be very good leaders, managers, entrepreneurs, salespeople, and strategic thinkers, that does not necessarily mean they have any grasp on digital risks, what those might mean, or how their important assets (physical or digital) are being protected. They might not even know about some of the most damaging assets, such as the database storing CC/PII, or that their secret formula is on a test server somewhere. Is that bad? No. It just means they’re not omniscient.

Does that mean IT or accountants or security know the business? Nope. But that doesn’t mean they know the business less than business leaders, either. Mostly, I agree with Jay. I think security gets a little too loud with the “I have your data, your business is over!” ranting. It’s possible the business owner knows the cost of security, has a good idea of the cost of any realized risks, and has actually chosen a spot on the balancing beam that is security. Maybe that means there is risk left open, and some attacker leverages that opening and pops data off into the public world. Sometimes, the end result is, “So what?” Some people may lose their jobs, media might make things fun for a while, and you might even get to talk in front of a Congressional committee that has some ideas on making some digital regulations (how’s that for an attack effecting change?). But it truly is just a part of doing business: an incident for leadership and PR to handle and move on through. Now, did Sony understand that hosting a porous web application might, if attacked, result in not only losing the customer data it houses but also extended periods of downtime and public smearing? Perhaps.

Still, the end point is that this isn’t about knowing the business more than the business leaders; it’s about knowing the digital security posture more. Refer back to my two types of security engagements above. It’s hard to tell a security geek to stop being a security geek for a bit and be reasonable. Not because of a character flaw, but because of our passion and desire and (often) knowledge. Now, I bet Jay would agree that the security pro who can keep that passion but also temper it into “reasonable security” is golden!

RE: Problem #3: This won’t change most people’s opinion of the role of corporate information security. – Diving into this point is not going to be fun, and Jay probably knows it, since he admitted the problems with it himself. I’ll try not to linger…

Statistics just flat out BEG to be manipulated and presented in strange ways that may paint things in an opposite light. How many of those 200 million domain names are even web sites? How many have significant enough value behind them to ever even begin to have the potential to be the face of a “large breach?” Not 200 million. How many were even public? And so on. (Of course, doesn’t this mean there is such a thing as “good enough” security? Oh shit, I did it…)

Strangely, Jay sort of makes the opposite point to the one he set out to make: “We need more tangible proof to really believe in hard-to-fix things like global warming: we fix broken stuff when the pain of not fixing something hurts more than fixing something.” Wait, what? Watching Sony’s network get made into Swiss cheese isn’t tangible proof enough? I’m being an ass there, since I think Jay means it needs to happen to us directly.

Here’s my favorite security analogy (cue the emotional language!). If you hear the stories and see the shaking and tears of your neighbors who have been victims in a string of home break-ins and thefts, will that have any bearing on your own home security posture? Even if just for the short term?

On the other side of the coin, if you haven’t heard a lick about theft or suspicious persons or strange things going on around your neighborhood, I wonder how many residents will see their security posture loosen over time through complacency.

One might argue that perhaps there is crime there, and they aren’t targets, and even zero effort/time/money spent on security would have the same results. Perhaps! That’s where I get into the whole Security Gamble issue. You might have zero security, and be a victim. You might have 95 units of security, and be a victim.

But I would say that in the face of a rash of incidents in your neighborhood, you’ll take at least a superficial look at your own posture, and maybe raise just the right questions for a security pro to actually effect some positive difference.

Jay’s core point, though, is still true, as any security or even IT ops professional can attest to: shit usually doesn’t get fixed until there’s a problem. That includes flaky servers, poor code, insecure practices, database hashing, vuln scans, and so on. But I’d still say public scapegoats do have a positive impact.

RE: Problem #4: Companies are as insecure as they can be (hat tip to Marcus Ranum who I believe said this about the internet). To restate that, we’re not broken enough to change. – I honestly don’t have much I can say about Point 4. 🙂 I pretty much agree, but it’s also general enough to be somewhat unarguable. The one thing I can say: I think Sony is making changes due to these incidents. *shrug* Just sayin’…

By the way, I am entirely neutral on this whole LulzSec thing; but I will certainly use any opportunity I can get to promote security initiatives in people or organizations that I may influence. Yeah, pointing out the inevitable insecurities in others is about as evil and head-shaking as any other FUD, but security is ultimately what we’re asked to do.

steal like an artist preso

In a twist of irony, I’m also stealing this preso link from Gunnar’s interview with Brian Chess: How To Steal Like An Artist, by Austin Kleon. Not a ton of this is new, but it’s a fresh way to look at things, and almost every item pertains to life/career outside of art.

Just in case the site ever dies, I thought I’d also reproduce the raw list of steps here. But this is certainly no substitute for checking out the original post and probably the book. Anything in [square brackets] is me adding to the item, either my own thoughts or the author’s subitems.

1. Steal like an artist. [So many subitems!]
2. Don’t wait to know who you are to make things. [Just do it!]
3. Write the book you want to read.
4. Use your hands.
5. Side projects and hobbies are important.
6. The secret: do good work and put it where people can see it. [Give away your secrets!]
7. Geography is no longer our master.
8. Be nice. The world is a small town. [Matches a core philosophy of mine.]
9. Be boring. It’s the only way to get work done.
10. Creativity is subtraction. [Concentrate on what’s important]

if it doesn’t sting sometimes, you’re not doing it right

I don’t yet have a full-on crush on Gunnar, but a bit in a post/interview of his reminded me of a concept I like to drive home. From the third part of his Brian Chess interview, with my emphasis:

Ever wonder why so many programmers are so bad at security? Part of the problem is that most of them don’t know they’re bad. Generally speaking, people are bad at assessing their own strengths and weaknesses (read this). That means you need to seek objective measures of your work. If it doesn’t sting sometimes, you’re not doing it right.

In the (too distant) past I’ve done some weight-lifting, and very recently I’ve taken up trying to get into the running habit. Even in those realms, I darn well know that unless it hurts (in a good way), you’re not making progress. This is the same in self-service car repair. This is the same in learning a new game. This is the same in IT ops (we learn the most when shit is broken). This is the same in security.

I have this “in-progress” unpublished post (for like the past 2 years!) that is just this beefy list of security “laws” or other rules for those of us in security. One of the most recent ones I added was something similar:

Doing security right means finding that happy medium where you are bouncing in between transparent security and breaking things. For instance, tuning the perfect firewall rule means tightening it down until it breaks something, then loosening it just enough to make it work again.
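To illustrate that loop with the firewall example (a purely hypothetical rule and flows, nothing vendor-specific): replay known-good traffic from your logs against a candidate rule, tighten until something legitimate breaks, then loosen only until the breakage stops.

```python
import ipaddress

# Hypothetical rule: allow HTTPS only from a deliberately over-tightened
# source network. The flows below stand in for known-good traffic from logs.
allowed_src = ipaddress.ip_network("10.1.2.0/28")
legit_sources = ["10.1.2.5", "10.1.2.200"]

# "Tighten until it breaks": find legit traffic the tight rule would drop.
broken = [s for s in legit_sources
          if ipaddress.ip_address(s) not in allowed_src]
print("would break:", broken)        # ['10.1.2.200']

# "Loosen just enough": widen the prefix only until the breakage stops,
# rather than jumping straight back to a lazy 10.0.0.0/8.
while broken:
    allowed_src = allowed_src.supernet()
    broken = [s for s in legit_sources
              if ipaddress.ip_address(s) not in allowed_src]
print("loosened to:", allowed_src)   # 10.1.2.0/24: tight, but working
```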

Brian Chess wasn’t necessarily meaning it strictly that way, but we all certainly need to be OK with having something sting now and then. That is when we get better and learn, and it also reassures us that we’re not just sitting in some ignorant funk where everything is wrong and we just don’t know it. “Wow, our logs sure are clean,” glosses over the fact that your logging has been broken for weeks.

patco vs ocean bank vs reasonable security

Brian Krebs has an article up on the case of Patco vs. Ocean Bank. This case could have important industry ramifications, as the key point of contention is what technically constitutes “good enough” security on the bank’s side.

I don’t suggest reading too many of the comments; this is a delicate and far-from-clear situation, which many commenters don’t seem to grasp very well. Some of the angst centers on whether the bank really had two-factor authentication, and some on the possibly outdated guidance from the FFIEC.

Side note 1: I’ve not read the actual case file, but from Brian’s article, I’d say Ocean Bank isn’t using two-factor authentication.

Side note 2: Does always asking security questions on every transaction reduce the security value? Actually, sort of, when your attackers are employing keyloggers and you normally don’t have transfers that trigger those questions. Then again, any attacker who runs into those barriers will just keep lowering their transfer amount until they’re under the threshold. Hopefully that would trigger some fraud alerts…
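For what it’s worth, here’s a toy sketch of the kind of fraud alert I’d hope for; the challenge threshold and numbers are entirely made up. The idea: a cluster of transfers sitting just under the challenge threshold is itself a signal.

```python
# Hypothetical numbers: the bank challenges transfers >= $10,000, so an
# attacker riding a keylogged session stays just under it. A velocity
# check on "just under the threshold" activity is one way to catch that.
CHALLENGE_THRESHOLD = 10_000
NEAR_MARGIN = 0.10       # within 10% below the threshold counts as "near"
MAX_NEAR_PER_DAY = 2     # more than this per day smells like structuring

def near_threshold(amount):
    return CHALLENGE_THRESHOLD * (1 - NEAR_MARGIN) <= amount < CHALLENGE_THRESHOLD

def flag_structuring(transfers_today):
    """transfers_today: list of transfer amounts for one account, one day."""
    near = [a for a in transfers_today if near_threshold(a)]
    return len(near) > MAX_NEAR_PER_DAY

print(flag_structuring([9_900, 9_750, 9_800, 9_950]))  # True: raise an alert
print(flag_structuring([120, 9_900, 45]))              # False
```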

Side note 3: At some point consumers (businesses) need to apply their own diligence and do their banking on trusted systems. If you hire a courier or some other proxy to run to the bank and make transfers for you, and that person skips town with extra money because they inflated the transfer amount and sent it to themselves, do you blame the bank or your own hiring practices and trust? In this way, computers are a sort of proxy; granted, a proxy that answers to anyone with the right handshake, so to speak…

Side note 3a: Unlike the “simple” maintenance and safety and security of a car or other vehicle, the care and safety and security of a computer system or network is still going to be far above the head of most consumers and workers. Telling people they need to put forth their own effort in maintaining a trusted computing platform is often going to be met with tears of anguish and outrage…as they then turn their eyes to app/OS vendors and their security track records…or to the government’s lack of “internet jurisdiction” in keeping foreign attackers out or at least under threat of arrest…and on and on.

Side note 3b: All of this ends up raising the question of what is reasonable in a highly technical, globally connected digital world. I’m not sure anyone will ever be happy with where the decisions fall in such a discussion.

the enemy knows the system, and the allies do not

Go read Gunnar’s quick piece (and the comment) about Jay Jacobs’s insight on Shannon’s maxim (can I make this sentence more awkward?): The enemy knows our systems, but the good guys don’t.

Even looking at it from the network perspective, the enemy knows your firewall rules, yet so many internal folks do not. It sucks to look at a firewall and ask why rule #267 is present, only to have no one able to answer it. (One cheap countermeasure is sketched at the end of this post.)

Or to have a developer look at the security person who wants security, while the developer has no idea, and no one else to ask, how to fit that in without potentially breaking everything else. As Jay says, “…people aren’t motivated to evaluate all the options and pick the best, they are motivated to pick the first option that works and move on.” (Coders/developers are notorious for this, but so are sysadmins and users!)

Essentially, security is often covertly treated as the expert in…everything internal. That is a tough requirement to ever meet. Really, the organization needs to know its own stuff intimately.

Before the enemy does. This is why I still consider pen-testing activities valuable: they often expose exactly what an attacker is learning that the organization hasn’t.

As Marcus essentially says in the comment on the linked article, I’m sure the revolving door of (questionably skilled) outsourced and contracted IT doesn’t help at all.
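Back to that rule #267 problem: one cheap countermeasure is forcing every rule to carry its own provenance. Here’s a toy sketch; the rule format is made up, but most firewalls can export something comparable for a review script to chew on.

```python
# Hypothetical exported ruleset. The point: every rule should be able to
# answer "why is this here" and "who owns it."
rules = [
    {"id": 12,  "allow": "10.0.5.0/24 -> dmz:443", "owner": "web team",
     "reason": "storefront HTTPS"},
    {"id": 267, "allow": "any -> 10.0.9.17:1433",  "owner": "",
     "reason": ""},   # the infamous rule nobody can explain
]

for rule in rules:
    missing = [k for k in ("owner", "reason") if not rule[k]]
    if missing:
        print(f"rule #{rule['id']} has no {' or '.join(missing)}: "
              "candidate for review or removal")
```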

and there is still craving for rsa hack details

Lockheed Martin recently suffered a hacking incident. In the days that followed, the NY Times reported that the attack was indeed linked directly to the earlier RSA hack, in which still-unidentified information was stolen from RSA. CNET has posted more information and links, and Wired has a blurb about L3.

As I mentioned on Twitter, how much better would we all be if RSA had divulged full details to the public or affected parties? Were they just going to wait and hope nothing came of whatever was stolen from them?

Of course, with something like this the worst should be assumed, but that’s not a great strategy to tell your boss or use to formulate your budgets and risk postures. No one assumes the worst; if they (or we) did, we’d have far better security initiatives…

I understand they are certainly fixing whatever was broken and replacing what needs to be replaced, but the silence is still irresponsible in my book.

management vs technical ramblings

Jarrod Loidl has an interesting discussion on the topic of management vs. technical careers. This is definitely something to keep in mind as a career moves forward, and I think he ends up hitting most of the milestone points in such a thought process. It’s a long post, but it keeps firing on all cylinders right to the end.

I really like the ending tandem points of, “do what you love,” and (in a Wolverine voice), “be the best at what you do.” Combine that with, “don’t be an ass,” and you really have a simple guide to work and life.

If I were to look at my own lot, I’d say it certainly is hard to keep current with the skillsets. I remember starting out my career around Windows XP, and I still feel like I know it inside and out. Windows Vista/7? I fully doubt I’ll ever be as intimate (then again, I don’t do desktop support right now). On the managerial side, I feel like I have excellent organization, attention to detail, a high degree of problem-solving/troubleshooting skill, and the ability to make accurate, quick decisions (backed by confidence in those skills) when I need to get things done. My downside is that I’m not entirely a people person. Oh, once I get going, I’m fine, but it really takes significant effort and time for me to find my voice socially in a given group, as any introvert is likely to echo.

That said, at this point in my life and career, I could probably swing management, but I get far more enjoyment out of the technical side of the equation, for a variety of reasons that I won’t dump out here quite yet.* Management is one of those things I accept I’ll do someday, simply because of the decision-making support and analysis skills I have, but I have the luxury of letting that “someday” not be tomorrow. Perhaps if I snag some security consulting gigs that would be enough… 🙂

The end thought is one Jarrod mentioned: At least spend the time to do this reflection on who you are, what you are, what makes you happy, why it makes you happy, and so on. Too many people never ask these introspective questions, and they should.

* Updated to add this: This isn’t to say I wouldn’t actually find myself happier in the right managerial position. It’s hard to tell since I’ve not been in a situation other than a team lead/senior sort of role. While I might not look at managerial want ads, that’s not to say I’d shy away from the right one whose doors opened for me.

a monitoring lesson from fraud correlation

While this article on the Michaels breach is nothing special, I did like the very opening paragraph:

…a sign that strong transaction monitoring and behavioral analytics are the best ways to curb growing card-fraud schemes.

Remove the word “best,” and I really like the application of a paradigm like this in many aspects of digital security.

Of course, none of that level of monitoring and analysis is new in concept, but organizations still struggle both to realize this approach and to do it effectively with a blend of technology and talent.
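As a toy illustration of the behavioral half (made-up data, and deliberately naive; real systems baseline merchant, geography, time of day, velocity, and much more): baseline each card’s typical spend and flag transactions far outside it.

```python
import statistics

# Hypothetical per-card transaction history (amounts in dollars).
history = [42.10, 18.50, 67.00, 23.75, 55.20, 31.40, 48.90, 12.99]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def suspicious(amount, z_cutoff=3.0):
    """Flag amounts more than z_cutoff standard deviations from baseline."""
    return abs(amount - mean) / stdev > z_cutoff

print(suspicious(61.00))   # False: within this card's normal spend
print(suspicious(950.00))  # True: worth a closer look (or a customer call)
```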

It’s important to note that without customer reports, even analysis can’t tell a bad transaction from a good one….

wild leaps of logic from tech journalists

Cringely has a strange article which continues the RSA SecurID attack mystery: InsecureID: No more secrets?. I can’t say I’ve ever read Cringely before, so maybe he’s just some tech commentator with no real insight here, other than a wide following and sensational, wild speculations… (After writing this, scanning his recent articles pretty much shows me he’s just a tech blogger, and that’s it. Yes, I mean it when I say that’s “it.” And yes, I’m being ornery today and particularly spiteful in my dislike of tech commentators dipping a crotchety toe into deeper discussions than they’re suited for.)

It seems likely that whoever hacked the RSA network got the algorithm for the current tokens and then managed to get a key-logger installed on one or more computers used to access the intranet at this company.

Wow, that’s quite the leap in logic (on multiple fronts), especially since RSA hasn’t revealed what was pilfered from their network. Common speculation is that the most likely divulged piece is the master list mapping token seeds to the organizations those tokens were issued to (probably keyed by serial number).

How would a keylogger assist in this? Well, first, a keylogger alone could be enough to divulge login credentials, although any captured credentials are quite ephemeral when a SecurID token is in use. Second, it could reveal usernames and any static PINs used. I assume the PIN is the “second password” mentioned in the article. *If* (and that’s a big if) the attacker is the same one who *may* have the seed list and algorithm, that attacker could theoretically match up a user and their fob based on keylogged information.
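For the curious, here’s roughly the shape of such a token, sketched TOTP-style (a la RFC 6238). To be clear, this is *not* RSA’s algorithm, which is proprietary; it just shows why a stolen seed plus a keylogged static PIN is game over: the “something you have” boils down to a secret number plus a clock.

```python
import hashlib, hmac, struct, time

def token_code(seed: bytes, t=None, step=60, digits=6):
    """TOTP-style code (RFC 6238 flavor). NOT RSA's proprietary algorithm,
    but the shape is the same: code = f(secret seed, clock). Anyone who
    holds the seed can compute every code the fob will ever display."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    binary = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(binary % 10 ** digits).zfill(digits)

seed = b"hypothetical-per-token-seed"  # what a leaked seed list would hold
print(token_code(seed))                # what the fob displays right now...
print(token_code(seed))                # ...reproducible by any seed holder
```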

Does this mean “admin files have probably been compromised?” No; that’s an even bigger leap in logic. Possible, sure. But only with the correct access and/or expanded control inside the network. Hell, I’m not even sure Cringely knows what he means by “admin files.”

Of bigger concern is how a keylogger got installed on such a system long enough to cause this issue. Granted, something was detected (though I suspect it was *after* a theft or attempted VPN connection), but being able to spot such incidents on the endpoints or in the network itself should be a big priority.

microsoft’s waca v2.0 released: web/app/sql scanner

Microsoft has recently released their Microsoft Web Application Configuration Analyzer v2.0 tool. This is such a straightforward tool to use, with rather clear checks and fixes, that it’s really not acceptable to *not* run something like this, especially if you run Microsoft IIS web servers or SQL instances.

The tool has a nice array of checks when pointed at an IIS box, and even does decent surface checks against SQL. While this tool does include “web app” in the name, I don’t think it goes much beyond inspecting a site’s web.config file on that front. It also requires Microsoft .NET 4.0 on the system you install the tool on, and predictably needs admin rights on any target systems it scans. If you’re curious about any checks, they’re pretty clearly spelled out. Also, if you want to suppress any checks because they don’t apply, you can do so. The report then mentions the presence of suppressions (yay!), and you can even take off the suppressions after the fact, since the tool still does the checks but just doesn’t include them in the end tallies.

This does make a great companion scan tool to add to your toolbelt for appropriate systems, even if it has a herky-jerky interface.

As a sort of cautionary piece of advice: I wouldn’t be totally surprised if some organizations request that this tool be run by potential vendors/service providers whose systems meet the tool’s criteria, which means you hopefully will have run it before such a request! It’s much more palatable to request something like this as part of an initial security/fit checklist when it is an official Microsoft tool. Just sayin’…

some security practice hypotheses

I’m not sure if I jotted these notes down here or not, but wanted to move these from a napkin to something more permanent.

What is the hardest part of security? My thought: Telling someone they’re doing it wrong when they don’t know how to do it right, and you can’t explain it properly. The more technical, the worse it is?

Two examples: First, someone makes a request that is just kinda dumb and gets denied. They come back with, “Why?” And you have to figure out why the value of saying no is higher than the value of just doing it, or what it would cost to accomplish the request while maintaining security and administrative efficiency. (i.e. You want *what* installed on the web server?!) This can be highly frustrating in a non-rigid environment. It’s also the source of quite a lot of security bullshitting.

Second, a codemonkey writes poor code and you point it out. Codemonkey asks how it should be done. If you’re going to point it out, I’d really kinda hope for some specific guidance appropriate to the tech level of your audience. This brings up the pseudo-rhetorical question: Should you be pointing out poor code if you don’t know how to suggest a fix? (Answer: depends. On one hand, don’t be a dick. On the other, anyone should be able to point out security issues; otherwise, most issues would never get raised at all. It’s extremely nice when someone *can* help those with questions, though, with actionable answers beyond just “go read OWASP.”)
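For instance, here’s the sort of side-by-side I mean, using a deliberately generic example (not from any particular codebase): “your query is injectable” lands much better when it’s paired with the two-line fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # attacker-controlled input

# Poor code: string concatenation lets the input rewrite the query.
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + name + "'").fetchall()
print(rows)  # leaks a row despite the bogus name

# The actionable fix to hand the developer: parameterize the query.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # []
```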

And here’s a hypothesis: You’re not doing security if you’re not breaking things, i.e., pushing boundaries. Follow-up: The pursuit of security breaks things, unless you have expert knowledge and experience.