stand in the gap

Gunnar Peterson has a great post: “Who Manages App Gateways? Who Indeed? Yo La Tengo – Call in Security DevOps”. I’m going to dive into the two basic problems Gunnar touches on, and also push a bit further overall. (Warning: I use slightly different terms than Gunnar does, so to me an App Gateway is analogous to a WAF.)

1. The basic question is: Who manages the app gateways? Or, who manages the Web-App Firewall? Netops plugs the hardware in and makes sure it talks on the right networks in the right directions. Sysops makes sure it talks in a way that it can interoperate with the systems it needs to and gets monitored for health. Then what? In my opinion, this is as far as most organizations that check the box for, “We have a WAF!” actually go. They set it up, get it successfully in the middle, and then no one has the chops to actually tune and configure the thing beyond crossing their fingers, turning on the defaults, and breathing easy when it doesn’t break anything (all the while, it’s not really doing anything except catching the barest, most ridiculous attacks, like a 1,000-character URL request).

I do think Gunnar trivializes this just a bit by comparing these middle-ground tasks to Texas leaguers; that comparison assumes either side alone could solve the easy problem/catch the ball. I’d suggest that neither side usually *can* easily solve the problem; it actually requires someone in the middle, or someone on one side with many skills. It might be more like two outfielders playing so far away from each other that a ball hit fast right between them requires a huge burst of inhuman speed for either of them to actually get under.

The point is, there are these gaps that security cares about, but that traditional IT in anything but the smallest (and largest?) shops is not staffed to fill.

2. Security folks need to be cross-functional experts in a lot of shit. Gunnar totally covered this, so I really don’t need to, but it’s pretty much the truth, despite my bias. I tend to illustrate this with the idea that coders are first taught how to copy one string to another. Then they’re taught more modern ways to copy one string to another: the same simple concept, with a few more tricks added on. Later on, they add the next layer of how to *securely* copy one string to another (if they’re lucky). Getting that far requires extensive knowledge and even experience. And that’s not even getting into being able to understand technologies outside the main one, such as DNS, server configurations, in-transit encryption, and firewalls plus in-transit communication needs.
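To make that progression concrete: in C it’s the road from strcpy() to strncpy() to strlcpy()/strcpy_s(). Here’s a rough sketch of the bounded-copy idea, using Python’s ctypes to stand in for a raw C buffer (purely illustrative; every name in it is mine):

    import ctypes

    # A fixed-size destination buffer, the classic C setup.
    dst = ctypes.create_string_buffer(8)  # 7 usable bytes + NUL terminator

    # The naive first lesson copies the whole source and trusts that it fits;
    # with a source longer than the buffer, that's your overflow.
    src = b"much-longer-than-eight-bytes"

    # The *secure* copy lesson: bound the copy to the destination size
    # and always NUL-terminate -- the strlcpy()/strcpy_s() idea.
    n = min(len(src), ctypes.sizeof(dst) - 1)
    ctypes.memmove(dst, src, n)
    dst[n] = b"\x00"

    print(dst.value)  # b'much-lo' -- truncated rather than overflowed

Truncation has its own failure modes, but the point stands: even just copying bytes safely takes a real jump in background knowledge.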

Security is expected to understand code and the network and the server in such a way that they can configure very advanced technologies such as a WAF. And it’s not just understanding general code, but understanding the *specific apps* being protected.

I personally measure my opinion of developers based on how well they understand those last few things. If they repeatedly just don’t get it, they’re just coders to me. (They might be really good, but they’re one-trick ponies.) But if they understand DNS, server configuration, and firewalls, at least to the general extent of knowing what matters to them, I deeply appreciate it. Those are your potential rock stars. Likewise, I adore sysadmins who understand code and can do cross-functional things like deep-dive into code traces on the servers and tackle memory leaks.

Or either of the above who have security skills and knowledge!

3. I think this lack of cross-functional understanding is another pressure behind why the “cloud” (when it is just external hosting rebranded) has gained momentum. Developers have long made overtures to get administrative access to the servers that run their code, and hopefully sysadmins have rebuffed those overtures and done the server work on behalf of the developers. Have you ever walked up to a server that for the last 2 years has been adminned by a developer? Chances are pretty damn good that it looks about as good as a server adminned by a 14-year-old. That’s not a dig, really. If I tried to add code to your app, I’d fuck shit up pretty awfully as well, and my code would be heinous. (This could be twisted around to say neither side will get better without being allowed to get experience with the other’s tasks, but that sort of cross-functional short-term inefficiency does not fit anything beyond a small business.)

Enter hosting and the cloud, where developers can drive those projects because they involve code, which netadmins/sysadmins don’t quite understand, and which might result in devs having administrative duties on these new servers. Which sounds great short term…

mubix reveals secrets on beating the red team

Mubix has posted up a great set of slides on how to win the CCDC, from the perspective of a red teamer. All of it makes sense, and it’s great to get it down in a nice, concise preso, especially the tips on team composition and duties. From a non-competition angle, this makes a great exercise in quick response, or a security strategy-on-steroids for someone heading into a new org or responding to attacks-in-progress.

As usual, a few items to highlight:

1. You’re not important enough to drop an 0-day on. Most likely your company isn’t either, but at least at a competition like this, there’s no upside to revealing otherwise unknown exploits. Sure, red teamers are going to be ninjas in some tools and definitely have some of their own scripts and pre-packaged payloads, backdoors, and obfuscation tools, but none of that is so super secret that you don’t have a chance to anticipate or prepare for it. As Mubix says later on, know the red team tools.

2. Monitor firewall outbound connections. It’s one (wasteful) thing to look at the noise being bounced off the outside interface of your firewalls, but seeing bad things outbound is a huge giveaway. You’re not doing detection if you’re not somehow monitoring outbound traffic, both allows and denies (see the sketch after this list).

3. You still gotta know your own tools and systems. As Mubix repeatedly mentions, you need to know your baselines on the systems (normal operational users, processes, files) and be familiar with the systems and how to work quickly on them, configure them, troubleshoot, interpret the logs, and automate little pieces here and there with your own scripting skills.
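As a toy illustration of item 2, here’s the simplest possible take on “somehow monitoring outbound”: scan an iptables-style log for outbound connections on unexpected ports. The log path, the line format, and the whole notion of an “expected ports” whitelist are my assumptions for this sketch; real monitoring needs much more than this:

    import re

    # Matches iptables-style LOG lines; adjust the pattern to your firewall.
    LINE = re.compile(
        r"OUT=(?P<iface>\S+).*SRC=(?P<src>\S+).*DST=(?P<dst>\S+).*DPT=(?P<dport>\d+)"
    )

    # Ports we expect hosts to talk out on (illustrative only).
    EXPECTED_PORTS = {25, 53, 80, 443}

    def flag_outbound(logfile):
        """Print outbound connections to ports outside the expected set."""
        with open(logfile) as f:
            for line in f:
                m = LINE.search(line)
                if not m:
                    continue  # not an outbound log line
                if int(m.group("dport")) not in EXPECTED_PORTS:
                    print("suspicious outbound: %s -> %s:%s" % (
                        m.group("src"), m.group("dst"), m.group("dport")))

    flag_outbound("/var/log/iptables.log")  # path is an assumption

The script isn’t the point; the point is that outbound logs exist, and someone on the team should be reading them.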

lessons from another wordpress breach

Hoff also has two posts about a recent incident his blog suffered: “Why Steeling Your Security Is Less Stainless and More Irony…” and “A Funny Thing Happened On My Way To Malware Removal…”.

This is that perfect example of how sharing information helps people. You get an idea of what failed, what mistakes were made, what human behaviors help or don’t, how an attack actually worked, etc.

Normally I would bullet through some of the points, but there’s nothing terribly new here, and Hoff’s posts are worth the time to read.

the winning/losing security debate

I saw the opening salvo on Twitter that prompted Chris Hoff’s blog post, “You Know What’s Dead? Security…”, and he ended up penning a really good read.

I don’t think it is worth much to talk about “winning” or “losing,” ultimately. Security and insecurity are eternally linked to each other. This is maybe the first time I’ve really liked Hoff’s blog name: Rational Survivability. It’s really about surviving in an acceptable state, or rather, simply not losing. But there’s no real win going on, and it might be too much to expect a win at any time.

I do think Hoff got a little sidetracked with the commentary on the security industry. I’ll agree, in part, that the security industry isn’t making solutions that are aligned properly. But I’ll go on to say I’m not sure how a “product” of any type will ever truly be aligned enough to feel good about. These are just tools, and none will magically make someone think we’re winning, in whatever context of the word or feeling. If anything, the security industry has a problem in trying to make its tools sound like they solve the world… There’s also a certain bit of irony stuck in there somewhere, given that Hoff typically pens about “cloud” stuff… 🙂

If I may dig further, I also dislike the thought of “innovation” in security. Security is a reactionary concept. It reacts to innovations by attackers, or to innovations in the things security is protecting, for instance new technologies or assets. That may not always hold in practice, but then again some activity just doesn’t end up being rational.

the secrecy game

Adam over at the New School of Information Security posted a good read: “How’s that secrecy working out?”. We have a lot of people all happy about how the government is talking about information sharing. The elephant back in the corner is: when you involve the government, they want you to share with them, but they won’t share with others, and you can’t share with others either. Essentially, it doesn’t do any of us in the private sector any good; in fact, it makes things worse, since it ties even more hands and causes people to pussyfoot around issues and details.

…because we (writ broadly) prevent ourselves from learning. We intentionally block feedback loops from developing.

One of the better posts I’ve read this year.

learning from irresponsible disclosure

Gotta link to Robert Graham again over at ErrataSec for the piece: “The Ruby/GitHub hack: translated”. There are too many good points to pass it up.

1. This is a great example of irresponsible disclosure in action. Because GitHub was attacked, not only is GitHub now less vulnerable, but more people (hopefully developers and security auditors) are aware of this problem. Sure, more awareness of the problem may mean more people use it against vulnerable sites, but the flaw was already in those sites. The vuln was present all along; the risk has just gone up a bit…

2. The problem is inherent in the feature set that makes Ruby on Rails a boon to developers. It’s pretty much a great example of a design flaw that has benefits, but also carries risk. Usability vs security. (See the sketch after this list.)

3. It also means a flaw in one tool affects everything/everyone that uses that tool. GitHub was hacked as a reaction to Ruby on Rails rejecting the bug report, to GitHub’s choice to use that platform, and to their lack of securing (understanding?) the hole.

4. Putting the onus on site owners to blacklist and even understand the issues is probably not the right way to do things. I guess it’s a way to go, but it certainly makes me pull a disgusted facial expression.
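For anyone who hasn’t followed the GitHub incident, the feature in question (item 2 above) is Rails’ mass assignment: a model accepts whatever attributes a request submits unless the developer whitelists them. Here’s a minimal sketch of the same pattern, translated to Python since the shape is framework-agnostic (the class and field names are made up):

    # A toy model with one attribute that must never come from user input.
    class User:
        def __init__(self, name):
            self.name = name
            self.is_admin = False  # security-sensitive

    def update_unsafe(user, params):
        # The convenient pattern: write every submitted field onto the model.
        # Nothing stops params from including "is_admin".
        for key, value in params.items():
            setattr(user, key, value)

    def update_safe(user, params):
        # The fix: whitelist the attributes a request is allowed to set.
        allowed = {"name"}
        for key in params.keys() & allowed:
            setattr(user, key, params[key])

    u = User("alice")
    update_unsafe(u, {"name": "mallory", "is_admin": True})
    print(u.is_admin)  # True -- the request just promoted itself to admin

    u2 = User("bob")
    update_safe(u2, {"name": "mallory", "is_admin": True})
    print(u2.is_admin)  # False -- the whitelist blocks the write

The unsafe version is the usability win: one loop updates any form. It’s also exactly the kind of hole used against GitHub.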

staying anonymous online is still hard

Robert Graham has a nice addition to the discussion about Sabu/Lulzsec: “Notes on Sabu arrest”. Maintaining anonymity online is hard. In the (increasingly distant) past I ruminated about staying anonymous online (1, 2, 3, 4). It’s hard, takes a lot of work, and you need to maintain absolute vigilance so you don’t screw up even once. I should really update and add to that series, especially in light of smartphones, GPS, web tracking giants…

self-preservation in the criminal underworld

Via the LiquidMatrix article, “Sabu Rats Out Lulzsec”, I got over to the FOX article, “EXCLUSIVE: Infamous international hacking group LulzSec brought down by own leader”.

This succinctly illustrates one of the biggest tools anyone has against criminals: the lack of trust amongst criminals, and their fear of justice-system punishments. Clearly this requires there to be laws, and the pursuit of their enforcement, but ultimately a criminal always needs to be looking over their shoulder and distrusting their peers.

Even the increased level of anonymity the Internet provides is not always enough to keep social* criminals safe.

* Or those that can’t stand on their own. In other words, if you have stolen goods, you still need to sell them to someone/somewhere, or have contacts to do SpecializedTaskB or whathaveyou.

this is why they don’t make you read privacy terms

It’s nice to see useful articles about digital security and privacy starting to grace major media these days. I especially liked this one, found on the front page of CNN.com this evening: “How to prepare for Google’s privacy changes”. I like the steps it shares at the bottom. And I really like this statement:

Google points out that the products won’t be collecting any more data about users than they were before. And, in fairness, the company has gone out of its way to prominently announce the product across all of its platforms for weeks.

In other words: “You mindless sheep, finally you’re going to get pissed about privacy issues that were already flippin’ there!”

As I like to say, if your business model suffers when you have to reveal it to your customers because of how they react, you need to sit back and do some soul-searching. Be up front about it and let consumers decide if it’s worth it. Don’t just try to see what you can get away with, whether intentionally or by feigned naivety.

interview with rsa’s art coviello

It’s been a year, but you can read some more about RSA’s woes last year in an article/interview with Art Coviello, Executive Chairman of RSA, who is giving a keynote sometime around now over at the RSA Conference.

I’m personally not sure I’m buying the part about the attacker not getting everything they wanted, or the parts about replacing tokens just because of the perception of lost faith from customers, and not because some secret sauce was stolen, putting customers at risk. I think this is continued smoke, capitalizing on the persistent lack of actual detail on what was taken, which RSA has maintained since day 1. And it’s covered up/misdirected a bit by pointing out that people still buy their solution and they still sell it. My guess is they’ve since changed the wrong things they were doing (like keeping a seed list), which makes the claim true now, but not relevant to the breach/response.

Misdirection… clearly I’ve been watching too much magic-related stuff online these days (I have!). Something involving Reddit questions with Penn & Teller on YouTube, and reading an article some months ago about Teller and a little red ball trick… (Side note: I love how the Internet can stoke these almost childlike moments of learning and interest so efficiently.)