mid-week rambling on being pragmatic with security

If someone in security isn’t yet convinced it is as much an art as it is a science, I’d expect they’ve not done security long enough (or they’ve been lucky enough to work in a high-security environment, or focus solely on academic computer science).

For as much as security wants credibility and to make a difference, it dashes our efforts when someone runs into a room waving an automated vulnerability report and demanding that every (every!) item be fixed or business will be denied… including such idiotic things as “hiding” HTTP 403 errors because they give away directory presence, or a single weak cipher being enabled, or something else so low as to be valueless to any attacker. Or at least of less value than it costs to mitigate the issue! It hurts worse when this report-waving person is another “security” dude. To those people: way to sour everyone’s grapes.

I also read a recent post on Bejtlich’s blog as well as the links and comments for the post. Some great thoughts in there.

I’m convinced of a few things…

First, there are very few (if any) correct answers that work on a global or universal or even “just really large” scale. What works for one organization may not work for another, for any of countless reasons. We have lots of great ideas, collectively, in security, most of which probably work. The biggest problem is inertia and getting someone to actually devote some time and resources to the cause in the first place.

Second, the only way to combat the crap being passed around is to be an expert in security (in as many veins as possible) and to maintain the credibility to educate management. This means being pragmatic and yet effective. It means being able to talk to someone and explain why issue #87 is not the Big Deal they’re running around trying to make it, just because it appeared on an automated scan. It means not issuing ultimatums over useless low-risk issues, and instead tackling issues and initiatives that will actually have some value (even if you don’t fully understand how to measure and prove that).

I really think good security geeks know in their gut when something is useful to the cause or not, even if it is hard to actually justify it every time.

pci debate podcast is quote-irific

Check out the ‘Great PCI Security Debate of 2010’ podcast pieces. Part 1 is hosted at CSOOnline. Part 2 is hosted at the Network Security Podcast. Everyone is quote-irific. Everyone has great points and I find myself agreeing with most (but not all) of what every person is saying, which itself indicates the challenges we have in security. It is not about finding the ultimate answer to the universe and everything, but rather still a very subjective view on what you’d think is a very objective discipline (IT).
Josh Corman early on had some great quotes:

“What a strange twist of fate that we now fear the auditor more than the attacker.”

“We’ve reached a level of completely unacceptable and unsustainable cost and complexity.”

And Jack Daniel:

“There are a lot of people just trying to get past [PCI].”

“Their [network admins and systems admins] goal is for the network to work and the systems to work, and that’s what they’re judged on. That means getting PCI out the door.” <--this reminds me of the paradigm difference between security in the trenches and security in the exec rooms. It also reminds me of Rybolov's Infosec Mgmt graphic. It might also exemplify the difference in perspective between macroscopic (global/universal) and microscopic (1 network) security…

thoughts on the google/china incident of 2010

Praetorian Prefect has a video posted demonstrating the Aurora attack against IE6. It also shows how easy Metasploit is to use once you get some experience with it. While nothing new to sec geeks, I think it is mind-boggling to norms who have no idea how slickly you can own a system.

This incident centering around Google has raised tons of discussion. I really can’t add too much more to what has already been said in various corners of the net, but I can at least add my own voice to the cacophony…

First, Google is a large, public company. They, like most any company, will not come out with a declaration like this without a firm economic reason to do so. I think the best response I’ve seen was Moxie’s over on the DailyDave list.

Second, lots of people rightly diss on these companies for probably using IE6 widely. This is an easy argument (just like asking ‘why were you insecure?’ after someone is hacked…), but not one I tend to push too hard because, quite honestly, it takes time and effort (i.e. MONEY!) to change things in an IT environment. Good point, but don’t bandy it about too hard.

Third, stop being surprised that Google has automated systems to dump your data to authorities. Don’t be naive, both about Google and about economic entities.

Fourth, Google uncovered similar attacks against something like 30 other large companies. Wait…does that mean none of them detected the attacks themselves? Pass the whiskey…

Fifth, defense in depth and detection help. Having operators/analysts keeping their fingers on the pulse of networks and systems helps (or, more appropriately, properly augments automated tools). Signatures (and automation) do help and have their place, but nothing will be able to interpret suspicious or strange behavior like a human.

Sixth, speaking of defense in depth, we’ve all seen the vectors of initial attack. We’ve all heard rumors about just how deeply the attackers got inside their targets. But who is connecting the dots? Exactly how did owning the clients let attackers pivot over to the servers or other systems? I’m not saying I don’t believe those rumors, but I am saying it sounds like we still have a non-secure interior. I know security is reactionary in nature and economically bound, but what the hell?

Seventh, attackers were originally curious and self-serving in a non-financial way. Then they realized they could make money stealing directly from accounts in a very liquid fashion, and a subset realized they could directly harness CPU cycles collectively. I think now we’re seeing more realization that there is value in the information held by corporations, on the level of corporate espionage. This is far less liquid to most people, but to nation-states or other corps… I’m not saying this is cyberwarfare! But less-liquid espionage is the next natural step…should we be surprised that Google reportedly had a team ready to attack the attackers? Shadowrun, anyone?

some scattered links on the google v china affair

If you’re sick of Google v. China, then skip this post. This is just me hoarding a few more links for reference.

Researchers identify command servers behind Google attack.

Google’s internal spy system was Chinese hacker target (which references a ComputerWorld piece):

This [internal spy system used to fulfill warrants] reveals that Google collects information about all of its users all of the time and in a format that enables it to easily hand it over to any government agency that orders a search warrant. This is an embarrassing revelation.

recent high profile hacks expose internet explorer 0day

I hadn’t mentioned the Google/China drama because pretty much everyone else has, but new details have emerged on this topic in regards to an IE 0day exploit in the wild. Both Brian (“How-Many-Bloggers-Can-Say-They-Have-Sources?”) Krebs and The H Security have good posts on it, and both link to Microsoft’s new advisory on the problem.

The H Security has an interesting comment:

The advisory states that, while the hole affects versions 6, 7 and 8, the current attacks only appear to have targeted version 6 – which raises a question as to how current the affected companies’ software inventory is.

Indeed. They also mention what is becoming the attack method du jour:

The attackers apparently used the flaw to inject a trojan downloader into compromised computers. The downloader then proceeded to retrieve further modules, including a back door that gave the attackers remote access to the computer, from a server via an SSL-encrypted connection. Links to the crafted web pages were likely sent in emails to selected employees of the targeted firms.

Outbound SSL-encrypted connection. Take that, firewall egress filters! This gets back to how prevention eventually fails, and you’re down to relying on your layers of defense to detect the issue and respond appropriately (unless you aggressively whitelist, I guess). And while we often pass off 0days as exotic and not a threat, tell that to the high-profile targets that just got hit. And we all know that once high-profile targets don’t look as juicy anymore, attackers will go after their partners, vendors, providers, contractors, and smaller shops that have far less ability to prevent and detect these attacks.
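The aggressive-whitelisting aside deserves a concrete illustration. A port-based egress filter passes any outbound 443 flow, the SSL-wrapped back door included; a deny-by-default destination whitelist only passes destinations you have explicitly approved. A minimal sketch (the hostnames are made up for illustration):

```python
# Hypothetical deny-by-default egress policy: only (destination host, port)
# pairs the business has explicitly approved are allowed out.
APPROVED = {
    ("updates.example.com", 443),
    ("mail.example.com", 443),
    ("partner-api.example.com", 443),
}

def egress_allowed(dest_host: str, dest_port: int) -> bool:
    """A port-only filter would pass any destination on 443; a whitelist
    also requires the destination itself to be known good."""
    return (dest_host, dest_port) in APPROVED
```

The trade-off, of course, is the ongoing care and feeding of that approved list, which is exactly why so few shops filter egress this aggressively.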

security metaphysics: when is a vuln a vuln

Reading articles like this one from Krebs, regarding a firm set to release a slew of previously undisclosed vulnerabilities, stokes a few latent thoughts of mine which I’ve probably expressed quietly on Twitter (or even here, and I don’t remember it).

First, it is naive to think the only vulnerabilities that exist are those that are found and popularly disclosed.* There are people who find and sit on their vulns, and I’m not just referring to black hats or gov’t espionage/cyberwarfare players who want to keep their attacks as secret as possible (or their condoned backdoors [coughskypecough]). Even white hat hackers who find a vuln and even responsibly report it may be sitting on a very important finding. Maybe they get fixed, maybe not. Hopefully it does eventually get disclosed. Who knows how many vulns a group like iDefense is sitting on!

Second, any vulnerability found and/or disclosed today has existed since it was born, either in the current version of a product or when the underlying code was first written. This includes vulns that haven’t even been found yet. Tomorrow’s Windows root may be a flaw that has existed for 8 years. Kind of sobering, that thought.

Third, this is why checklist styles of vulnerability management are usually backwards-looking; they look for things that are known. Something like “turn off service X when not in use” is a little different, but auditing for patches certainly is backwards-looking. I’m not saying there is no value in audits like that, but they should not be confused with the ability to say a server is secure. It just means we are patched against known issues and have taken some steps to mitigate future risk…

I’d chalk the first two up in a list of “security laws” that help define an approach to digital security, right up next to other “laws” like, “You will be breached.” A fundamental baseline of belief, mind you…

* Tangential discussion can break out on this topic by talking about Apple fanboys, or even the fact that Apple positions its Mac product line (OS and devices) as premium products (i.e. they don’t have to price-match, among other characteristics). Is the Mac target demographic the type of demographic that wants to patch every month? Or even admit their product has a flaw?

reinforce the damn cockpit doors

SecurityMonkey has a post regarding new TSA guidelines, along with a video/link demonstrating how a small explosive may be created and hidden that is probably pretty darn undetectable.

He also hits on one of the major things I’ve felt since 9/11: reinforce the damn cockpit doors.* What makes plane bombings so different in scope from bus, subway, or even boat bombings is the ability to take control of the vehicle to do even more damage. Therefore, protect what you can. Safety on a public plane will never be assured, although you can help minimize some obvious things like guns or stupid terrorists.

* This also includes policy and thought on what can be passed to and from the cockpit, especially on long flights or in emergencies, either mechanical or medical in nature (pee breaks?), and so on. Basically, you also don’t want pilots to be coerced into opening the door, or to have something slipped to them (a sedative through a food tray?) that jeopardizes the operation of the plane because no one else can get in. (Then again, if you drug the pilots, the plane is probably doomed anyway…)

new home for brian krebs news

In case anyone missed it, Brian Krebs (formerly of the Washington Post’s Security Fix blog) has opened up his own blog, which is well worth the bookmark. Krebs’ blog on the Post site has long been one of the few truly useful “mainstream” outlets for security news in my RSS feeds. Reading his latest posts, I’m excited for his new venue, especially since he is his own editor now; we won’t get only the cream of the crop of news stories, but also the difficult-to-explain-on-a-mainstream-site issues.

In short, we need people like Krebs who can sit quite comfortably between three parties: the technical geeks, the business people who may either act as his sources or his subjects, and the people who make up the journalism entity. By that last part, I mean he knows the ropes about what he can and cannot write, has demonstrated journalistic integrity, and has contacts and knowledge of the laws and protections he may enjoy. We don’t have all that many people like him in the security “blogosphere.”

failure makes us stronger

David Bianco (twitter) ends a year-long break by posting a great piece on “Why your CIRT should fail!” David talks about tackling the natural biases that may form when investigating incidents, specifically by having a diverse team.

I like to remember my days in hard sciences back in college. You didn’t do experiments to necessarily prove every hypothesis you made. A vast majority of your experiments were failures that you learned from. We learn the most from failures (mistakes, being wrong…), failure is inevitable, and failure is often unpredictable.

mogull’s guiding security principles

Rich Mogull has been around in security for 20 years, and he posts about his guiding security principles. I think I agree with them all (to varying degrees), but there are a couple I’d like to build on.

1. “Don’t expect human behavior to change. Ever.” – Fine, you *can* change human behavior to an extent, but we in security can’t *expect* it to change. Otherwise it just becomes an excuse for insecurity and we start taking steps away from reality. We have to work with human behavior or find ways to influence it that are not like flatly telling someone, “No.” Positive and negative conditioning should be general vocabulary terms for security geeks, let alone the other influencers (economics, psychology, politics, etc). We social engineer on a weekly basis, or at least should.

2. “…keep it simple and pragmatic.” – Yeah, we all get sick of the KISS principle after even one semester of technical coursework. But it absolutely must be a guiding principle in what we do, not just in security, but IT in general. Keep it simple. Keep it simple. Keep it simple. The more complex something is, the larger the total cost of ownership, the worse the security will be, and the more annoying it will be to anyone involved. Keep it simple. This becomes easier when you agree to Rich’s other principles. That way you stop yourself from trying to block every last % of vulnerabilities that have a minuscule chance of occurring, or from trying to account for every possible action a human employee may take. Keep it simple. There is a reason this permeates so many personal philosophies in every facet of business and life.

As one of my favorite quotes goes: “Simplify, simplify.” –Thoreau

google chrome and noscript

Quick link to a short post by Giorgio Maone on why Chrome does not have NoScript. This sparks two thoughts of mine, both of which appear in the comments of that post.

First, even a company as large and purposeful as Google, building and releasing a very important (to them) piece of software like Chrome, is just building it first and securing it later. It isn’t about building it up secure from the start. This is part of human behavior (imo) and as Rich Mogull recently mentioned (in a post worthy of separate mention!), don’t expect human behavior to change. (I understand this can be an argued topic, particularly on the part where I say building it securely first is not human behavior; maybe it’s just the way we’re taught that forms this bad habit…you learn how to assign a variable before you learn how to assign a variable securely.)

Second, keep in mind that a majority of the things NoScript disables in daily browsing are web ads. Yes, the ads that Google lives by. They simply have no interest in allowing them to be blocked. And even if they figure out some proprietary way to whitelist their own ads (possibly not legal…), we all know that plenty of malware rides in through those ads or through the holes that enable those ads.

value in fixing symptoms, but tackle the problems, too

I was a little excited to see the headline, “Good Guys Bring Down the Mega-D Botnet” over at PCWorld, as the article promised that researchers had gone on the offensive to bring down a botnet. To go on the offensive against a botnet, to me, means targeting the actual perpetrators or actually taking over the botnet and disassembling it.

Ok, well, not quite. Thank you, editors, for making strange headlines and taglines.

Turns out the researchers did perform some excellent hard work in blackholing the C&C servers for this particular botnet, at least enough to reduce it to a fraction of its power, by contacting registrars, server hosts, and even taking over some of the unused domains the bots would check.

But they’ve done nothing except put their fingers into holes in a leaky dam (or maybe stick a hose in every hole in the dam and siphon the water back up over the dam and behind it). Or put a fairly thick blanket over a raging bull’s face. Or clean up the spills in your store while some stranger somewhere in the store is running amok dropping bottles everywhere. The botnet is still there. The attackers are still there. The bots are still there. The vulnerabilities are still there.

I would rather have seen the researchers actually usurp control over the botnet by using one of those domains they snatched up. I know that’s a grey area of defense/attack research, but at least I would personally find more value in it. Or maybe not even take it over, but masquerade as a C&C server and see if you can trace back the activities. Then hopefully once you have control of the botnet, issue a kill order on the malware if that feature was coded in (as long as it does not do something destructive on the host like format the system) or issue an update that permanently has it check the loopback address for commands.

There is value in this effort, but let’s not get ahead of ourselves. They didn’t “take down a botnet,” at least in the way I envision it, and they haven’t done a ton that will absolutely have a long-term effect; at least not without ongoing investment in time and money. Perhaps they will do this long enough to choke off this botnet, which is great, but what do you have left but to just do it again next year?

philosecurity on our google government

Sherri over at Philosecurity has done some legwork in posting an article about the move many state governments are making to Google. This article is a good, thought-provoking one in its own right, but the comments make this really a good read.

I’m not quiet about my mistrust of Google. But I’m also not necessarily shy about my use of Gmail or Google Reader. My biggest issue is that they’re a public company that has to, first and foremost, answer to the money (i.e. their shareholders). And they make a lot of their money via data mining, tracking, controlling, and/or logging what you search for and see and do. Google is not quite like the other third-party providers and contractors whose money comes from exactly what they’re offering in terms of IT support or government service.

Google is basically the 2.0 version of AOL; they want a walled garden. But where AOL tried to bring in users first and build a separate garden, Google is focusing on bringing in everything the users already use anyway, including all the data with it, and taking over the existing garden.

Strangely, I would feel slightly better if Google managed things on equipment and in locations that the government actually owned, rather than basically offering it all as a service of some sort. Maybe it has to do with a company seemingly bigger and more important than even a government, hypothetically?

It is also interesting that we’ll (as in I, probably) trust RIM/Blackberry, homed in Canada, but not Google, homed in the US. That might say a lot about image, perceived use (data mining), or actual scope of use (just text/mail/voice communication).

Still, it is hard to tell state governments a flat-out, “No,” on a situation like this, especially in the face of falling budgets and rising debts. That sort of situation is ripe for someone to swoop in with low bids…for whatever monetary reason they may have, and I can guarantee it isn’t altruistic, philanthropic, or patriotic. It’s economic, in Google’s favor.

One thing I don’t like about data being housed in strange locations, is our human tendency to be nosy. If Britney Spears is in a hospital, we have plenty of people who will nose through her files. If someone paid you to nose through them, the incentive becomes very real for internal espionage. This won’t be new with Google, as every government contractor should feel this issue, but it would certainly feel new in perception.

Commenters in that article make great points on all sides of the d-20 (amongst those that are simply very myopic). I actually find it very hard to make solid points on either side of the argument, hence all the feeling and perception in my assertions above.

ghost services using single packet authorization

I knew when I finally got around to reading this post, it would be cool. Michael Rash posted last month about a fun way to use single packet authorization to create what he calls “ghost services.” Basically, you send an SPA packet to the target server on a port that is already in use, such as port 80. The firewall then sends just you over to the service you really want, such as SSH, but everyone else still sees the regular port 80.
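The core of the trick is that the authorization packet has to be self-authenticating, single-use, and invisible to anyone sniffing port 80. This is not Rash’s fwknop implementation; it is just a minimal sketch of that validation logic, with a made-up packet layout and pre-shared key, to show the moving parts:

```python
import hashlib
import hmac
import os
import time

SHARED_KEY = b"example-shared-secret"  # hypothetical pre-shared key
SEEN = set()                           # single-use: a replayed packet is refused

def make_spa_packet(client_ip: str, want_port: int, key: bytes = SHARED_KEY) -> bytes:
    """Client side: build one authorization payload, fired at an
    already-in-use port (say, 80)."""
    nonce = os.urandom(16)
    msg = f"{client_ip}|{want_port}|{int(time.time())}".encode()
    tag = hmac.new(key, nonce + msg, hashlib.sha256).digest()
    return nonce + tag + msg           # 16-byte nonce, 32-byte HMAC, then message

def validate_spa(packet: bytes, source_ip: str, max_age: int = 60,
                 key: bytes = SHARED_KEY):
    """Server side: return the hidden port to open for source_ip, else None."""
    nonce, tag, msg = packet[:16], packet[16:48], packet[48:]
    expect = hmac.new(key, nonce + msg, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        return None                    # bad HMAC: looks like ordinary port-80 noise
    ip, port, stamp = msg.decode().split("|")
    if ip != source_ip or time.time() - int(stamp) > max_age or packet in SEEN:
        return None                    # spoofed source, stale, or replayed
    SEEN.add(packet)
    return int(port)                   # firewall now redirects source_ip to this port
```

On a valid packet, the real firewall rule would then redirect only that source IP’s port-80 traffic to the hidden service; everyone else, replays included, keeps seeing the ordinary web server.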

This can be useful when on a network that only allows certain ports outbound (such as 80/443/53). It can also be useful to thwart any future investigators who try to recreate your connection but only see the service everyone else sees. Honestly, I’d find this less suspicious than an actual port 22 connection, or a strange-port connection that is no longer listening. Yes, there are plenty of other ways to skin this cat, but I really dig creative thinking like this.