security elephants aren’t endangered

If you read nothing else each week in the way of infosec blogs, always check out the weekly Incite at Securosis and the weekly reviews at Infosec Events. Yeah, it’s kinda cheating since both branch out and point elsewhere, but at least they’re not nearly as static a list of links as any of our RSS feeds end up being.

Over on the Incite, I particularly like a piece by Rich Mogull which I’ll blatantly steal and repost here because, well, it’s truth (emphasis is mine):

…But if you want to quickly learn a key lesson, check out these highlights from the investigation report – thanks to Ira Victor and the SANS forensics blog. No logging. Flat network. Unpatched Internet-facing systems. Total security fundamentals FAIL. …Keep in mind this was a security infrastructure company. You know, the folks who are supposed to be providing a measure of trust on the Internet, and helping others secure themselves. Talk about making most of the mistakes in the book! And BTW – as I’ve said before I know for a fact other security companies have been breached in recent years and failed to disclose. How’s that for boosting consumer confidence? – RM

I’ve recently been talking about elephants sitting in our infosec rooms. There are a lot of them. The first bit I bolded above is one of them, and I really feel that very few organizations get the fundamentals even started, let alone tight (that’s as much a statement of economic reality as a criticism). Still, DigiNotar’s state is pretty egregious.*

But Rich’s point drives it home: DigiNotar is a friggin’ security industry company (maybe they forgot that; maybe that should be their mission statement). Yes, utter fail. (Now, back to who audited them in the face of such fail, or who lied to the auditors?)

The second bolded statement is also something I have to reluctantly agree with: reported incidents are just the tip of the iceberg. And we’re not talking solely about executive decisions to hush up events for fear of public humiliation, but also middle management and even techies staying quiet about things. I am absolutely not surprised whenever I hear at the bar the inevitable tales from auditors and security folks about incidents that were hushed up or poor security that is hidden with smoke and mirrors.

From the top down, this is classic negative conditioning: you get slapped for action X, so you either stop action X or try to hide it. And if you try to stop it, but stopping costs money that you also get slapped for…

* As a bonus discussion, Richard Bejtlich has been talking a lot recently about threat-centric security vs vulnerability-centric security. DigiNotar is clearly an entity that needs to apply threat-centric principles (who are your threats, what do they want that you have?). But can you do that when you’re not even doing the fundamental vuln-centric stuff?**

** For those who’ve played StarCraft II, I have an analogy for you. Perhaps threat-centric security would work, but I feel like it is definitely a sort of “all in” approach you have to take in order to be effective. There’s no doing some things threat-centric and some things vuln-centric; you’ll just spread your resources too thin and not be good at either side. Sort of similar to multiplayer SC2: you could build a few of every unit, but you’re going to get trounced; you really want to focus all of your efforts on one strategy, and adapt/change only as a reaction to what your opponent is doing. <--There's seriously a big blog post comparison waiting to happen there.

thought: replace diginotar with network solutions or verisign

One point I’ve not effectively made that I should before I stop adding nothing to the discussions about CAs and DigiNotar: scope.

It’s one thing for this to happen to DigiNotar over in the Netherlands. But think about the impact of this if you live in the Netherlands.

Or what if this had been Network Solutions or Verisign or Thawte? Suppose browser vendors suddenly shunned their roots, or CNet and other journalists gave your userbase instructions on shunning the root certs. Think of the impact to your users if you run websites, to your own users who browse other websites, and to your own desire to buy something off Amazon, whose cert may now not be trusted for a few days.

I’ve seen tons of blog posts and articles explaining how to block trust in (or untrust) DigiNotar roots. But that’s a pretty damaging, somewhat “scorched earth” approach to addressing the problem.

Besides which, other than not being in the middle of an incident right now, why should Network Solutions be given more trust than DigiNotar? Of the 600 CAs out there, how do you stratify which are better than others?

tinfoil hats and web of trust chatting

Lots of talk recently about DigiNotar and Iran. I’d posit this problem is more impactful than people think, but not for the reasons being bandied about. I don’t usually don quite so big a tinfoil hat, but I certainly don’t want to act naive about realistic risks. I’ll try to keep my statements brief, though a bit rambling.

Hypothesis: Iran made legitimate requests of DigiNotar for certificates. This is normal business for a CA. (This may or may not be true at all, but it still stands to illustrate a point.)

Iran cares about intercepting communications for governmental security purposes.

Every dang nation in the world cares about intercepting communications for governmental security purposes, though in some cases we really hope it is done with documented procedures and reasons (i.e. like we hope for the US).

Every CA has a way to request any sort of cert you want, to aid governmental interception. You really think any CA that does business in country X will still be able to conduct business if they rebuke the host government? No. (Apply this thinking to things like Skype or Google’s portals for requesting data on people of interest, for some precedent.)

The government(s) isn’t going to let there be some completely private global (or even national) means of communication without leaving them the ability to tap into it if needed. I’d posit that this partially explains various not-optimized communications security like CDMA and such.

The web of trust for SSL/CA/web infrastructure is weak, and maybe even broken, but that’s unfortunately part of the (mostly accidental) design, if you ask me. Granted, this was all devised long ago, when scale wasn’t a huge concern and before we had 600 CAs in the world that most every browser just inherently trusts, because that is good for business and eases user frustration and effort (if you run an e-commerce website, just think how awful it would be to work with every user whose browser won’t trust everything inherently). Sadly, a “web of trust” is only as trustworthy as its least trusted part, and it only takes one mistake to let that in. Maintaining that trust amongst the general public does not outweigh business health/profits.

At some point I have to trust something, because I am not smart enough to really be able to intelligently verify my trust in most things encryption. It’s a quandary, certainly.

Getting back to DigiNotar, what’s the best way to cover your ass when someone finds out you’ve been giving shit away to other governments, because they force you to or pay you enough? Proof of a pre-existing hack, to give you deniability.

Anyway, that’s one way to look at it. Honestly, I’m sympathetic to typical LEO thinking, where the simplest solution is almost always the correct one: someone broke into DigiNotar and issued themselves certs. But I’m also sympathetic to the idea that governments require access, even if the common person thinks they’re communicating securely.

zeltser’s tips on detecting web compromise

Lenny Zeltser goes over 8 tips for detecting website compromise. There are far too few security writers with enough technical chops to not stand out amongst a pack of geeks at a security conference, but Lenny is one of the better ones, so any criticism I have for his writing is done very lovingly. Snuggly-like, even. I figured I could make mention of this nice article and give my own reactions.

Lenny starts out with 3 overly vague best practices. Sort of like saying, “If you want to get into shape, you should run more,” which *sounds* easy. I think he treats these items like I do: you’d be remiss if you left them out, but you can’t really do them justice without getting much less vague about them.

1. Deploy a host-level intrusion detection and/or a file integrity monitoring utility on the servers. – The biggest problem with this (collective) item is how much activity it is going to generate for an analyst. And if you do tune it down to a quiet level, I’d argue that you’re not going to see what you want. But, as said already, it is a necessary evil in a security posture, whether you like it or not. At a bare minimum, you should know every time a file is changed or a new file appears in your web root (with exceptions for bad apps that need to write temp files and other crap to themselves – the bane of web root FIM…).
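The bare-minimum version of that web root FIM is just a baseline-and-compare loop. Here’s a minimal sketch of the idea; the function names and the notion of persisting the baseline yourself are my own illustration, not any particular FIM product:

```python
# Minimal file-integrity-monitoring sketch: hash every file under a web
# root, then compare a later snapshot against the saved baseline to spot
# new or changed files.
import hashlib
import os

def snapshot(root):
    """Return {relative_path: sha256_hex} for every file under root."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return state

def compare(baseline, current):
    """Return (new_files, changed_files) of current relative to baseline."""
    new = sorted(p for p in current if p not in baseline)
    changed = sorted(p for p in current if p in baseline and baseline[p] != current[p])
    return new, changed
```

Run the snapshot on a schedule, persist it somewhere the web server can’t write to, and alert on any diff – with an allowlist for those bad apps that insist on writing temp files into the web root.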

2. Pay attention to network traffic anomalies in activities originating from the Internet as well as in Internet-bound connections. – This part, “Internet-bound connections,” really needs to be implemented as soon as possible, before an organization has so much going on that you can’t ever close it down without preparing everyone for the inevitable breaking of things no one knew were needed. Watching traffic coming in? Well, that’s not so easy, and you’ll probably just end up looking for stupidly large amounts of traffic (which may be normal if you service a large client sitting behind 2 proxy IPs) or the most obvious Slapper/Slammer IDS alerts. But you absolutely want to know when you just had 500 MB of traffic exfiltrate from your web/database server to some destination in Ukraine, and it contained a tripwire entry from an otherwise unused database table. I would drop the buzzword “app firewall” in this item as well. You should also know what is normal coming out of your web farm (DMZ), and anything hitting strange deny rules on the network should be checked into.
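The egress half of that is very automatable once you have flow records. A toy sketch of the “500 MB left my DMZ” check, where the record layout, threshold, and allowlist are all invented examples you’d tune to your own environment:

```python
# Toy egress check: given (src, dst, bytes_out) flow records, flag any
# host sending more than a threshold to destinations outside an allowlist.
# The allowlist entry and threshold below are hypothetical examples.
from collections import defaultdict

ALLOWED_DSTS = {"10.0.0.5"}        # e.g. a known backup target
THRESHOLD = 100 * 1024 * 1024      # 100 MB per reporting window

def flag_exfil(flows):
    """flows: iterable of (src_ip, dst_ip, bytes_out) tuples.
    Return {(src, dst): total_bytes} for pairs exceeding the threshold."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        if dst not in ALLOWED_DSTS:
            totals[(src, dst)] += nbytes
    return {pair: b for pair, b in totals.items() if b > THRESHOLD}
```

The hard part isn’t the code; it’s knowing what “normal” outbound volume looks like for your web farm so the threshold means something.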

3. Centrally collect and examine security logs from systems, network devices and applications. – Collect: yes. Examine: yes, with managed expectations. I really want to say yes, but having done some of this, 99% of the examined stuff that bubbles up to the top is still not valuable. There’s a reason I think gut feelings from admins catch lots of incidents and strangeness on a system/network, and it’s not because they show up clearly in logs. Especially with application logs, if you want them to be trim and tidy, you’re looking at a custom solution, which includes custom man-hours, overhead, and future support resources.

Side note on custom app logs: if you can, log/alert any time a security mechanism routine gets used, for instance when someone attempts to put a ‘ into a search field (WAF territory).
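As a sketch of that side note, here’s roughly what “alert when a security routine trips” looks like in application code. The logger name and character pattern are illustrative, not from any particular WAF:

```python
# Sketch: log whenever an input-sanitizing routine trips, e.g. a quote
# character in a search field. Pattern is deliberately naive and will
# need tuning for any real application.
import logging
import re

log = logging.getLogger("appsec")
SUSPICIOUS = re.compile(r"['\"<>;]")   # naive injection-ish characters

def check_search_input(value, client_ip):
    """Return True (and log a warning) if the input trips the filter."""
    if SUSPICIOUS.search(value):
        log.warning("possible injection probe from %s: %r", client_ip, value)
        return True
    return False
```

The point is less the filter itself than the log line: a timestamped record of who is poking at your forms, which your central log collection (item 3) can then correlate.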

4. Use local tools to scan the web server’s contents for risky content. – You certainly should do this as much as you can. You can scan for even deeper things, like any new file at all (depending on the level of activity normally in your web root) or files created by/owned by processes you don’t expect to be writing files.
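A local content scan can be as simple as grepping the web root for patterns commonly seen in webshells. A minimal sketch; the patterns are illustrative examples, will false-positive on legitimate code, and should be tuned to your own code base:

```python
# Sketch of a local "risky contents" scan: search the web root for byte
# patterns often seen in obfuscated PHP webshells. Patterns are
# illustrative only.
import os
import re

RISKY = re.compile(rb"eval\s*\(|base64_decode\s*\(|gzinflate\s*\(")

def scan_webroot(root):
    """Return relative paths of files whose contents match a risky pattern."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                if RISKY.search(f.read()):
                    hits.append(os.path.relpath(path, root))
    return sorted(hits)
</n```

This complements the FIM approach in item 1: FIM tells you a file changed, a content scan tells you a file looks nasty even if it’s been sitting there since before your baseline.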

5. Pay attention to the web server’s configuration to identify directives that adversely affect the site’s visitors. – This can be a bit easier in Apache or even IIS 7.0+, but some web servers like IIS 6 are hideous for watching configurations. Still, definitely keep this in mind. Thankfully, on such servers attackers face extra hurdles beyond just writing a file to the web root.

6. Use remote scanners to identify the presence of malicious code on your website. – I agree with the spirit of this, but I think internal scans need to be done as well. If a bad file is present but not discoverable from an existing link on your site or an easily-guessed name, then an external scan will totally miss it.

I should take a moment to stress that many of these items include signatures or looking for known badness, but none of that should replace actually looking, at some point, at every page you have for things that are clearly bad to a human, such as a defacement calling card or something.

7. Keep an eye on blacklists of known malicious or compromised hosts in case your website appears there. – It’s hard to say this shouldn’t be done, but it certainly offers little return on the investment of time. If you *do* happen to show up before any other alarms sound, then clearly it is nice.

8. Pay attention to reports submitted to you by your users and visitors. – I’d personally overlook this item 9 times out of 10, but it is really oh so necessary.

If I wanted to add an item or two:

9. Scorched earth. – While not a detection mechanism per se, you do run certain risks if your web code sits out on the Internet for a long period of time and grows stale. You should refresh from a known good source as often as you think you need to. This can include server configs and the like. (For instance, I roll out web root files every x minutes on my web servers at work, and I regularly wipe out web server configs and rebuild them via automation.) Don’t be that company that had a backdoor gaping open to the world for 2 years, when a simple code refresh would have closed the hole. A diff comparison mechanism may satisfy the ‘detection’ criterion of this list.
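That diff comparison mechanism can piggyback on the same known-good source you’d refresh from. A sketch using the standard library; the two directory roles are hypothetical examples of a deployment layout:

```python
# Sketch of the "scorched earth" diff: compare the deployed web root
# against a known-good source tree and report drift. Restoring is then
# just a copy from the source. Note filecmp uses shallow (stat-based)
# comparison by default, falling back to content compare when sizes differ.
import filecmp
import os

def find_drift(good, deployed):
    """Return (extra_files_in_deployed, differing_files) as relative paths."""
    extra, differing = [], []

    def walk(cmp, prefix=""):
        extra.extend(os.path.join(prefix, n) for n in cmp.right_only)
        differing.extend(os.path.join(prefix, n) for n in cmp.diff_files)
        for name, sub in cmp.subdirs.items():
            walk(sub, os.path.join(prefix, name))

    walk(filecmp.dircmp(good, deployed))
    return sorted(extra), sorted(differing)
```

Anything in `extra` is a file an attacker (or a sloppy deploy) dropped; anything in `differing` is a file that drifted from the known-good copy. Either one is worth an alert before the automated wipe-and-redeploy.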

excellent diginotar incident summary over at isc

Swa Frantzen (ISC) has a great discussion of recent DigiNotar drama going on. I do take minor exception to this statement:

I for one would love to know who that external auditor was that missed defaced pages on a CA’s portal, that missed at least one issued fraudulent certificate to an entity that’s not a customer, and what other CAs and/or RAs they audit as those would all loose my trust to some varying degree. This is not intended to publicly humiliate the auditor, but much more a matter of getting confidence back into the system. So a compromise that an unnamed auditor working for well known audit company X is now not an auditor anymore due to this incident is maybe a good start.

I totally understand this sentiment, and actually do agree with it. But we do have to be careful that we don’t set every single security auditor/expert up for failure, where one mistake causes the hammer to drop. (Speaking of elephants in rooms, the seeking or assumption of perfection is a ‘subtle’ one…)

Granted, repeatedly missing defaced pages hits the facepalm category. But I think this kind of oversight (from tripwires on attacks to page inventory reviews to edit/ownership times to web app sec checks, etc.) can happen to literally any organization that isn’t rigorous in its testing. It still comes down to knowing what is valuable in the eyes of a threat, and being extra careful around those processes (i.e. issuing a trusted certificate!).

Sitting back and pondering this scenario while nursing some scotch illustrates all sorts of things that are wrong with the security mindset in our world, ya know? Maybe “wrong” is a bad word for it; rather, these are the challenges we face and will eternally face, as a function of reality.

some general thoughts about blogging

McKeay wrote a great post about blogging this past weekend, and I think any security blogger should check it out. I really like his subpoints about blogging and working and balancing both:

I’ve learned a number of lessons about blogging the hard way. I’ve learned that no matter what I think I’m writing, what’s important is how other people are reading it… I’ve realized that people are reading and judging what I write, for good and for ill. And when I write something people read, it can get back to my employer.

I also like this:

More often than not, my employers have maintained an air of benevolent ignorance towards my blog, but every so often I’ve gotten the “we’ve read your blog and are not happy” conversation. Not often, but it has happened and it’s never comfortable talk. I’ve actually told at least one manager that my blog and podcast are more important to me than my job.

For me certainly, blogging is a personal thing, a way to organize my own thoughts, record something for the future, or vent a little bit. It’s also a way to dive deeper into what would always be a hobby for me, even if not a job. Even if I didn’t have a single reader my blogging habits wouldn’t change a bit.

Anyway, here are some points of my own that I try to follow.

1. Separate work from personal if you need to. This has become a big deal in the past 5 years, as work and play time blend together, largely because something you “say” (digitally) during your personal time can now easily persist for years for people at work to discover. Things you could say with buddies or at a bar no longer just stay with buddies or at the bar at a single point in time. Therefore, with blogging especially, I try to keep work separate. I don’t hide my identity on here, but likewise I don’t advertise my blog to work colleagues (they can easily find it on their own if they want) and I don’t mention my employer anywhere on here. I also leave deep personal things aside, though the right people, reading some of my incidents/anecdotes, would recognize themselves in them; I try to make sure those are generic enough and have enough of a point to not be uncomfortable. Besides, if I piss someone off, I hope they take my own viewpoint and just move on with life. Being able to agree to disagree is a big deal; a very useful skill. I like dark grey cars and don’t like white cars. You might not agree. And it would be silly to get pissed about that. Same goes for what I post on my blog or elsewhere on the Internet under my screenname.

I admit, my hard divide between work and personal is slowly going away, in part due to my next point, but also partly because merging security work and play is a career goal.

2. Don’t present false faces. I don’t like when people “front,” or present themselves in a way that isn’t in line with who they really are. Life is way too short and precious to not be yourself in anything you do, and if being yourself gets in the way, make changes to be someone better. In that regard, I don’t typically pick my words carefully on my blog; if I have an opinion, I’ll be out with it. (Though it does help that I’m an easy-going kind of guy anyway…and this is also easy to say for someone who thinks of himself as a very decent guy who is sympathetic to objectivist beliefs…)

3. It’s easy to apologize or admit to being wrong. I don’t mean this to sound like a copout for bad behavior, but it is easy to apologize for or admit to being wrong. I find it’s more important to put your opinions out there and be contritely wrong, than to bottle everything up and stew. And this is a tough thing for an INFP to say! (And it’s something my risk-averse nature will always fight with me about.) Granted, that doesn’t mean you can be an asshole and then be contrite about it and things are fine…be reasonable!

4. Remember the important things in security: integrity and privacy. This also applies to IT work in general. Typically we are in positions to know very deep secrets and have access (or get access) to very sensitive things. The same principles that prevent me from perusing my CEO’s mailbox are the ones that dictate what I divulge on a blog, or anywhere on the Internet. Hopefully most people in white hat security are at least aware of these principles in every facet of their lives.

your ca is now untrusted, and hacker calling cards

In DR/BCP, we plan for natural events beyond our control all the time. But what about cyber events that are beyond our control? For instance, what if a certificate authority makes a high-enough-profile mistake in issuing a fraudulent certificate, which then causes browsers to automatically update their software (and your users) to no longer trust any certs issued by that CA? Oh, and what if you use that CA for your shit? A situation beyond your control just gut-punched you.

For more information on the DigiNotar incident(s), F-Secure has a great post about it. Pretty lame to have your pants yanked down, then find out they’ve been yanked down several times in the past, and even though you told people you pulled them back up, you actually didn’t, and still had them down. GG for hacker calling cards. 🙂

a roleplay exercise based on rsa example

More information about the RSA hack has been uncovered. In the article, I especially liked this:

The email attack is not particularly complex, F-Secure says. “In fact, it’s very simple. However, the exploit inside Excel was a zero-day at the time, and RSA could not have protected against it by patching their systems.”

This should be a classic scenario for role-playing in any security operation. The first question from any manager: “What do we do to prevent, detect, or mitigate this?”

aaron barr, defcon, and anonymity

Really excellent article on ThreatPost from Aaron Barr: “Five Questions About Aaron Barr’s DEFCON (by Aaron Barr).” I must say, it is very well-written and he’s definitely got a brain in his head. It’s nice to see him in and amongst the sort of people that attend Defcon (not that we’re that much different these days than any other group), talking to and learning more about the greyish side of Anonymous and security and people in general, rather than just Washington boys’ clubs. His tentative behavior at Defcon is a bit amusing.

As many commenters pick up on, I don’t necessarily agree with his views in question 4 about anonymity, but I think he does a great job of illustrating the two sides of the problem: freedom vs criminal intent. While I may disagree, that doesn’t mean I have a better answer or argument to spit out. I think he and I would simply differ on our acceptable middle ground; where he’d prefer less anonymity, I’d prefer more, despite wide agreement on the discussion points.

I like some points in #5 as well, as I really don’t think it is possible to have a better Anonymous. Wouldn’t that be like asking for a better 4chan? The very concept steals away what they are, which is unfortunate. It is quite possible that Anonymous is a great idea, but one actually corrupted by that very anonymity and decentralized leadership. On the other hand, I do think we need the sort of greyish societal function that Anonymous fills. The function is important, even if the group itself fades into childishness. It’s kind of like making a statement through graffiti, but eventually losing sight of the point and instead just throwing graffiti everywhere, no matter how dangerous (stop signs?) or silly, just because you can.

Then again, I wonder how many activist groups like this ever *don’t* slip down that slope? It’s probably more pronounced when there’s less personal accountability… Still, this does happen with protests where sheer numbers help promote anonymity, or masks/hoods, or the like.

The one item I thought Barr would bring up in point 5, but doesn’t, was how the efforts of Anonymous to poke at poor security may in fact give fuel to world/national leaders to reduce internet anonymity. Sort of like a child protesting his being grounded…and ending up grounded for longer, with even worse punishments.

visa slide deck on logging and incident detection

Visa has posted a slide deck, Identifying and Detecting Security Breaches. Sounds fun! If you’ve been around security for a while, nothing in this deck will be new, but it’s nice and short to breeze through for ideas if something is missing in your enterprise security posture. Every bullet point also makes for a decent item to review, or to ask your team (if you have one) to describe how it is handled. (I do believe in role-playing!)

Of course, the danger in a slide deck like this is how deceptively easy it makes all of this sound! 🙂

general insights, security context, and learning from mistakes

Two general lessons in infosecurity came across in articles via infosecnews today. These will sound familiar, since I’m sure I mention them often, but I’m feeling particularly introspective this week (usually this happens in the autumn; I’m a little early this year) and getting back to simpler basics in life and thought for a bit.

Federal Air Marshal Service Blackberry enterprise servers are behind on patches. First, welcome to the real world, and good job raising the issue of missing patches. Second, how big of a deal is this? For instance, are they BES patches, or Windows patches on a system that can’t be reached via vulnerable ports (or the monthly critical IE patches)? In one case I care; in the other, it’s less of a problem. This illustrates how contextual so much of infosecurity is, and how easily non-technical (or technical yet misguided) people can warp efforts and perceptions. This is why checklists and scores can be a hindrance.

Hacked cybersecurity firm HBGary storms back after ridicule fades. This is a neat story, and I’m not entirely surprised by the results, considering the drama occurred in a separate sister company. But it does illustrate that we learn from mistakes, and our security will improve after insecurity incidents. At least, we hope so. I think this is still hard in an institutionalized large enterprise, though (i.e. how much will Sony truly improve versus an HBGary?). Of course, there are many lessons here: if you sell security, make sure you practice what you preach; know your threats even as they change; know what security incidents may impact your company and how they will be felt; and so on.

this is why the dumb ones get caught

In a new bit of detail that I hadn’t read previously, Dave Lewis posted about the recent IT admin “hacking” incident that occurred via free wifi at a McDonald’s: “An information-technology administrator has pleaded guilty to crippling his former employer’s network after FBI agents traced the attack to the Wi-Fi network at a McDonald’s restaurant in Georgia. The administrator was caught after he used his credit card to make a $5 purchase at the restaurant about five minutes before the hacks occurred.” Yeah, brilliant.

So, what should this guy have done? I have ideas, and I’ll assume we’ll stick to a McDonald’s.

location
– don’t go to any store you’ve been to before or will ever go to again.
– don’t do this in your own city; go to some other large city; day trip!
– legally park blocks away from the McDonald’s
– or park districts away and take public transportation (paid for in cash)
– do this at normal, busy hours and especially if you see other wifi users present
– en route, don’t speed, don’t do anything to get your location logged
– don’t go through tollbooths (if possible) and try to avoid cameras
– if you can discreetly do it, maybe rent a car

equipment
– use completely generic laptop and gear; nothing you can’t part with
– don’t name your computer anything that reflects you
– change the mac address (just because you can)
– don’t install customized stuff on the laptop; reduce the amount you may leak on the wire
– hopefully it is cool but sunny so you can go with a hat, sunglasses, popped collar…
– truly lose or “lose” your computer after (wipe it, sticker it up, etc)
– leave your cell phone at home (or turned off)
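For the MAC address item above, here’s what the "just because you can" actually looks like. A sketch that generates a random locally-administered MAC; applying it is left as a comment because it needs root, and the `wlan0` interface name is a hypothetical example:

```python
# Sketch: generate a random unicast, locally-administered MAC address.
# First octet 0x02 sets the locally-administered bit and clears the
# multicast bit, so it won't collide with any vendor-assigned OUI.
import random

def random_mac():
    """Return a random locally-administered MAC like '02:ab:cd:ef:01:23'."""
    octets = [0x02] + [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{o:02x}" for o in octets)

# Applying it on Linux (as root, interface name is an example):
# subprocess.run(["ip", "link", "set", "dev", "wlan0", "address", random_mac()])
```

Tools like macchanger do the same thing with less typing; the point is just that the hardware address your laptop broadcasts is trivially spoofable and shouldn’t tie back to you.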

you
– don’t draw any attention to yourself; be invisible
– don’t wear your favorite clothes; be generic or even disposable
– buy a small meal or drink to go (no trays)
– for the love of god, pay in cash; pay for everything en route in cash (no ATM stops!)
– take your trash with you and dispose later
– don’t hide in a corner, but don’t let cameras or employees see your screen without you knowing it

– don’t browse the internet or check your email; do your business and leave
– remove jewelry or cover any tattoos or recognizable marks/traits you have

I’m sure there would be more ideas if I spent more time on it, and I normally don’t think about how to stay off the grid like this, but this is a decent start for being mischievous at open wifi.