mogull’s guiding security principles

Rich Mogull has been around in security for 20 years, and he posts about his guiding security principles. I think I agree with them all (to varying degrees), but there are a couple I’d like to build on.

1. “Don’t expect human behavior to change. Ever.” – Fine, you *can* change human behavior to an extent, but we in security can’t *expect* it to change. Otherwise it just becomes an excuse for insecurity and we start taking steps away from reality. We have to work with human behavior or find ways to influence it beyond flatly telling someone, “No.” Positive and negative conditioning should be general vocabulary terms for security geeks, not to mention the other fields of influence (economics, psychology, politics, etc.). We social engineer on a weekly basis, or at least should.

2. “…keep it simple and pragmatic.” – Yeah, we all get sick of the KISS principle after even one semester of technical coursework. But it absolutely must be a guiding principle in what we do, not just in security, but IT in general. Keep it simple. Keep it simple. Keep it simple. The more complex something is, the larger the total cost of ownership, the worse the security will be, and the more annoying it will be to anyone involved. Keep it simple. This becomes easier when you agree to Rich’s other principles. That way you stop yourself from trying to block every last % of vulnerabilities that have a minuscule chance of occurring or account for every possible action a human employee may take. Keep it simple. There is a reason this permeates so many personal philosophies in every facet of business and life.

As one of my favorite quotes goes: “Simplify, simplify.” –Thoreau

google chrome and noscript

Quick link to a short post by Giorgio Maone on why Chrome does not have NoScript. This sparks two thoughts of mine, both of which appear in the comments of that post.

First, even a company as large and purposeful as Google, building and releasing a very important (to them) piece of software like Chrome, is just building it first and securing it later. It isn’t about building it up secure from the start. This is part of human behavior (imo) and as Rich Mogull recently mentioned (in a post worthy of separate mention!), don’t expect human behavior to change. (I understand this can be an argued topic, particularly on the part where I say building it securely first is not human behavior; maybe it’s just the way we’re taught that forms this bad habit…you learn how to assign a variable before you learn how to assign a variable securely.)

Second, keep in mind that a majority of the things NoScript disables in daily browsing are web ads. Yes, the ads that Google lives by. They have simply no interest in allowing them to be blocked. And even if they figure out some proprietary way to whitelist their own ads (possibly not legal…), we all know that plenty of malware rides in through those ads or the holes to enable those ads.

value in fixing symptoms, but tackle the problems, too

I was a little excited to see the headline, “Good Guys Bring Down the Mega-D Botnet” over at PCWorld, as the article promised that researchers had gone on the offensive to bring down a botnet. To go on the offensive against a botnet, to me, means targeting the actual perpetrators or actually taking over the botnet and disassembling it.

Ok, well, not quite. Thank you, editors, for making strange headlines and taglines.

Turns out the researchers did perform some excellent hard work in blackholing the C&C servers for this particular botnet, at least enough to reduce it to a fraction of its power, by contacting registrars, server hosts, and even taking over some of the unused domains the bots would check.

But they’ve done nothing except put their fingers into holes in a leaky dam (or maybe stick a hose in every hole in the dam and siphon the water back up over the dam and behind it). Or put a fairly thick blanket over a raging bull’s face. Or clean up the spills in your store while some stranger somewhere in the store runs amok dropping bottles everywhere. The botnet is still there. The attackers are still there. The bots are still there. The vulnerabilities are still there.

I would rather have seen the researchers actually usurp control over the botnet by using one of those domains they snatched up. I know that’s a grey area of defense/attack research, but at least I would personally find more value in it. Or maybe not even take it over, but masquerade as a C&C server and see if you can trace back the activities. Then hopefully once you have control of the botnet, issue a kill order on the malware if that feature was coded in (as long as it does not do something destructive on the host like format the system) or issue an update that permanently has it check the loopback address for commands.
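The masquerade-as-C&C idea above is essentially what researchers now call sinkholing. As a minimal sketch (the beacon "protocol" here is entirely made up for illustration; a real botnet's C&C protocol would have to be reverse engineered first), a sinkhole just accepts bot check-ins on a domain you've snatched, logs who connected for trace-back, and hands out a harmless "do nothing" reply:

```python
import socket
import threading

# Toy sinkhole: a fake C&C listener that logs bot check-ins and
# replies with a benign no-op command. The wire format is hypothetical.

checkins = []  # (source_ip, beacon_bytes) pairs for later analysis

def handle_bot(conn, addr):
    data = conn.recv(1024)            # whatever beacon the bot sends
    checkins.append((addr[0], data))  # record it for tracing activity back
    conn.sendall(b"NOOP\n")           # benign "do nothing" reply
    conn.close()

def run_sinkhole(host="127.0.0.1", port=0):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))  # port 0 = pick any free port
    srv.listen(5)
    def loop():
        while True:
            try:
                conn, addr = srv.accept()
            except OSError:   # socket closed, shut down
                break
            handle_bot(conn, addr)
    threading.Thread(target=loop, daemon=True).start()
    return srv  # caller reads srv.getsockname() and close()s it when done
```

The interesting (and legally grey) step is the one after this sketch ends: whether you just watch the check-ins, or start issuing real commands.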

There is value in this effort, but let’s not get ahead of ourselves. They didn’t “take down a botnet,” at least in the way I envision it, and they haven’t done a ton that will absolutely have a long-term effect; at least not without ongoing investment in time and money. Perhaps they will do this long enough to choke off this botnet, which is great, but what do you have left but to just do it again next year?

philosecurity on our google government

Sherri over at Philosecurity has done some legwork in posting an article about the move many state governments are making to Google. This article is a good, thought-provoking one in its own right, but the comments make this really a good read.

I’m not quiet about my mistrust of Google. But I’m also not necessarily shy about my use of Gmail or Google Reader. My biggest issue is that they’re a public company that has to, first and foremost, answer to the money (i.e. their shareholders). And they make a lot of their money via data mining, tracking, controlling, and/or logging what you search for, see, and do. Google has not necessarily been like the other third-party providers and contractors, whose money comes from exactly what they’re offering in terms of IT support or government service.

Google is basically the 2.0 version of AOL; they want a walled garden. But where AOL tried to bring in users first and build a separate garden, Google is focusing on bringing in everything users already use anyway, data and all, and taking over the existing garden.

Strangely, I would feel slightly better if Google managed things on equipment and in locations that the government actually owned, rather than basically offering it all as a service of some sort. Maybe it has to do with a company seemingly bigger and more important than even a government, hypothetically?

It is also interesting that we’ll (as in I, probably) trust RIM/Blackberry, homed in Canada, but not Google, homed in the US. That might say a lot about image, perceived use (data mining), or actual scope of use (just text/mail/voice communication).

Still, it is hard to tell state governments a flat-out, “No,” on a situation like this, especially in the face of falling budgets and rising debts. That sort of situation is ripe for someone to swoop in with low bids…for whatever monetary reason they may have, and I can guarantee it isn’t altruistic, philanthropic, or patriotic. It’s economic, in Google’s favor.

One thing I don’t like about data being housed in strange locations, is our human tendency to be nosy. If Britney Spears is in a hospital, we have plenty of people who will nose through her files. If someone paid you to nose through them, the incentive becomes very real for internal espionage. This won’t be new with Google, as every government contractor should feel this issue, but it would certainly feel new in perception.

Commenters in that article make great points on all sides of the d-20 (amongst those that are simply very myopic). I actually find it very hard to make solid points on either side of the argument, hence a lot of feeling and perception in my above assertions.

ghost services using single packet authorization

I knew when I finally got around to reading this post, it would be cool. Michael Rash posted last month about a fun way to use single packet authorization to create what he calls “ghost services.” Basically, you send an SPA packet to the target server on a port that is already in use, such as port 80. The firewall then sends just you over to the service you really want, such as SSH, but everyone else still sees the regular port 80.
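The core of SPA is that one authenticated packet, which the gateway silently validates before opening anything up. A toy sketch of just that piece (key handling, encryption, and the actual firewall-rule insertion that fwknop does are all simplified away here; the packet layout is made up for illustration):

```python
import hashlib
import hmac
import os
import struct
import time

# Toy single packet authorization: the client sends one HMAC-signed
# datagram naming the port it wants; the gateway verifies it and (in a
# real deployment) would insert a short-lived firewall rule redirecting
# that sender over to the hidden service.

SHARED_KEY = b"pre-shared secret between client and gateway"

def build_spa_packet(requested_port, key=SHARED_KEY):
    nonce = os.urandom(8)  # random nonce to defeat naive replay
    body = nonce + struct.pack("!IH", int(time.time()), requested_port)
    mac = hmac.new(key, body, hashlib.sha256).digest()
    return body + mac

def verify_spa_packet(packet, key=SHARED_KEY, max_age=60):
    body, mac = packet[:-32], packet[-32:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return None  # bad signature: drop silently, look like plain port 80
    ts, port = struct.unpack("!IH", body[8:])
    if abs(time.time() - ts) > max_age:
        return None  # stale packet: likely a replay
    return port  # gateway would now whitelist this sender for this port
```

Everyone else probing the server never sees a reply to a bad packet, which is what makes the hidden service a “ghost.”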

This can be useful when on a network that only allows certain ports outbound (such as 80/443/53). It can also be useful to thwart any future investigators who try to recreate your connection but only see the service everyone else sees. I’d find this less suspicious than an actual port 22 connection or a connection to a strange port that is no longer listening, to be honest. Yes, there are plenty of other ways to skin this cat, but I really dig creative thinking like this.

do you need a software security group?

Gary McGraw has written an article explaining why you need a software security group if you want to have a software security initiative. Saw this fly past on Twitter via Jeremiah Grossman.

This is a great scenario that I’m sure happens everywhere (yes, everywhere):

…if you go to the development organization and ask about software security they will immediately refer you to the network security people. “We have a department for that security stuff,” they say while ushering you out the door. But when you make your way to the network security department and ask them about software security, they say something like, “Yeah, those developers really should produce code that doesn’t suck. Those guys ruin our perfectly configured network with their broken applications.”

He also throws this gem in:

To make perfectly clear that this is a management issue, ask yourself who gets fired when your firm experiences a software security problem? Who gets rewarded when your firm does a good job with software security? If the answer is “nobody,” then you have a real problem.

The only downside to this set of data and its underlying conclusion is exposed in the last few paragraphs of the article: the companies McGraw deals with are probably pretty large. In other words, they have the opportunity for real security groups. I’d wager a vast majority of firms don’t have that freedom (or budget), and have to make do with, at best, part-time security. Sadly, this either comes in the form of small, annual security tests after the real work is done, or some unlucky bloke’s security duties shared with his day-to-day tasks and projects. And we all know which half of those duties he will be pressured to do first. (Let alone the arguably expert-level amount of knowledge one needs…)

Even deeper than this being a management issue, this is an issue of how important security is. If it is important, then maybe management will do it properly. If it is not important, then you better just hope you are a big enough org to create a dedicated group for it.

thoughts on the verizon 2009 dbir supplement

A Verizon 2009 DBIR supplement has been released and I’ve finally looked it over. If you don’t have much time, just browse the intro, and then read the Case Example sections (pg 7-21) for each attack type and the Conclusion section (pg 22). While there are still questions on specifics (aren’t there always?), this is far more information than we had before!

A few things pop to mind as I read down the case examples:

- Egress controls.
- Policy and administrative access.
- Noticing strange behavior.
- Termination policy (that guy must have been laughing at his former employer’s IT staff every time he connected!).
- Endpoint integrity (and policy).
- Default passwords (ouch; vendors, install your shit better, too!).
- Endpoint security.
- SQL injection, network segmentation, egress flow monitoring.
- SQL injection and egress monitoring (impressed that this org even bothered to parse logs for SQLi, although obviously not an always-on practice with months-old findings).
- Vendor/provider security and notification (#9 is really interesting, and a very hard one to combat/monitor).
- Proper authentication and help desk training (another team in IT that is evaluated by customer service [i.e. doing what the customer asks] as opposed to doing the right thing; often not paid enough to care, to be honest).
- Web app flaw.
- Physical theft and storage of card information (does anyone do regular rounds just to make sure nothing is stolen or added to things like this?).
- Log/authentication attempt monitoring and web app flaws (another ouch, but another one that so few really do).
- Endpoint security (RAM scraping…wow!).
- The phishing attack is interesting, as it involves training and pretty good log watching.

I really like the Conclusion section, especially the bulleted lessons it stresses.

In looking at these specific cases, it really pops out that we have a mix of “easy” and “advanced” security concepts here. Egress and deeper log analysis I consider more advanced topics because they take time and staff to really properly tackle. Other attacks should be easy, but are hard to get people to fix, like SQL injection or application flaws. To most business units, if it appears to work, then the project and capitalization are over. Evaluating, fixing, and testing something that doesn’t immediately appear to affect the bottom line gets pushed off.

on training your system administrators

John Strand has a great, quick blog post up on tapping and training your system administrators as a security asset.

There are actually quite a few similarities and strengths that system (and network) administrators have that parallel or complement security professionals. Disclaimer: I *am* biased since I am a sys/network administrator as my major function in my day job. (And yes, looking to get deeper into security in a more full-time role.) Here are a few of my notes I’d throw down.

1. I’ve said it many times before, but I believe if you truly want the real story about security in an organization, you get as low down into the trenches as you can get. This usually means your IT staff. And often means your desktop support staff, admins, and even app coders; but most probably admins. (I knock the desktop guys [I used to and still pretend to be one] only because they tend to be evaluated by their customer service. They don’t get rewarded for properly configuring a host-based firewall, but they do get rewarded for disabling it and allowing Joe Blow to get back to work.) Incidentally, if you want to know the health of a company, I’d also say your admins are a good source; they touch and know a lot!

2. Administrators are used to being at the end of the shaft, even more so than desktop specialists or coders. More often than not, the admins are the glue keeping all the disparate parts working together and covering for other IT sections that aren’t really up to snuff. They also tend to absorb things like poor applications or bad business process decisions that impact IT. They’re likely, along with desktop dudes, to see firsthand the people who violate policy. Bottom lines usually end with the administrators, and they’re used to getting it from management and from other IT persons. If anyone in the company is used to putting on the brakes, talking policy and standards, and getting people to line up with them, it is your admins.

3. If you want to roll out a security initiative or project of any type, more than likely you’ll need an admin to give you access, gather logs, set up servers, configure the network for your visibility, provide documentation and diagrams, or be next to you during an incident response. Basically, get to know them and get on their side (and they on yours). They’re also the ones who will make or break your policies, even more so than desktop dudes, especially if they need to understand and follow the steps on, say, how to harden a server. (Many desktop workers aspire to be systems workers, so there is also a tendency to look to the admins for leadership, formally or informally; compare paygrades if you doubt this.)

4. They also want to make sure things are done right. Admins are usually time-sensitive and risk-averse, and they don’t want things to break, they don’t want to be blamed when things break that they didn’t cause, and they want to troubleshoot intelligently. Even the worst admins tend to have the beginnings of these habits. It’s just a result of a ball rolling downhill.

5. That “A” in CIA is a shared role with the admins. It is also their duty to monitor and maintain availability for the masses.

6. Everyone learns about least privilege and separation of duties, but out here in the real world, business IT is run by admins with godlike access. That’s just how it is. If you think otherwise, you’re not thinking of all the ways they can end up pwning you. This means they really absolutely need to have a mind for security if you expect to get anywhere. If the admins don’t do it, then you’re stuck with a top-down approach which doesn’t always work. Even if bottom-up approaches get mired in budget constraints and buy-in, you can still do a lot to combat insecurity by having security-minded admins. Think about code. To secure code, you need to bake it into the creation with skilled coders and low-level policy. Same thing with systems and admins.

7. It is my really quick, knee-jerk opinion that every real IT security pro needs some practical, hands-on, systems or network or desktop administrator experience. This helps immensely on various levels. This isn’t always true, which is why I always keep this opinion short!

those mysterious undetected data breaches

Wired continues reporting on the Albert “I hacked Hannaford/TJX/more” Gonzalez saga. Lines like this are what piss me off:

By identifying intrusions that “had not yet been detected,” his lawyer wrote, Gonzalez helped the companies institute protective measures to secure their data and prevent future breaches.

If true, I’m waiting patiently for those companies to disclose their breaches. And a big thumbs up to those firms living in ignorance.

Also, I would hope that someday even more information comes out on how these recent huge breaches occurred. It really ties our hands to hear that a huge breach occurred while the details that would help everyone else avoid repeating the same mistakes are pushed under the rug and sealed.

web hacking lawsuit against minnesota public radio

Read an interesting story this morning about a lawsuit from a Texas company accusing a Minnesota Public Radio reporter of hacking into their web system. Read the full article to get a good idea on what all went down, especially the last 3 paragraphs which I feel really get to the heart of this somewhat complex issue. Here’s the last quote from the CEO of the Texas company who had weaknesses in their website:

“… in our contract, we had 60 days to fix any problem. But there was still an unauthorized intrusion, and that was wrong.”

If you ask me, you had completely dumb weaknesses in your site. Just because you offer 60 days to fix something doesn’t mean you get a free pass on even the most asinine security issues. They fucked up.

I’m not a judge, but my kneejerk reaction to this lawsuit would be to have the Texas company thank the reporter for reporting the weaknesses in their web presence; a service tendered for free. The reporter should learn that this isn’t such an easy thing, to just twiddle with a website and call it good. She was stepping into murky waters and should exercise more caution in the future, but at least it does not sound like she had malicious or self-serving intent. And the general public and every employee and customer of that company should thank the reporter for exposing an issue that likely would not have been fixed otherwise.

I would hope this doesn’t even progress past the prosecutor.

my mini-rant on diggnation

I’ve been a very big fan of and loyal watcher of Diggnation since about episode 4 (I got into this from the broken). This is somewhat strange since I don’t use Digg and haven’t other than trying it out for a few days. Hard to believe Diggnation has been around since mid-2005. I adore the guys and their work, and I have caught myself thinking of them as friends due to the intimate nature of their podcast. I’ve even had my one email I sent in read during an episode. But in the past year I’ve really gotten less and less loyal and find that I enjoy far fewer shows per month. Where early years had gut-bustingly amazing shows every week, many of the shows from the past year are utterly forgettable, even 10 minutes after watching them. So here’s my mini-rant on why Diggnation has been disappointing lately. (Yes, I understand it has been 5 years and the guys likely aren’t as fresh as they used to be…so also take this as a wistful nostalgic rant rather than something I’m demanding and angry about.)

1. Not enough drinking. Alex and Kevin (and others) are at their most entertaining when they are drinking beer. I have not done any research (although I’m sure some other geek online has done so), but I’d be willing to bet you can absolutely judge the entertaining factor of the show based on what they announce they’re drinking. If it is beer and they’ve obviously had a couple already, thumbs up! If it is tea for no real good reason (like being sick), then the show is almost always bland. Also, whatever the hell happened to drinking interesting beers? Years ago they had interesting stuff every single week. Now, they have interesting stuff maybe once every 3 months. But it is still hard to tell someone to drink more. Their drinking less is obviously healthier. I’m just saying, I’d bet 95% of their best moments in 4 years of Diggnation have been alcohol-induced. They don’t need to be trashed, but they definitely kick it up a notch after a few brews.

2. Horrible preparation. This falls heavily on Kevin far more than Alex, but it is insulting and annoying when either of them is obviously seeing the article for the first time and trying to fake through it (and often failing). What a waste of time for the audience. Even if the story is interesting, chances are one of them is getting it wrong because they’ve not read the story in advance or had any thoughts on it yet. I don’t ask for a script or to not read word-for-word, but at least have a reason to bring the story up! In the early years, the guys always had reasons for the stories, absolutely without fail. This drove good thoughts on the stories, even if the discussion was short. Today, not so much. It is no surprise that the stories with the least time and least audience reaction are the ones they didn’t even know they were covering. Really, they often just phone the segments in these days, which is disappointing because I adore both of the guys and their work.

3. Fewer interesting tech stories. Your audience is tech. I don’t care if it is a robot that smoke-screens the Japanese Prime Minister or zombie dogs. That’s geek and tech! Read on for a reason why there may be less interesting tech stories in some episodes…

4. Taping more than one episode at a time. Ok. I don’t mind this *too* much, I mean, one of them has to travel every week to do the show! But seriously. Your audience is not dumb. We know when your stories are a week stale (and the “next best” stories from the previous week!). We know when you’ve just changed shirts. We know when you’re taping two episodes at once. That’s ok, but don’t freakin try to fake that it’s the next week. Today’s episode 233 is the worst offender in quite some time (Alex has the same t-shirt on…and purposely refers to his beer as the same as “last week.”). We’d truly understand if you were up front and taped two episodes at once. But don’t let that water down how much you drink or mean you have to take less interesting stories for that week, such as the “tier 2” stories. Someone should at the very least be saving up interesting stories on your off week! I just really hate feeling like I’m being had when recently it has been very obvious they’re double-taping or worse. Even just 2-4 hours a week can cover 2 weeks’ worth of stories.

5. Too much time for sponsors, or poorly placed sponsor-copy. I understand sponsors make the world turn, but everyone will understand if you breeze through the sponsor-copy and get on with being interesting. It *is* actually interesting when you have something to say about the sponsor (like some of the games done previously, or even the Zune), but we’re a savvy audience and we know when you’re being a bit fake about the enthusiasm and just adding more, and we know when you really believe what you’re saying and like/use the product. But still, don’t let sponsor-copy dominate or steal time from emails/convo. I’d really like to see 2 or 3 stories yet after the sponsors, rather than sponsors being the start of the homestretch. It always makes the last story seem trivial, quickly covered, and moved on from.

6. Less interesting emails. For guys who’ve admitted to having a full gmail email account in the past, we can’t seem to get very interesting emails read on the show anymore. I suspect they’re being picked out at the last minute. I don’t care if an intern does it, but get someone to spend an hour perusing emails and picking out something convo-worthy. Or a hot pic. 🙂 And spend some damn time on the topic rather than saying thanks and that’s it.

7. No Thanksgiving episode? I think this might illustrate that enthusiasm in *making* Diggnation has diminished. No one wanted to do something cute for the Thanksgiving episode anymore? Or at least a best-of reel for the year? Those have always been keep-worthy in the past. It saddened me to not see one this year.

In the end, it really just feels like they’re less interested in doing Diggnation than they used to be. At least they still have the enthusiasm to meet up and be friends and hang out with beers and good geek conversation. I’ve not stopped watching, nor do I want to stop watching. I just feel less enthused myself when every new episode comes out, and often think my continued watching is as much a habit as genuine interest anymore.

coming to terms with data loss prevention

I’ve been recently digging deeper into network DLP (part of a PCI initiative in an organization). I’ve long narrowed my eyes about the concept of the DLP space, but I feel a lot better about it lately. Let me briefly explain…keeping in mind I am discussing network DLP and not endpoint DLP.

My original thinking about DLP has always been about how many limitations it has. It is fun to make a list of all the ways you can circumvent the functions DLP provides. This always made me sigh in exasperation about DLP (data loss prevention!) as a security tool to prevent data loss. It’s not that I wanted DLP to be infallible, but I thought it was silly that even simple things could defeat it, like HTTPS/SSL or a password-protected zip file.

But now I believe DLP is not supposed to strictly be a security tool. DLP would be better labeled a Sensitive Business Process Identifier. What DLP really does is identify and alert on business processes sending XYZ data over channels they should or should not be using. Instead of stopping a malicious individual from exfiltrating information, DLP really wants to act like the bumpers in the gutters of a bowling lane: make sure valid business processes aren’t using poor channels to move data, and somewhat log/assure when data is moved properly. My old thinking involved malicious activity; my new thinking involves business-valid activity. That’s a big difference!
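For the PCI case, the flavor of pattern-matching a network DLP engine does can be sketched in a few lines: scan an outbound payload for digit runs that look like card numbers, then Luhn-check them to cut false positives. (This is a toy illustration, not any vendor's actual detection logic; real products add protocol decoding, data fingerprinting, and policy, and remain blind to encrypted or obfuscated channels, which is exactly the limitation above.)

```python
import re

# Toy DLP content check: find candidate card numbers in cleartext
# payloads and validate them with the Luhn checksum.

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits):
    """Standard Luhn check: double every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(payload):
    """Return Luhn-valid card-number-looking strings found in payload."""
    hits = []
    for m in CARD_RE.finditer(payload):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

Feed it a cleartext HTTP POST and it flags the card number; feed it the same data inside an SSL stream or a password-protected zip and it sees nothing, which is why I frame DLP around business-valid activity rather than malicious activity.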

Does this satisfy PCI checks? Technically, I guess so. But does it offer much assurance that data is not exfiltrating a network? No. Does DLP make a security geek feel good? Not when considered alone. When considered as an advanced item in a mature security posture, then perhaps it is merely ok.

A very valuable side benefit of DLP’s approach is to drive identifying where sensitive data resides and transits. This is almost worth the cost of DLP to many companies that have no idea where this stuff sits or moves.

trying to agree on risk, or at least fixing what you can

I wanted to just point out two interesting articles that go well together.

First, Rich over at Securosis talks about security possibilities and probabilities. The gist is that some security vulnerabilities are possible, but may not be very probable at all. This argument could delve into the practicality of security in regards to budget and time. Just because an attack is possible doesn’t mean you *have* to fix it if it is extreme, specific, and not likely to ever happen. (It is unfortunate that Rich used Mac malware as an example, as that is a pretty passionately inflaming topic…then again, maybe the original topic was Mac malware and it turned into this blog post…I don’t know, but it shouldn’t detract from the point above.) One other analogy I have heard was a government official taking a tour of a data center and remarking about how a spy could crawl over cage walls or under the raised floor. He was serious. Yes, it might be possible, but hopefully other factors mitigate it, depending on how much value you place on what is being protected.

Second, Chris at the Veracode blog talks about the wall that sometimes (often!) appears where developers want exploit proof rather than fixing presented possible bugs. Do you spend the time to prove an exploit and then fix it, or just fix it? What if it really is esoteric?

I guess my point in highlighting these two other posts is to illustrate the dance that security has to perform, between actually fixing issues and valuing those issues against other factors. Rich and Chris are both making risk decisions, but their audience may not agree with those valuations. Very rarely do we seem to be able to agree on such valuations in this industry. That’s not a knock; it’s simply a fact of life if you ask me.

This leads into one of my few arguments against an audit model like accounting has. Those work because accounting only has a finite number of ways to do things. IT has an infinite number of ways to solve its problems. Of course, that causes even more consternation and argument…

meeting security questions with more questions

I find it interesting that so many security questions are addressed by asking more questions. What level of logging should I have? Well, that depends, what are you protecting? Do you have staff to watch logs? Budget to buy something to watch logs? What do you expect from logs? And so on… One answer usually just doesn’t fit every situation.

I can actually bring this back to my old discussion on security religions. Some people believe in absolute solutions that are secure and cover every situation. Others believe in incremental security, where you may have to layer protections to cover all your bases, and maybe no single layer is all that secure in itself.

This takes on a new dimension when you talk about scope. Are you talking about security on a macroscopic scale (e.g. national, global, internetwide) or microscopic (e.g. any organization, a home office)? Scope can have even more implications such as budget, coverage, and so on, but macro vs micro is the best start.

I often engage in security discussions that can lead to heated argument if the concepts of security religion (of participants) and scope (of the discussion) are not addressed up front. Participants can become violently argumentative, when they’re simply talking about different things (global DNS security vs your SMB DNS presence). Hence, security discussions or questions, to me, almost always begin with more questions, questions designed to fit the scope and religion, while also answering other necessary questions that eventually lead down an informal risk valuation…

hacks moving beyond credit, accounts, identity?

I’m sure you’ve heard about the hacked climate emails. Think about it for a moment. This could be a very public incident that sparks new thinking on the value of hacking. There is value, and it’s not just in stealing money directly from bank accounts, but rather in more niche situations. Go to a climate conference, sniff the wireless networks, harvest and sell or release for personal gains.

Which also makes you wonder where WikiLeaks came across the 9/11 text messages also recently released… Were they stored somewhere that got hacked, or did someone pick them up while eavesdropping on cell transmissions?