experts opine on various security questions for networkworld

Via Rothman, I read a piece on NetworkWorld where several security experts are asked for their reactions to various subjects. I thought I would give my opinion on those opinions below.

1. There’s security in obscurity.
David Lacey: 7 – I think Lacey’s opinion is a practical one. There really is a measurable deterrent, and even preventive, effect from some level of obscurity. Lacey leaves open some argument on the definition of security, on whether obscurity actually stops anything over time, and on whether he meant that obscurity alone is enough.
Nick Selby: 4 – I understand the point, but it was made poorly. Selby essentially takes the side of saying obscurity offers no real security value over time.
Bruce Schneier: 8 – Schneier gets realistic and points out that, on some level or other, all security is based on some form of obscurity. Even my password is a form of “obscurity,” because it could be discovered by someone else. I sympathize with this viewpoint, as a start down the road Lacey goes. Schneier goes on to say that, despite this, security increases when you can minimize the reliance on obscurity.
Peter Johnson: 5 – Johnson echoes Selby that obscurity has no value over time.
John Pescatore: 8 – Pescatore hits my opinion square: obscurity certainly brings value, but security should not rely solely on obscurity.
Richard Stiennon: 2 – I think Stiennon is talking about security through ignorance/ostrich-hole.
Andrew Yeomans: 7 – Yeomans also sides with obscurity having no value over time. He brings up the great point that once obscurity is lost, it is game over.

Conclusion: I think most agree on three pieces to this. 1) Obscurity offers some value (or is part of the bedrock of security) early on, 2) Obscurity falls in value over time, and 3) decreased reliance on obscurity is better. I wonder if, really, the opinion is that obscurity adds value until it is broken…sort of a, “I’ll get away with it and it’s good, until I’m caught.”

2. Open source software is more secure than closed source.
Andrew Yeomans: 7 – Neither is more secure, but at least you can have some power in your own hands to review code or fix it with open source.
David Lacey: 6 – Neither is more secure.
Bruce Schneier: 6 – Schneier rides the fence by saying open source can potentially be more secure.
Peter Johnson: 2 – Unfortunately, I’m not sure where Johnson was going with this. You can see open source code, but you have to jump through more hoops when supporting it?
John Pescatore: 5 – Pescatore bounces around as well. Open source typically has less of an SDLC, but the openness may deter developers from including junk.

Conclusion: Open source is not magically more secure just because it is open source, but it certainly has potential. And at least with open source, we all can see what we’re getting and maybe even improve upon it. With closed source, we have to trust the creators.

3. Regulatory compliance is a good measure of security.
David Lacey: 8 – Honestly, I find it important that Lacey backs his opinion with his own experience. He might be correct that compliance could indicate a tendency to be more secure. And is that maybe the real value of compliance? Not to be ultimately secure, but to promote a culture that gets there?
Nick Selby: 6 – Not a bad response! But certainly not helpful. 🙂
Richard Stiennon: 7 – I think it is dubious that one can be extremely secure but not compliant; but the opposite is certainly true.
Bruce Schneier: 9 – Schneier attacks the regulations as opposed to the act of achieving compliance. If you’re compliant with a great regulation, then it is a good measure. Really, that’s not a bad approach!
Andrew Yeomans: 9 – I think Yeomans is saying that compliance helps raise the baseline, but it may misguide people who rely on it too much instead of on their own expertise; some security measures could get removed while others with no value get implemented.
Peter Johnson: 8 – Johnson goes after regulation quality as well, but also acknowledges that even regulations can be followed in varying ways, some poor.
John Pescatore: 8 – Pescatore basically says do your own security first, and then fit into compliance requirements after that. I think this is great…if you know what you’re doing.

Conclusion: The range of initial responses from “No” to “Yes” is interesting, but I think it all comes down to how well the regulations are made. Unfortunately, I have to side a bit with Johnson and Pescatore, who seem more inclined to still do their own security measures and treat compliance as a business afterthought. I mean, really, how specific and secure can we make regulations over an industry that has so many different people and ways of doing things and systems and homegrown solutions… Yeah. I think compliance will remain a way to raise the baseline, but for anyone with expertise, it will remain an afterthought.

4. There’s no way to measure security return on investment.
David Lacey: 9 – I really like Lacey’s response. He doesn’t really say you can or cannot get security ROI, but you can analyze your own history and make predictions based on that. It’s still not guaranteed, but at least you can gain some measures.
Bruce Schneier: 6 – Schneier takes the agnostic approach; nothing works now, but someday we might solve this. Not terribly helpful, but maybe realistic.
Andrew Yeomans: 7 – Yeomans makes a distinction I like to make when talking about business security approaches and even ROI: If your *business* is actually security, you’re different than the millions of other businesses. Yeomans then dances on the fringe of enabling business through security (itself very arguable for non-security businesses), so I’m not sure where he’s going there.
John Pescatore: 7 – My only problem with Pescatore’s argument is the difficulty of determining when a security issue is a business need and how to value that. I’d counter that very, very few security spends result in meeting a business need, at least in the cyber aspects.
Richard Stiennon: 4 – I just didn’t buy this. Probably me being dense.

Conclusion: I think we can say that if there is a way to measure security ROI, we don’t know it yet. I’d agree with Schneier’s agnostic approach. However, that’s not really an answer, so I’d side more with Lacey’s approach of analyzing history and trying to be consistent over time, while realizing this isn’t an exact science; it’s more of an educated guess. I would also think about what Yeomans says about security issues becoming minimum requirements for business. Kind of like a roof, or maybe the eventual need for security guards based on your business sector? The more one thinks about security ROI, the more one becomes like Schneier!
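
For those who still want a number, the classic back-of-the-envelope method is an annualized loss expectancy / return on security investment calculation. Below is a minimal sketch with entirely made-up figures, just to show the arithmetic; the hard part (and arguably the impossible part) is producing defensible inputs, which is where analyzing your own history comes in.

```python
# Rough ROSI (return on security investment) arithmetic, illustrative only.
# All inputs are invented; in practice they come from your own incident history.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualized Loss Expectancy: expected yearly loss from one threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: laptop theft with unencrypted disks.
ale_before = ale(single_loss_expectancy=50_000, annual_rate_of_occurrence=2.0)  # $100k/yr
ale_after = ale(single_loss_expectancy=5_000, annual_rate_of_occurrence=2.0)    # $10k/yr with disk encryption
control_cost = 30_000  # yearly cost of the encryption rollout

rosi = (ale_before - ale_after - control_cost) / control_cost
print(f"ALE before: ${ale_before:,.0f}, after: ${ale_after:,.0f}, ROSI: {rosi:.0%}")
# -> ALE before: $100,000, after: $10,000, ROSI: 200%
```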

5. The Russian cybermafia is to blame for the worst online crime.
Richard Stiennon: 5
David Lacey: 5
Andrew Yeomans: 5
Bruce Schneier: 6
John Pescatore: 7
Peter Johnson: 4

Conclusion: In my mind, it is interesting to think about what is worst or who is worst when it comes to threat profiling, but I sympathize most with Pescatore and even Schneier’s unspoken point: I really don’t know or care, and neither knowing nor caring should change how I secure my assets. There are, however, people who should and do care about threats and tackling them head on, but I’m not in one of those organizations.

6. Antivirus software is essential to prevent malware.
David Lacey: 7
Andrew Yeomans: 7
Bruce Schneier: 7
Peter Johnson: 7
John Pescatore: 7
Richard Stiennon: 7

Conclusion: I think everyone basically says that antivirus software helps, but is not perfect. It is just another piece in a blended approach to security. It is a common best practice and part of everyone’s short list of security “needs.”

7. Outsourcing security is riskier than staying in-house.
David Lacey: 8 – I agree, you lose control and visibility!
Bruce Schneier: 8 – I agree, people are risky!
Peter Johnson: 7 – At first I don’t agree, but really, from a higher level I do have to agree. Basically Johnson says if it is done correctly, then it doesn’t matter which side does it.
John Pescatore: 8 – I agree, for many businesses, an MSSP can make sense!
Richard Stiennon: 5 – I slightly agree. I’m not sure I would say outsourcers can hire better people; I think some certainly can, but there are plenty of resourceful, talented people that can be found in-house. Are they better at reacting? Yes, at a large scale like analyzing malware and issuing signatures, but are they better at reacting in my network, for instance? Maybe not. If they detect something wrong, it is still on in-house staff to do something about it.
Andrew Yeomans: 10 – I agree, and Yeomans has the best holistic comment of the group!

Conclusion: It still comes down to, “It depends.” Yes, you can make gains from outsourcing such as 24/7 response, leveraging actual specialized experts, etc. I would also throw in that not all outsourcers are the same. Much as Jerry Maguire would say, “Fewer clients, more personal attention,” outsourcers can fall into the trap of trying to service too many clients with substandard analysts and tools that don’t scale. Extreme example: McAfee’s HackerSafe brand.

8. Biometrics is the best authentication.
John Pescatore: 4 – Sadly, in the movies it seems biometrics is the most-often broken!
Andrew Yeomans: 8 – Yeah, I see biometrics still being a nuisance for large-scale use; see Selby below.
David Lacey: 8 – I agree with Lacey, it is an ideal approach. It is something we all have and it should be unique and not necessarily easy to physically fake (notwithstanding the digital representation of it). But it is not perfected, nor do I know where I stand when it comes to privacy. Going down the biometrics road long enough leads us to DNA sequencing. But that obviously has privacy drawbacks…
Peter Johnson: 5 – I’m not sure I’d go down the road Johnson does, blaming the implementation or how easily it can be changed.
Bruce Schneier: 8 – Fair enough!
Nick Selby: 8 – Yeah, biometrics is not a reality for large scales yet, nor in the next 10 years if I may throw down a number.

Conclusion: Biometrics should theoretically be viable, but we’re just not there yet on false positives, security, and ease of use.
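
A quick bit of arithmetic, with invented rates, shows why the false-positive problem bites hardest at scale: even a false accept rate that sounds negligible turns into a steady stream of wrongly accepted attempts once the population is large enough.

```python
# Back-of-the-envelope base-rate illustration for biometric matching.
# The rates below are made up for the example, not vendor figures.

false_accept_rate = 0.001      # 0.1% of impostor attempts wrongly accepted
attempts_per_day = 1_000_000   # e.g., a national-scale deployment
impostor_fraction = 0.01       # assume 1% of attempts are impostors

impostor_attempts = attempts_per_day * impostor_fraction
expected_false_accepts = impostor_attempts * false_accept_rate
print(f"Expected wrongly accepted impostors per day: {expected_false_accepts:.0f}")
# -> 10 per day, every day, from a rate that sounds negligible on paper
```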

9. Digital certificates identify a Web site.
Richard Stiennon: 5 – I obviously don’t reward the joke answers, even if I do guffaw.
Andrew Yeomans: 9 – Good points!
Bruce Schneier: 9 – Exactly! No one really cares or understands it.
David Lacey: 9 – Exactly!
Peter Johnson: 9 – Exactly, and yet major sites still don’t use EV SSL certs. EV SSL was a poor solution, but it certainly sounds good to those selling it! Props to the first major web browser that STOPS forcing this.
John Pescatore: 9 – Good points!

Conclusion: This was a poor solution to a problem I don’t think we properly understood. It certainly makes money for the certificate authorities, and I know newer browsers are raising alarms about the lack of EV SSL, but come on. Digital certs have a place in the enterprise for things like VPNs, but the only people who care about or understand them are the tech experts; everyone else couldn’t care less.
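
For the tech experts who do care, here is roughly what a certificate actually asserts about a site. A minimal sketch using Python’s standard ssl module (the hostname is just an example): it pulls the certificate the server presents and prints the subject, issuer, and expiry, which is the “identity” the whole scheme rests on.

```python
# Fetch and print the certificate a web server presents, using only the stdlib.
# The hostname below is just an example.
import socket
import ssl

hostname = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# The subject says who the CA claims this site is; the issuer says which CA claims it.
print("subject:", dict(x[0] for x in cert["subject"]))
print("issuer: ", dict(x[0] for x in cert["issuer"]))
print("expires:", cert["notAfter"])
```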

10. Employees can be trained to behave securely and resist social engineering online.
Richard Stiennon: 4
John Pescatore: 5
Andrew Yeomans: 8
Nick Selby: 8
Bruce Schneier: 9
Peter Johnson: 6 – I didn’t really understand this answer.
David Lacey: 8

Conclusion: I think we accept that people are human and we will always make human mistakes. We can train and try to raise the bar, but ultimately, even with the best intentions, we will still make mistakes. I think the only place you get a high degree of success with this is defense/government facilities, where not following the strict rules could result in human death (social engineer your way past the MPs?).

bonus: Compare and contrast resistance to social engineering with security through obscurity. If neither will ever ensure security wholly, do you still give them value even though they will never be perfect?

11. Don’t worry, the government has a secret cyber-defense capability.
Nick Selby: 5
David Lacey: 7 – We civilians can only guess at the capabilities of the government, but I would be willing to bet they are at least 5 years ahead of what most of us think, and 10 times more successful than we’d like to think.
Andrew Yeomans: 7
Bruce Schneier: 6
Peter Johnson: 9 – In fact, I’d go so far as to say the relationship between our privacy and our cyber-defense is an inverse one. When one goes up, the other goes down. A sad truth, which is why we have so much friction right now between the two and in finding that sweet spot (or at least getting people to believe we’re at that sweet spot).
John Pescatore: 7

12. The longer the key length, the stronger the encryption.
Andrew Yeomans: 8
David Lacey: 8
Bruce Schneier: 8 – When I read this question, I knew Schneier would be all over this.
Peter Johnson: 8
John Pescatore: 8

Conclusion: The situation is far more complicated than just key length.
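
One concrete reason it’s more complicated: key lengths aren’t even comparable across algorithms. The sketch below uses commonly cited ballpark equivalences (approximate figures along the lines of NIST guidance, so treat them as rough) to show that a “bigger” key in one algorithm can be weaker than a “smaller” key in another; and none of it matters if the algorithm, mode, or key handling is broken.

```python
# Approximate symmetric-equivalent strengths (in bits) for a few key sizes.
# Values are the usual ballpark figures (per NIST-style guidance), not gospel.
approx_equivalent_strength = {
    ("RSA", 1024): 80,    # a 1024-bit RSA key ~ 80-bit symmetric strength
    ("RSA", 2048): 112,
    ("RSA", 3072): 128,
    ("ECC",  256): 128,   # a 256-bit elliptic-curve key ~ 128-bit strength
    ("AES",  128): 128,   # symmetric keys count roughly at face value
    ("AES",  256): 256,
}

for (algo, bits), strength in sorted(approx_equivalent_strength.items(),
                                     key=lambda kv: kv[1]):
    print(f"{algo:>3} {bits:>4}-bit key  ->  ~{strength}-bit effective strength")
# A 1024-bit RSA key is far weaker than a 128-bit AES key, despite the bigger number.
```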

doing business in the grey areas of the rules

This week I read an article on ESPN about college basketball coaches and recruiters doing pretty much everything they can to find loopholes in regulations and otherwise leveraging the ethically questionable grey areas of the rules. A couple of passages struck me as being applicable beyond just college basketball:

“People are always going to work the gray areas,” Georgetown coach John Thompson III said. “Most people if they’ve had any success in life have learned how to work the gray areas.”

And:

Here’s the one thing everyone can agree on: No one wants more NCAA rules. The reason the NCAA rulebook has swelled to its current 439-page girth is because of chronic rewrites and amendments necessitated by clever rule interpretation.
There was a time when the rules only mandated how many and how often a coach could call a recruit. And then along came text messaging. Lo and behold, a new rule was born.

This is true of digital security in the enterprise: people ride the grey area between regulations/policy and negligence (and often cite ignorance when caught). Why deal with the roadblocks that security erects when you can just get things done at a higher risk?

I’m finding that there are really only four ways to combat the tendency towards insecure practices: 1) Be informed and an expert at explaining security and insecurity, such that you can defend your position with an extremely high degree of credibility, 2) Suffer an incident that exposes the weaknesses, 3) Rely on regulations and laws, and 4) Have management that is capable of realizing the risk and being conservative about it (i.e. not diving into risky practices).

It is a particularly thick shade of grey in the area of compliance and auditing, especially if you can Deceive, Inveigle, Obfuscate them or your environment (Shh, don’t tell them about that network cabinet over there). Or you just simply (hah!) have systems that are too complicated for anyone to truly examine or tackle (Microsoft SQL Reporting Services permissions anyone?).

more (moore?) on smb relaying

In a futile attempt to catch up on my news feeds, I see HD Moore had a very detailed post dealing with SMB relay attacks and the MS08-068 patch. HD gets to the meat towards the end:

The patch does NOT address the case where the attacker relays the connection to a third-party host that the victim has access to. This can be accomplished by setting the SMBHOST parameter in the Metasploit smb_relay module to a third-party server. There are many cases where this is useful, especially in LAN environments where various tools authenticate to all local hosts with a domain administrator account (vulnerability scanners, inventory management, network monitor software, etc).

Maybe I still have a disconnect, but it seems like this should be a huge concern, at least for an enterprise or even a small but important trusted LAN. I think the hardest part might just be getting a user to initiate that first connection, or an automated device to initiate it (which might not be so hard these days, as we have more and more automated tools ‘finding’ the devices on our networks on their own).

10 things your tech guy wants you to know

I found this gem of a “top 10 list” today on Twinturbo, who found it on a now-defunct blog (yes, I know it’s most likely not “his” at all but rather copy-pasted or even scraped from elsewhere; although most of his content is sourced [barely] at the end, other pieces are not). I’ll post it here in its entirety.

  1. If you ask me technical questions please don’t argue with me because you don’t like my answer. If you think you know more about the topic, why ask? And if I’m arguing with you, it’s because I am positive that I am correct, otherwise I’d just say “I don’t know” or give you some tips on where to look it up, I don’t have the time to just argue for the sake of it.
  2. Starting a conversation by insulting yourself (i.e. “I’m such an idiot”) will not make me laugh, or feel sorry for you; all it will do is remind me that yes, you are an idiot and that I am going to hate having to talk to you. Trust me; you don’t want to start a call that way.
  3. I am ok with you making mistakes, fixing them is my job. I am not ok with you lying to me about a mistake you made. It makes it much harder to resolve and thus makes my job more difficult. Be honest and we can get the problem resolved and continue on with our business.
  4. There is no magic “Fix it” button. Everything takes some amount of work to fix, and not everything is worth fixing or even possible to fix. If I say that you just need to re-do a document that you accidentally deleted 2 months ago, please don’t get mad at me. I’m not ignoring your problem, and it’s not that I don’t like you, I just can’t always fix everything.
  5. Not everything you ask me to do is “urgent”. In fact, by marking things as “urgent” every time, you almost ensure that I treat none of it as a priority.
  6. You are not the only one who needs help, and you usually don’t have the most urgent issue. Give me some time to get to your problem, it will get fixed.
  7. Emailing me several times about the same issue in the same day is not only unnecessary, it’s highly annoying. Emails will stay until I delete them, I won’t delete them until I’m done with them. I will typically respond as soon as I have a useful update. If it is an urgent issue, let me know (see number 5).
  8. Yes, I prefer email over telephone calls. It has nothing to do with being friendly, it’s about efficiency. It is much faster and easier for me to list out a set of questions that I need you to answer than it is for me to call and ask you them one by one. You can find the answers at your leisure and while I’m waiting I can work on other problems.
  9. Yes, I seem blunt and rude. It’s not that I mean to, I just don’t have the time to sugar coat things for you. I assume we are both adults and can handle the reality of a problem. If you did something wrong, I will tell you. I don’t care that it was a mistake, because it really makes no difference to me. Don’t take it personal, I just don’t want it to happen again.
  10. And finally, yes, I can read your email, I can see what web pages you look at while you are at work, yes, I can access every file on your work computer, and I can tell if you are chatting with people on an instant messenger or chat room (and can also read what you are typing). But no, I don’t do it. It’s unethical, I’m busy, and in all reality you aren’t all that interesting. So unless I am instructed to specifically monitor or investigate your actions, I don’t. There really are much more interesting things on the internet than you.

replay ntlm auths back to sender with smbrelay3

Yoinked from the FD mailing list is a tool called smbrelay3 from Andres Tarasco (not related to the older smbrelay tools; he really should have picked a better name). When run, the tool opens a listener service. When another system connects using certain protocols, the tool negotiates and replays an NTLM authentication series back to the remote system, hopefully as a user with admin rights so a remote shell can be set up. For more information, check the site and particularly the comments in the source code.

This looks like a tool that could be useful on a spare system, just listening on the network for incoming connections. Few people, if any, should have any business connecting to such a box and its listening port, so those that do could be ripe for the kind of counterattack this tool offers.

An attacker could also set up such a box on the target network. The people most likely to find it are admins (or even service tools running with high privs!), which can really make the damage pretty bad.

Protection against this sort of tool goes back to ye olde patching advice. You could also not run as a privileged user, but that won’t stop the tool from still seeking out other holes (srvcheck). I’ve not tested this, but I imagine a host-based firewall would help as well. Home users should be behind a router or other NAT/firewall device; otherwise you can be tricked into this attack as well.

On the same mail thread was mentioned Squirtle, which sounds like a tool that accepts HTTP connections and can pull in NTLM auths at will, and relay them to whatever, including a domain controller. Not bad!

a case for giving your staff ample free time

A production issue today has reminded me of one of those little IT ‘truths’ that I greatly believe in. This applies to security teams as much as operations.

IT should not be worked at 100% capacity.

There are a few reasons I believe in this.

1. find errors and problems.
Troubleshooting an issue or performing a specific task is really not the time to be discovering other, unrelated problems. For instance, today’s production-level issue revealed that one server was having a problem writing the correct time to its web logs. I also found that my IPS had not been logging alerts on one interface for a week. These are issues that really should not go unnoticed for terribly long.

2. learn and monitor trends.
Not knowing what is normal on a server is not a great place to be, especially when an issue appears and knowledge of the normal baseline would be very important information. Sure, logging and monitoring automation can help, but there is still a lot of admin work that goes by gut feeling, and not just by the historical data.

3. remain practiced with the tools and information at hand.
It is frustrating to be thrust into a situation where you know you’ve done something before but can’t quite remember how to troubleshoot it. Kind of like pen-testing, which is best done constantly so you can be efficient and know what you’re going after, without bumbling around trying to remember the syntax of those commands or which exploit package will shovel you that shell. Or that filter in Wireshark to show you exactly and only what you need? Do it, do it, do it. Practice, and know it well enough that it can be whipped out when the pressure is on.

4. bandwidth to react to issues.
When two high-priority issues come in to one admin, does one issue sit there until the first has been attended to, or is a different priority assigned to each? What if both are show-stopper issues?

Yes, fine, there are plenty of us who spend some time visiting gaming sites and reading blogs during quiet periods of our week (fleeting as they are!), but this is also why I truly believe in hiring people who geek out about technology and security: when the fires are low, that’s what they’ll play with. They’ll look to automate troublesome tasks, improve anything that isn’t working optimally, and otherwise keep their fingers firmly on the pulse of the kingdom.

Ok, yes, I will actually concede that there are exceptions to keeping staff below 100% capacity, but this is largely a corporate-culture exception where IT is expected to do just enough to keep the lights on, even if the wiring is exposed in back and getting long in the tooth. As much as an approach like that pains me, it is reality in places, and the realist in me does accept that. But you can guess which type of environment I would be happier in. 🙂

you have your pro blackhats…and your noob admins

A couple articles skittered across my desk the other day. Los Angeles traffic engineers admit hacking into traffic light control systems and Rogue IT admin hands former employer’s network over to spammers.

There is lots of talk about the criminality of the black hat underworld and about profit-pursuing hacker groups (although maybe this is just the growing up of the teenage hacker vandals from 10 years ago now needing income), but there is another important set of threats: relatively normal people with access.

This includes former employees who can still use accounts for bad things, easily guessed passwords, and abuse of legitimate access just, well, because they can. It stems from both negligence and the simple aging of our reliance on technology. Ever wonder how many stale accounts you might have in your organization just because people with knowledge have left? And I’m not only talking about obvious stores like LDAP/AD, email, VPN, and network devices.
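
Even in the obvious stores, a quick query can be eye-opening. Below is a minimal sketch using the ldap3 library (server, credentials, base DN, and the 180-day cutoff are all placeholders) that pulls AD accounts whose lastLogonTimestamp is older than the cutoff; the less obvious stores (application-local accounts, network gear, SaaS logins) take more manual digging.

```python
# Find AD accounts that have not logged on in ~180 days, as a stale-account starting point.
# Server, credentials, base DN, and cutoff are placeholders; lastLogonTimestamp is only
# replicated every ~14 days, so treat the results as approximate.
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, SUBTREE

cutoff = datetime.now(timezone.utc) - timedelta(days=180)
# Convert to Windows FILETIME (100-nanosecond intervals since 1601-01-01).
filetime = int((cutoff.timestamp() + 11644473600) * 10_000_000)

server = Server("dc.example.com")
conn = Connection(server, user="EXAMPLE\\auditor", password="...", auto_bind=True)

conn.search(
    search_base="DC=example,DC=com",
    search_filter=f"(&(objectCategory=person)(objectClass=user)(lastLogonTimestamp<={filetime}))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "lastLogonTimestamp"],
)

for entry in conn.entries:
    print(entry.sAMAccountName, entry.lastLogonTimestamp)
```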

some of the coolest it security jobs

From Andrew Hay I saw this list of the “coolest” IT security jobs from SANS. I see a good mix of testing, after-the-fact response, detection and hands-on engineering/monitoring, and advisory/managerial positions. I’m always happy to see a good mix, since too often it seems the advisory/managerial people get heard from the most and aren’t always the ones with their ears near the ground.

On the other hand, there are a lot of ties in both lists, which might indicate a low number of votes. The higher the number of votes, the fewer ties I would expect.

a dose of reality with two parts cynicism

Your security is only as good as you let your auditor review it. Your audit is only as good as your auditor’s skill and knowledge in finding the holes or verifying the lack of them.

I wonder what the rate of subterfuge is for keeping your auditors, mushroom-like, in the dark as much as possible.

So this gets back to one of the qualities I find most important in a tech worker: integrity and honesty. Not something business is always fond of adhering to? Businesses are long practiced at hiding as much as they can…

Does this mean that cyber security is destined to never be very good? While that was a decent gamble 20 years ago, when one person couldn’t steal more than a few reams of paper, today’s digital world lends such efficiency that an entire company’s existence can be pillaged in minutes by one person with a portable music player. Can one company be at the forefront of security and still maintain a cost/profit edge over the rest of its market?

Maybe it will just be painful until culture/society slowly catch up to these changing paradigms.

For as much as I hear and even talk about aligning business and security more and more, it’s still like pushing the same poles of two earth magnets together, no matter how well-meaning everyone is.