security pros unhappy in their jobs

Saw this article over on DarkReading:

Kushner and Murray say they were surprised by security’s high number of unhappy campers — 52 percent of the around 900 security pros who participated in the survey are less than satisfied with their current jobs.

I’m not surprised by the low satisfaction numbers, for a few reasons that I can throw out with no backing research:

  • pros from a technical background who may not like being dedicated to writing policy
  • “we know better” when it comes to the state of security.
  • we’re geeks, and too often we are either happy when we get everything we want, or unhappy when mgmt can only fund something less than 100%.
  • as geeks and as security geeks, we’re in a growing research-laden industry where new things are being discovered and developed. I’m sure many of us don’t like the day-to-day drudgery work that may come from watching graphs, monitors, and alert dashboards. Many are driven by the discovery, even if it just means self-learning new things.
  • organizations don’t really know what to do with security or security pros, just as security pros may not know how to show value. We’re still struggling to sell the idea that security is a process, and that you don’t gain as much as you think just because you ran a one-time project with lots of “security-in-a-box” purchases.
  • we really do have a lot of passion, but that also means we get affected when we see security fail. And it fails so often. And stupidly…

I wonder how many security pros would say they are satisfied with the security efforts/level of the networks and organizations they work with on a regular basis (either their employer or the companies they advise/test/consult for).

I also pulled this quote out:

Kushner says his biggest takeaway from the survey was that security pros are not really mapping out their career paths. “That generally leads to unhappiness, and you wind up in a job you don’t really like,” he says. The key is taking a position that provides the skills and development you need, he says.

I agree and disagree with that sentiment. I agree that you should know which jobs will make you happy or unhappy, or will move you toward a goal if you happen to have one, and which will not. But I’m not sure “security pro” is something that needs a career path for all people.

There are security pros who probably could use a career path written down so they can move on to CISO/CSO or even lead researcher in the field they want to get into. But there are so many of us who have no desire to manage or, as we often see it, buy into the corporate bullshit and get away from actually *doing* something directly. And plenty who can easily find jobs doing what they enjoy without moving “up” from the technical hands-on ranks.

Besides. We deal with security. When was the last time you asked a security geek if they’re happy with the state of their security? I don’t think we ever have “writer’s block” when it comes to ideas to implement or improve things. It’s kinda part of who we are, just as much as being a bit paranoid is.

rock out with your hack out

Pauldotcom has a spot where they use the phrase, “rock out with your sploit out.” A great spin on “rock out with your cock out” (and it goes great with “hack naked,” which is one of the best hack/sec slogans out there alongside “trust your technolust” and “hack the planet”).

One drunken night I wondered if “rock out with your hack out” was used anywhere. A very empty Google search later surprised me: it wasn’t used anywhere notable. Whoa…

when does vuln research turn bad?

This post was inspired by reading a story from Rich Mogull (Securosis) about VoIPShield deciding to effectively sell exploits. In case it is unclear, I’m initially in agreement with Rich’s sentiments.

At what point do you cross that strange line? I hesitate to give that line a name, since it might change the connotation a bit, but the line name I had in mind initially is “black hat.” Take these scenarios into consideration:

1. Security research firm (SRF) finds vulnerabilities and fully and freely reports them to the victim vendor and maybe the world at some point as well.

2. SRF finds vulns but only reports them to vendors, fully and freely.

3. SRF finds vulns and fully and freely reports them to the world immediately.

4. SRF finds vulns but only sells them to the victim vendor.

5. SRF finds vulns but decides this adds to their value as an SRF and keeps them secret as part of their stash of “we can own you during an assessment” tricks.

6. SRF purchases other vulns to add to their stash of tricks.

7. SRF finds vulns and adds them to their proprietary exploit tools that they sell to anyone.

8. SRF finds vulns and sells them to interested parties, whether they be the vendor or not.

9. SRF finds vulns and uses them to attack vulnerable sites/apps to steal information, i.e. criminal gain.

Quite often, we demonize criminal black hats because they’re realizing monetary gain at someone’s expense, against the law. But where do vulnerability shops fall in the whole scheme of things? Especially those that will sell vulns to the public. That’s like full disclosure with a price tag…so in a way that is monetary gain while possibly supporting criminal activity. Now, exploit-offering sites probably provide indirect gain to their moderators and authors even if there is no charge, simply because of the knowledge and notoriety gains.

Maybe you can draw the line on whether utility is being delivered or not, i.e. is the general public more secure for your actions? Is there a legitimate value to your security efforts? If not, then we should all be working for free, right? Or what about intent? I might be making guns, but my intent is not to kill people, even if I close my eyes while selling a gun to an obvious lunatic. So does that mean regulation of exploits should be a government matter (as it is in some countries, for better or worse)?

It’s an interesting road to think closely about…

my quick comments on milw0rm outage

It’s been a tiring week for news in the infosec world.

Between the DirectShow vulnerability and milw0rm faltering (and going down fully)…and it’s not even Thursday…

Here’s hoping milw0rm comes back up and str0ke gets some trustworthy and skilled help to keep it operating at the high level of quality it has had (I’ll third mubix being involved!). Not only has the content been top-notch (the burgeoning videos section comes to mind), but it has been an extreme help for researching vulnerabilities and exploits. I know, kids can get their hands on stuff like this and do mischief, but I truly feel it does more harm than good to hide information under layers of moral grey lines.

Not only that, but if we keep hiding shit, we can’t allow more truly skilled security professionals to grow. And let’s face it, so many of us are hugely self-taught or community-taught. We need information to be open so we can keep making informed experts and share knowledge. Otherwise we just become elitist and closed-door…and everyone else has to repeatedly re-invent the wheel.

As far as rumors of FBI pressure on the hosting provider for milw0rm go, you would really think law enforcement would prefer “the enemy” to remain out in the open, in places you can watch too. Milw0rm, at least in my point of view and experience, has been far more a positive for security than it has been a boon for those who spread insecurity. By far. Not even close.

links and info about directshow 0day (msvidctl.dll)

The Windows 0day against DirectShow (msvidctl.dll) has been spreading like wildfire over the past 24 hours. I’m only going to blitz a few links on this topic, plus a quick mitigation sketch after the links:

Metasploit has a module ready for it (can’t link while at work).
POC exploit that pops up calc.exe
another POC

A couple bits of yoinked code. I don’t recommend running these as they are both taken from live sites hosting bad stuff (the links here are just fine though!):
http://en.securitylab.ru/poc/extra/382195.php
http://4lt4l.blogspot.com/2009/07/directshow-0day-in-wild.html
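
If you can’t patch yet, the usual stopgap for an ActiveX 0day like this is setting the kill bit on the vulnerable control’s CLSID so IE refuses to instantiate it. Here’s a rough, hedged sketch of what that looks like; the CLSID below is a made-up placeholder, not the real msvidctl one, so pull the actual values from Microsoft’s advisory before doing anything like this:

    import winreg  # Windows-only; needs admin rights to write under HKLM

    # Placeholder CLSID -- NOT the real msvidctl control. Look up the actual
    # CLSIDs in Microsoft's advisory for this vulnerability.
    CLSID = "{00000000-0000-0000-0000-000000000000}"
    KILL_BIT = 0x00000400  # "Compatibility Flags" value that blocks a control in IE

    def set_kill_bit(clsid):
        """Create the ActiveX Compatibility key for clsid and set the kill bit."""
        path = (r"SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility"
                + "\\" + clsid)
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "Compatibility Flags", 0,
                              winreg.REG_DWORD, KILL_BIT)

    if __name__ == "__main__":
        set_kill_bit(CLSID)

The same thing can be done with a .reg file or GPO; the point is just that blocking the control in IE buys time until a patch ships.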

moser exploits iphone usability to pwn it

Max Moser (and Lothar!) have posted a video and discussion on basically auto-pwning an iPhone. In essence, when connecting to a wireless network like a hotspot that requires you to first hit a landing page, the iPhone will helpfully and automatically pop up a Safari browser window to that landing page. Let’s just say you’d better pray the landing page isn’t karmetasploit lying in wait. (Karma grabs you with its network, and Metasploit delivers the web payload.)

While this is amusing, one argument Apple may make (if they even bother to make one) is that the iPhone is just doing automatically what the user would do anyway: open a browser window. However, this becomes really bad when the user accidentally joins the wrong network (an oops-auto-pwn) or the attacker is spoofing a legit-sounding network. (Gotcha!)
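
If it helps to picture the moving parts, here’s a rough sketch of the probe-then-open logic a device like this runs when it joins a network. The probe URL and expected response body are made-up placeholders (I’m not quoting Apple’s real values); the point is that whoever answers the probe — legit hotspot or karmetasploit — gets their page auto-opened:

    import urllib.request

    # Placeholder probe values -- not Apple's actual ones.
    PROBE_URL = "http://example.com/library/test/success.html"
    EXPECTED_BODY = b"Success"

    def behind_captive_portal(url=PROBE_URL):
        """Return True if something intercepted the probe (i.e., a portal answered)."""
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read()
        except OSError:
            return False  # no connectivity at all, nothing to auto-open
        # A hotspot (or a karmetasploit box) that hijacks the request returns
        # its own landing page instead of the expected body.
        return EXPECTED_BODY not in body

    if __name__ == "__main__":
        if behind_captive_portal():
            # This is the step the iPhone automates for you: open a browser to
            # whatever page answered the probe.
            print("Portal detected: the device would auto-open that page now.")

The convenience and the attack surface are the same feature: the device trusts whatever answered the probe.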

Most people I know don’t give a thought to the security of their cell phones, even though they may give some thought to it for their laptops. I don’t think it has sunk in yet that something like the iPhone is more akin to a laptop than a phone.

and you think us sec geeks bitch a lot…!

Every now and then you have to poke your head out from amongst the security geek circles and see what slightly more normal people have to say about a topic. Tonight, my moment of slumming comes from the comments on a story about a recent McAfee AV update that went bad.

From calling out for alternative OS solutions (in an office environment) to denouncing all AV to not understanding economies of scale and so on, the comments remind me that the opinions out in the world are far worse outside the walls of our little geekdoms.

Kinda puts it into perspective what companies have to deal with when they service both corporate and home users, eh?

mcgrew takes down a bad guy

McGrew is starting his posts about hunting down and getting a hacker arrested for what amounts to a SCADA attack. Via Liquidmatrix I was pointed to a very informative Register article as well.

I sometimes state that I wear a grey hat now and then, but it is far beyond the line to actually attack a system as important as the HVAC in a medical clinic, something that can jeopardize lives both directly and indirectly. It is also a gross abuse of the trust placed in someone like a security guard who is meant to protect. Highest kudos to McGrew for doing something about it rather than just ignoring the incident.

After McGrew dropped the name in IRC the other night, I did some of my own quick searching on the person. Hacker kids and little hacker groups and even minor defacements are one thing, but escalating to a degree like this is trouble. Role-playing and playing at this kind of thing is fine, if you ask me (yes, even if you find it cool to wear a gas mask), but you don’t cross that line between moral right and wrong. People who do that and have certain psychological dispositions are trouble, as they really have nowhere else to go but further escalation past that line.

What I found most ironic was a post on a profile that said his dream job was to be with the FBI Cyber Corps. Well, at least he got an up close introduction!

identity theft issues still hard to grasp for most

ID theft has been around a very, very long time. Only with the relatively recent explosion of the Internet has it become more than just an acceptable “cost of business.” So in recent years you’d think places like, oh, I don’t know, say, local banks would have a lot more awareness of the issues and do simple things like, I don’t know, shred or securely dispose of paper waste.

I guess not. Even today…

Federal agents say Nelson said it was easy to find new victims: All he needed to do was visit a local bank and search their dumpsters.

My only complaint on the news article is this part:

CBS13 was able to find processed deposit slips and junk mail with full names and addresses in the garbage of a local bank.

With absolutely no reference to why that sort of information might be useful or dangerous. Is my full name and address sensitive information? I would hope not, since it’s public…

go to this site and put in your credit card to pay…

Came across news this fine day about the RIAA settling with a woman in one of their music-pirating crusade cases because she didn’t even own a computer at the time. But what really struck me was the facepalm in this paragraph:

[Mavis] Roy, of Hudson, New Hampshire, had been charged by four record labels with downloading and distributing hundreds of songs from the Internet. A letter from the record companies’ attorneys in July 2007 directed her to a web site where she could pay by credit card to settle the case. Since she did not have a computer in her house at the time she was alleged to have downloaded the music, she ignored the requests. “For many months she thought it was just a scam…”

Why do we even bother these days? Remind me never to wonder when people say, “Well, how are we supposed to know what is a scam and what isn’t?” Good question, and unless the English is broken and stupid, we really don’t know anymore. Be paranoid.

nanomite security in a box

You buy our appliance and plug it into your data center. With care, cool temperatures, and constant feeding with power and network packets (they do not have to be destined for the box, just spanned over), the appliance will start to produce nanomites within 6 months.

These mechanical mites will first gestate inside the appliance, but will soon skitter across your network cables and fix everything wrong. They will also steal unused bits and bytes of storage and bring them back to the hiv…appliance.

After 6 more months of scavenging unused cycles and bits, the appliance will begin to produce nanomites v2: physical security. They will be constructed inside the appliance and when ready, slip out of the vents and secure your data center. Do not be alarmed if you see network cables moving as if blown by a breeze, or small shadows around the corners of the racks when no one is around. Those are just the nanomites!

They are small enough to slip unnoticed anywhere, including all of your office PCs and telephones. They are constructed to adopt such devices as their new homes and they will protect them and their security for 47 years, per their average life expectancy. If your users or the systems they use exhibit insecure tendencies or practices, they will take physical action to shock…err…correct the situation.

Hmm…so not wanting to be at work today. Need sleepy. The above was inspired by seeing Rybolov’s picture of a cat appliance!

reposting 10 things your auditor isn’t telling you

Via McKeay, I read a list of 10 things your auditor isn’t telling you, compiled by David Shackleford. Utter, terrible truths! So much so that I had to yoink them and add comments.

If you read nothing else in this post, read my comments on #6. In fact, I’ll quote myself here: “This is where pen-tests can trump audits. A pen-test can say WRONG, but an audit is trying to say CORRECT, and it often can’t.”

1. I am actually just following a checklist.
A subjective checklist. An incomplete checklist. A checklist I can’t intelligently talk about because I don’t get it, nor can I really give you anything beyond absurd vagueness if you ask me how to meet those checklist bullets! Oh, Dave covers some of those coming up! 🙂

2. I do not understand the technology I am auditing.
Also, too many varied ways of using varied technologies in various environments. Either you follow the checklist in #1, or you have to have a very large swath of knowledge. We’re just not close to being at the latter, yet. Kudos to any teams of auditors who have a nice cross-selection of skills that the lead can use to fill such gaps!

3. The well-dressed, experienced greyhairs came in and sold this deal, but I graduated from college 8 months ago and went through ( E&Y || IBM || Deloitte ) auditing bootcamp.
Possibly good if the guy is smurt, but honestly experience in a working environment does go a long way to “getting it,” both with technology and the how’s and why’s of business.

4. Most firms are really incentivized to help you pass.
In addition to Dave’s comments, I would say no one wants to lose business because your client only wanted a passing score. They *will* shop around to pass a weak audit rather than actually work up to passing any audit. Sad, but security will continue to be an economic function.

5. Show me a viable set of compensating controls, and I’m liable to pass you.
Just say no! Then again, combine #4 with #1 and you get #5. Don’t lose the business, but cover your ass so you’re not passing obviously wrong things. The one thing I dislike about this situation is if the controls are there, but just not really used except when the auditor is around, i.e. that AV/IPS management console full of alerts that no one ever looks at.

6. Auditing standards suck.
I’m not sure how this can get better, mostly because of what I said in #2 about varied technologies used in varied ways. *CAN* you have an easily understood Ubuntu Server build checklist? Doubtful, especially when you have no context as to what that Ubuntu Server should be doing. This is where pen-tests can trump audits. A pen-test can say WRONG, but an audit is trying to say CORRECT, and it often can’t. Yes, we can get better, but this is a Big Deal. And we all know the reaction when they see NIST docs for the first time: “Oh, just follow the recommendations at NIST [and keep some Tums on hand].”

7. Compliance regulations suck.

8. You can’t have it “your” way.
Combine this with #1, #2, and #3, and your auditor may WORSEN your security. But it is true, the audit’s real effectiveness is going to be rooted in the auditor and somewhat in the client technical staff (who may be able to write off an auditor as inexperienced). <--Of course, those staff that can do that probably need to be recruited into security/auditing!!

9. I know more than you.
Dave’s comments remind me why I think the ongoing trend is toward in-house auditing/security. The biggest things stopping that will be finding a solid workforce and the Blame Game when a breach does occur. You can’t have someone blitz in for a week or two and be effective with anything but a checklist. You can’t expect a firm’s auditor to give you MSSP-like/consultant-like hours without either being gouged or limiting how many other paying clients he can handle. And you can’t always expect a client to stick to what they say, especially if they have no real security analysts whose job is to maintain such secure practices.

10. Covering my ass is my major goal.
Dave mentions the audit firm pestering to get answers/details to make sound decisions. Given #1, #2, #3, #4, and the ego part of Dave’s comments in #9, this leads down the road of eliciting the response you want and the client wants, even if it is false. “Yes, fine, we have a log management product and sure we …watch…it…” can be written down as “Check!” even if it’s not true. “Honey! You let Billy track mud all over the living room!” “But dear, I asked if he had taken off his shoes and he said yes!” “Right, but did you actually CHECK that he was doing it?” “Wait, blame Billy, he lied about it!”

bonus: I know you probably don’t like me.
Really, we techs should like auditors. Tech/Sec managers should like their auditors. If you’re doing a good job, they legitimize it. If you’re doing a bad job because you can’t get budget, they’ll justify it. But if you’re being subpar and you know it (or don’t know it), yes, you dislike your auditors because they look at things you suck at and are asking for details that you don’t have. In that case, you need to look at them as being helpful to improve what you’re doing, not trying to expose you for a hack in front of your boss. If I’m driving stick horribly and someone gives me a tip, it’s just that…helpful!

This sounds cynical of me, but it’s likely because I’m too close to all of this to really appreciate it sometimes. Even my most cynical takes are appreciated by some people, because there is a deep thirst for security knowledge beyond sec geek circles. They just don’t like all the work we remind them needs to be done. No magic buttons… 🙂

is china putting itself in danger with green dam?

I find this news particularly interesting. China’s Green Dam software is riddled with bugs? Nice!

Not only is the government lowering overall security (and illustrating that even on a national security level, functionality trumps security), but homogeneous systems like that scare me. A business with a standard security suite is one thing, but a country of a billion people is a whole new game. If a government ever mandated a piece of software for its citizens and businesses, I can pretty much guarantee it would be the most tested, fuzzed, and attacked piece of software since Windows, because just one remote exploit can turn into a virtual nuke for a government whose hackers find it…

If you click the link in the link I posted (or go to this article), you get this juicy quote from the CEO of the Green Dam maker:

“We are specialists in producing Internet filtering software rather than security,” Zhang said, according to the China Daily.