the continued rise of fuzzing

Securosis pointed me over to a really cool post by Michael Howard in which he discusses the SDL and the SMBv2 bug that was patched this month.

The takeaway I get is that you can really only do so much with code scanning, code analysis, and even code reviews. Bugs like this will still make their way through. Automated analysis just can’t find things like this, and humans make mistakes when reviewing things. (I suppose code variables could even carry metadata marking them as “untrusted inputs” and thus flagging them for more scrutiny? It’s like writing code to vet code…which is just odd to me since I’m not into comp sci…but maybe that’s what he’s talking about with their “analysis tools.”)
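
To illustrate what I’m imagining, here’s a purely hypothetical sketch; the Untrusted wrapper and parse_length function are made up by me, not anything from Howard’s post:

    class Untrusted(bytes):
        """Marker type: this value came off the wire or from a user."""
        pass

    def parse_length(data: Untrusted) -> int:
        # An analysis tool could flag this line: an Untrusted value
        # flows into a length calculation with no validation between.
        return int.from_bytes(data[:4], "big")

    print(parse_length(Untrusted(b"\xff\xff\xff\xff")))  # 4294967295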

The only current way to find a bug like this is fuzzing.
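
For the uninitiated, mutation fuzzing really is this dumb at its core. Here’s a toy sketch in Python; parse_packet is entirely made up (a stand-in for something like an SMB header parser), with a length-check bug planted so the fuzzer has something to find:

    import random

    def mutate(seed: bytes, flips: int = 4) -> bytes:
        """Flip a few random bytes in a known-good input."""
        data = bytearray(seed)
        for _ in range(flips):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    def parse_packet(data: bytes) -> bytes:
        """Hypothetical target: trusts a length field it shouldn't."""
        if len(data) < 4:
            raise ValueError("short packet")
        claimed_len = int.from_bytes(data[:2], "big")
        payload = data[4:]
        # Bug: no check that claimed_len fits within the payload.
        return bytes(payload[i] for i in range(claimed_len))

    seed = (16).to_bytes(2, "big") + b"\x00\x00" + b"A" * 16
    for i in range(100_000):
        case = mutate(seed)
        try:
            parse_packet(case)
        except ValueError:
            pass  # clean rejection of junk input; that's fine
        except IndexError as err:
            print(f"crash #{i}: {err} on input {case.hex()}")
            break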

But that brings up the question of how much fuzz testing is enough. You won’t know ahead of time whether there *is* a problem in some code, so how long and how deep should you fuzz? How do you prove the code is secure? At some point, you really just have to release it and hope that what is essentially real-world fuzzing by millions of people will eventually reveal any missed issues, at which point your response teams can patch promptly. Hopefully, though, you’ve done enough fuzzing to match just how critical your software is to others (Windows? Pretty critical!).

Funny, that sounds a lot like the mantra, “security eventually fails, so make sure your detection and response is tight.” I’m glad we already look past raw numbers of security bugs, and focus in on how quickly they’re fixed by vendors, and how transparent/honest their process may be. Microsoft has really come a long way down this road.

a moment of industry pessimism

I’m becoming passionately convinced that the “big security firms” that make these “big security suites” for home and business users have absolutely no clue what they’re doing anymore. Too big, too dumb.

I’m sure they have great engineers in place, but between the business itself and the messed up marketing, these firms and their products are beyond broken. It sucks to be held captive by them, though, since they (sort of) provide tools that form the foundation of a security posture (endpoint tools, mostly).

In short, STOP TRYING TO DO SO MUCH THAT YOU SUCK AT DOING ANY OF IT!

waiting for patches to release to wsus…

Patching. Every pen-tester and auditor will point it out and every security geek pretty much *facepalms* when you admit you haven’t patched since last week. But patching is half art and half time commitment. The reality of patching is that it is not quite as easy as we always make it sound, but that doesn’t make it any less necessary as a cornerstone to digital security.

Say you have a Windows environment with more than 100 systems. In other words, sneakernet just doesn’t work anymore, and a good portion of these systems are servers whose reboot/install times need to be staggered during a maintenance window. Basically, you qualify for WSUS!

The easy part of patching with WSUS is getting a spare server with enough storage set up, getting WSUS installed, and downloading all the patches you want to manage (start small, because the storage needed adds up quickly!).

The next part is figuring out your WSUS groupings and your Group Policy Objects. If you don’t do much to manage the structure of your Machines OU, you might want to start here. Time spent on planning here will save time later on in reworking things you didn’t anticipate. Using Group Policy will help ensure that you don’t have to chase every new system and herd them into WSUS. Joining the domain should take care of it!
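
A handy sanity check on any given client is whether the policy actually landed: the WSUS settings live in well-known registry keys under HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate. A quick sketch in Python (Windows-only, naturally, and assuming the standard policy value names):

    import winreg

    WU = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

    def policy_value(subkey: str, name: str):
        """Read one policy value, or None if the GPO hasn't set it."""
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
                return winreg.QueryValueEx(key, name)[0]
        except FileNotFoundError:
            return None

    print("WSUS server:        ", policy_value(WU, "WUServer"))
    print("Status server:      ", policy_value(WU, "WUStatusServer"))
    print("Target group:       ", policy_value(WU, "TargetGroup"))
    print("UseWUServer (1=yes):", policy_value(WU + r"\AU", "UseWUServer"))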

If you’re trying to massage a new WSUS implementation into an already-built Group Policy arrangement, expect a lot of hair-pulling (or catastrophic mistakes) as you try to move inheritance and policies around and break things out properly. It’s really not all that fun early on.

Once you start getting systems populated, you then can start looking at your deficiencies in WSUS. More than likely you will end up approving everything, but that is still a boring time sink. This might also expose a few issues. First, all the systems you’ve neglected for months or years. Second, whether you want to approve patches for every system. I’d suggest approving for every system. That way if you create a new WSUS group later on, inheritance will still apply everything you’ve done previously. If you want to split, say, servers and workstations, I’d suggest getting a separate WSUS instance/box rather than compromise the inheritance stance. It really will pay off someday when you find surprise machines in your environment, but thankfully have been patched because you approve everything for everything.

In the process, you’ll learn how to view the reports in the WSUS management console. This is tricky, so play with the filters extensively. It sucks to get a nice warm fuzzy feeling as you get caught up only to realize you hadn’t even begun to look at what systems had errors or have a backlog of updates from years ago. Don’t just look at new patches!

Eventually, you’ll get caught up!

Then on patch day you have to figure out which systems you want to approve patches for as a test before you slam them out to all the other systems. And you need some method to validate the testing. This is harder than it sounds, because you need systems that get used but are not so important that you’ll jeopardize the business if they screw up. You also need to manage their WSUS membership (and thus their GP objects and OU assignments) to accommodate their status as test boxes. Basically, good luck with that!

Then after some testing time, you can roll out the patches everywhere. Of course, this probably gets preceded by a wide announcement of patching, rebooting, and possible downtime in your maintenance window or overnight, and all the dumb questions that come back from it.

After all of that is done, you get the fun task of going back into WSUS to see which systems failed to do their installs. Then troubleshoot those installs, announce a second round of downtime, and get things up to speed.

In addition, you’ll probably have systems that no one likes to reboot, so they just accumulate months of patches, such as twitchy domain controllers, old systems that are more brittle than leaves in autumn, and database servers. Everyone loves a sudden database server reboot!
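
One small consolation: you can at least spot the reboot-procrastinators programmatically. Windows Update leaves a RebootRequired registry key sitting around while updates wait on a restart; a sketch of the check (the key path below is the standard one, to my knowledge):

    import winreg

    REBOOT_KEY = (r"SOFTWARE\Microsoft\Windows\CurrentVersion"
                  r"\WindowsUpdate\Auto Update\RebootRequired")

    def reboot_pending() -> bool:
        """True if Windows Update is waiting on a restart."""
        try:
            winreg.CloseKey(
                winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, REBOOT_KEY))
            return True
        except FileNotFoundError:
            return False

    print("reboot pending:", reboot_pending())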

Whew, done, right? Nope! Now you have to have a process to validate that patches are installed on all systems that you manage. While WSUS does include reporting, it might be necessary to get some checking done out-of-band from your patching. Enter: vulnerability scanners!

This is a beast in itself as you need to be careful initially with how much you let the scanner beat on your systems. You might just end up doing Windows patch scans, which is an ok baseline if that’s all you can do. Of course, you get the pain of:
– getting systems into the scanner target list (too often this is manual!)
– getting dead systems out of the scanner target list
– parsing through the reports for both of the above
– parsing through the reports for the missing patches or alerts
– reconciling all the issues
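
Even before a scanner is in place, you can do a crude out-of-band spot check on a single box by comparing its installed hotfix list against the KBs you care about. A rough sketch (the KB numbers are examples only; substitute your own list):

    import subprocess

    REQUIRED = {"KB958644", "KB975517"}  # example must-have patches

    def installed_hotfixes() -> set:
        """Pull the installed hotfix IDs straight from the OS."""
        out = subprocess.run(
            ["wmic", "qfe", "get", "HotFixID"],
            capture_output=True, text=True, check=True,
        ).stdout
        return {line.strip() for line in out.splitlines()
                if line.strip().upper().startswith("KB")}

    missing = REQUIRED - installed_hotfixes()
    if missing:
        print("MISSING:", ", ".join(sorted(missing)))
    else:
        print("all required patches present")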

The bottom line is that patching is a necessary foundation for security. If you don’t have a patch management process for your OS of choice, you can’t have a good security stance. And too often the people who flippantly say patching is easy don’t know anything about enterprise patching and think it’s all about Automatic Updates and clicking the yellow icon every month before they go to bed. Proper patching is a time commitment that needs to be made by your IT or security staff, and it takes longer than you probably expect. Oh, and we’ve not even touched on non-Windows patching!

phishing? some people still just don’t get it

This article got me thinking. In it, the current FBI Director says he no longer banks online after nearly being fooled by a phishing email. (Yeah, my first reaction was that he shouldn’t really even be looking at emails like this, let alone almost falling for one…and the appropriate response is not to stop banking online but to stop reading those emails and clicking links in them. And by the way, if you say banking online is safe, but you don’t do it, and you’re an influential person…you’re confused and confusing. But hey, I’m glad it’s 2009 and our FBI Director experienced a “teaching moment” on the old issue of phishing emails…)

So, you can still bank online if you strictly follow some guidelines, none of which ever requires you to even look twice at all the phishing (and legit!) email that may or may not come from your bank. Why is this? Because all of that email is just a bonus of doing your business online. You don’t *need* to read those emails. Ever.

At least…not yet.

Sadly, as more and more services go online (like the Twitter-enabled bank from the other week), I feel like someday we’ll look around and realize all these horribly insecure methods of communication are not just relied upon but are the *only* ways to interact with things like your bank, short of driving there and speaking to someone in person. It’ll happen someday (maybe not for decades yet), and to see it happening with our current set of technologies is a bit scary.

security consultant #8 best job in america

Usually when I read lists of the “best jobs” or “most rewarding jobs” I tend to look for engineer or general IT jobs. For the first time, I actually see a list over on CNN include Computer/Network Security Consultant as the #8 best job in America. I think this is saying something in terms of compliance and security awareness!

I don’t fully agree with the CNN statement that, “If a system is infiltrated by a virus or hacker, it could mean lights out for the security consultant’s career.” It’s true that it could mean you’ll be looking for a new job. But I don’t think it’s entirely accurate that, “This is a job you can’t afford to ever fail in” [says an interviewee for the story]. Our best teacher is failure, and failure is inherent in security. “Failure” as defined by a hacker getting in is not the end of the line. The rest depends on detection, response, mitigation, improvement, and honesty. But I do understand business tends to be all or nothing, especially as you get into public orgs.

On the flip side, I love the first mention under pre-reqs: major geekdom. I fully agree with that. What sets good CISSPs apart from horrible CISSPs? In a nutshell, the geekdom more often than not, and all the other little things that tend to come with most geek/hacker mindsets.

as if heartland and carr don’t get me angry enough already…

Heartland can’t stay out of the news, nor can their CEO Robert Carr. Unfortunately, this time the news deals with a new lawsuit that claims…well…check the excerpt below. Does this explain, or at least put into perspective, Carr’s newfound religion in regards to security? (It actually convinces me he’s all hot air, and I would only trust actual technical audit/pentest findings over whatever he claims reality to be; but that’s not much worse than how I felt when the breach announcement broke…)

In a November 2008 earnings call, according to the complaint, Carr told analysts, “[We] also recognize the need to move beyond the lowest common denominator of data security, currently the PCI DSS standards. We believe it is imperative to move to a higher standard for processing secure transactions, one which we have the ability to implement without waiting for the payments infrastructure to change.”

So much politicking and legal posturing in the media/public over crap like this. People say one thing, but reality is totally different. The article even mentions how VISA removed Heartland from its list of PCI-compliant service providers this year while (someone at VISA) still claims no one compliant with PCI has been breached. Ugh…what an exactly wrong approach to take. That’s like admitting you have your head up your ass.

mcafee course teaches students how to create/use malware

Seems McAfee is holding a course this week on malware and how it works, where students will likely get hands-on experience making a Trojan (or at least working with one) and doing other things malware authors/users like to do. I first saw this via a post on Kurt Wismer’s blog.* In the post, Kurt goes over a few reasons why this course is a bad idea for McAfee.

I’m not sure I totally agree with him, but I don’t have any violent disagreements either. A few points I would bring up in defense of the course (yeah, I’m marking the calendar as a day I actually gave a flimsy defense in favor of McAfee!):

1. The course is 4 hours long and comes with the attached cost of the Focus 09 conference. I’m not sure the course will have any newbie script kiddies in attendance looking to make their mark in the malware business.

2. OK, the detractors’ point is not necessarily about script kiddies, but possibly about newbie researchers getting their hands on these tools/skills for the first time and not fully understanding the risks of a rogue, uncontained piece of malware getting out of their home labs (or, god help us, their work environments if they experiment there!). Fair enough…but I think most virus writers, and even anti-virus writers, probably had their start under worse conditions and with less guidance.

I guess the point of 1 and 2 is that I’m not sure McAfee is introducing any new enablement with their course. If the labs/slides were made public, I would have more of an issue with it.

3. As defenders, we do need to stay abreast of these techniques. If learning how an attack can be done helps me be a better defender, I’m not sure I could argue against that. Well, not directly anyway. My point in going down this road is that maybe someone will write some malware and do Evil Things, but maybe someone else will take this education and become the next senior engineer at Vendor X, or stop Evil Things in their own company. I don’t know, but I’d rather disseminate information if the Evil doesn’t outweigh…

I suppose one could pull in the analogy of bomb-making into this discussion. Is it ok to teach people how to make bombs? Perhaps not. Should anti-bomb engineers (yeah what they’re called right now is escaping my recollection) know how to make bombs? I think so.

4. Kurt has a great point that maybe McAfee, as an anti-malware company, shouldn’t be educating others on how to make more malware. I think this would be far more true if they were, say, teaching a room full of high school students. Less true here, although still a valid argument.

5. Kurt’s also correct in saying it doesn’t matter if McAfee is teaching these concepts using an already-existing toolkit or writing things from scratch. That really should have no bearing on the discussion.

In the end, I’m not holding fast to a pro-course stance, but I have some reasons to stay on the fence about this topic (agnostic, if you will, while erring on the side of the course’s value).

* I like Kurt’s posts/opinions most of the time. Even when I don’t agree with them, he states them clearly and with the informed conviction all people should exhibit.

is virtualization here to stay, or just a stop-gap back to big iron?

Hoff has opined about virtualization over on his blog. He calls it an incomplete thought (a blog post series, really), but it’s quite thorough and deep. I suggest reading the comments as well.

In essence, Hoff says, “There’s a bloated, parasitic resource-gobbling cancer inside every VM.” It’s true. Virtualization isn’t a solution to much of anything. It’s a golem of a beast created to fix problems that were themselves symptoms of much larger problems.

Here’s a really quick, 30-second mindset I have on this.

  • mainframes centralize everything and people get things done with their slices
  • personal computers take the world by storm
  • suddenly everyone can do something on their own without the centralized wizards and curtains.
  • …and everyone does things on their own, creating apps, languages, etc; decentralized apps and data
  • the OS just can’t really keep up; the same feature bloat hit Windows that hits all software that wants to be popular and fit every niche need (McAfee, Firefox, browsers, etc.)
  • then shit gets too splintered and the IT world becomes an inefficient money-drain of equipment and maintenance
  • attempts to centralize everything are met with cries of “they’re stealing our admin rights, but my system is slow when I have admin rights!”

All of this ends up turning into a cycle, and one we’re destined to follow over and over. Big iron. Smaller iron. Big iron. Centralized. Decentralized. Centralized. Administrative power over your individual system. Locked down. Empowered. Locked down. It’s like a “grass is greener” mentality out of control.

But it’s more than that, as well. Part of this cyclic mess of a vortex is the speed at which technology is progressing and our world is changing. It moves so fast that no one (businesses or individuals) can take the necessary time to do any of this correctly. As you’ll hear Ranum, and I think even Potter, say in recent talks, the problems of today are mistakes from 15 years ago. I think things just move too fast for us to realize it.

At any rate, it’s not like we can do much about it today, but at least we can be cognizant of this situation and do what we can in small measures to avoid the eddies and undertows that drown so many in these changes.

dialogue will save us

Last week I had a tiny, tiny rant about some feelings on isolated HR practices. The author who inspired my tiny, tiny rant posted a response to my initial comment on his blog, so it is only fair and right that I mention it here as well. Appreciation and thanks passed on! 🙂

Of note, I don’t have a relationship with any recruiters. I still get a voice mail on my phone now and then from a firm or two, but I admit to not following up or updating my resume with them (god, I still haven’t put CISSP on it either!). I really should get my face back on some recruiters’ minds…but I’m more than aware that Des Moines is not a very big city by any means, and most IT managers/recruiters are probably only 1 or 2 steps away from any others whom I talk to. I don’t like the idea that someone hears I might be looking just because I updated with a recruiter. (One recruiter 3 years ago scared me away because she mentioned she was well-acquainted with my boss, who didn’t know I was looking for a job…). I imagine that with my CISSP and easily 5+ (loyal) years of progressive technical experience, I’m actually now marketable.

we usually do practice what we preach…

…but security is rarely about absolutes.

Bill Brenner has posted 7 Ways Security Pros Don’t Practice What They Preach. I’m surprised more of these types of lists don’t show up, especially as normal users rage against security measures.

So, my thoughts and how do I rate?

1. Using URL shortening services. Yeah, these suck. I hate clicking them and I hate using them. But sadly, Twitter use has forced us to do something risky in order to fit ghastly URLs into small boxes. Hell, even magazines use them. Just think if browsers could only handle xx characters and had to truncate the rest of the URL in the address bar. Yeah, fucked. This reminds me of 1997 IRC, where you’d learn quickly not to click on blind links because you’d see some fucked-up shit that you thought was going to be something cute. This is probably why “Rick Rolling” never seemed that big a deal to me. Am I guilty of using these? Sadly, yes. I use (and only use) tinyurl.com, but I should move to one that, by default, previews the URL first.
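
For the curious: previewing a shortened URL without visiting the target is simple to script, since the shortener answers with a redirect and puts the real destination in the Location header. A sketch, assuming the third-party requests library and a made-up link:

    import requests

    def expand(short_url: str) -> str:
        """Ask the shortener where it points without following it there."""
        resp = requests.head(short_url, allow_redirects=False, timeout=5)
        return resp.headers.get("Location", "(no redirect returned)")

    print(expand("http://tinyurl.com/example"))  # hypothetical link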

2. Granting themselves exemptions in the firewall/Web proxy/content filter. Disclaimer: Yes, I’m exempt from some policies at work because I have to investigate such things. Yes, I get to exempt myself from some web site category filters (ever do security research when “hacking” sites are blocked? ever investigate hits on your external services when you have no idea what might be hosted on the other end? ever go to a blocked URL that a user hit only to see just why it was blocked?). But other than legitimate work uses, I don’t poke my own holes into security protections just because I want to, such as gaming sites or opening up holes for me to bridge a home network…

But here’s the real deal. The business wants you to get XYZ done. If you were a normal employee, you’d do whatever you *can* to skirt the rules if those rules are stopping you from getting XYZ done as requested. When you apply that same business habit to the people who control the rules, you put them in a position where they *can* accomplish XYZ because they *do* have that power. This is a classic example of how security and convenience butt heads, and sadly convenience almost always wins without some help on the security side. This is why I hate the question, “But technically, you *can* open the firewall for me, right?” Yes, duh, I *can,* but I won’t.

3. Snooping into files/folders that they don’t own. Doing this in the course of an investigation or because a manager or HR has specifically requested it (properly I might add) should be quite alright. Again, this is like saying don’t jump in the water, and then yelling at the fish because they’re inherently in the water.

There are also other reasons, such as disk usage investigations (really, I shouldn’t question the business need of that 300MB movie file on your network drive when my fileserver disk is filling up at 10pm?) or migrating a user from one system to another (yeah, people shove shit in the craziest places on their disk…).
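
The disk-usage kind of look doesn’t even require opening anything; a dumb walk of the share surfaces the offenders by size alone. Something like this sketch (the share path is hypothetical):

    import os

    def big_files(root, threshold_mb=250):
        """Yield (size, path) for files over the threshold."""
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    size = os.path.getsize(path)
                except OSError:
                    continue  # dead link, permissions, etc.
                if size > threshold_mb * 2**20:
                    yield size, path

    for size, path in sorted(big_files(r"\\fileserver\users"), reverse=True)[:20]:
        print(f"{size / 2**20:8.1f} MB  {path}")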

But looking at things you shouldn’t look at, should be avoided. If a file says something like, “performance appraisal” or “tax return,” you probably want to take extra care not to open it. If you’re on an exec system, it’s probably best to stick to only the exact task at hand. Basically: common fucking sense.

Then again, this is just me. Even if I have such files in front of me, I won’t open them or touch them if I can help it. I think IT, and especially security, hinge entirely on the integrity of the employees. Once that goes, there is no getting it back. So I try to vehemently protect it.

4. Using default or easy passwords. This is a red herring point; shame on Brenner. But it does ring of some truth. First, of course I use some easy passwords. Why? Because I dub such uses low-value fruit. For instance, I tend to reuse forum passwords because forums are untrusted systems and I maybe post 3 times and that’s it. I don’t care if the admin boinks the database and publishes my password. But for other things, in recent years I have slowly migrated all those passwords I made before I thought about security into more complex ones. I’m almost done, in fact. In defense of admins, I’m positive we use complex passwords far more often than the normal population of users does.
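
As for the migration itself, generating a genuinely complex password per account takes only a few lines. A sketch using Python’s secrets module:

    import secrets
    import string

    def gen_password(length: int = 20) -> str:
        """A random password drawn from letters, digits, and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(gen_password())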

5. Failure to patch. I patch any time I have a moment at home, especially my Windows boxes. Applications getting patched is a bit different, but I have only limited Windows use these days. At work, this is a whole new ballgame, as patch management needs to scale and there needs to be testing and change management. Windows/Microsoft patches are one thing, but I conjecture that very few shops keep applications patched (let alone internal applications). See item #2 for clues on why patches sometimes either don’t get deployed or keep getting pushed off (hint: it has to do with stakeholders/customers).

6. Using open wireless access points. This is an interesting item. First, security pros at least know what to look for and what not to do on open wireless. Hopefully they’re not checking email with clear-text auth. Second, the risk of being snarfed at a wireless hotspot can be low. But all it takes is once and you’re pwned. Me? I use open wireless, but I’m highly conscious of what I do on it, even accounting for sidejacking/injected CSRF attacks. Then again, I tend to be the snooper as opposed to the snooped…
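
What “clear-text auth” means in practice: plain IMAP on port 143 sends your login over the air readable to anyone sniffing, while the SSL variant wraps the whole session. A sketch with a hypothetical mail host (the function is defined but never called):

    import imaplib

    def open_mailbox(host: str, user: str, password: str) -> imaplib.IMAP4:
        # Bad on open wireless: imaplib.IMAP4(host) speaks plain text on
        # port 143, so credentials cross the air in the clear.
        # Better: TLS from the first byte, on port 993.
        mail = imaplib.IMAP4_SSL(host)
        mail.login(user, password)
        return mail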

See, when security pros tell “users” to not use open wireless access points, we’d only do so because we know the user isn’t technical enough to do it the right way. But what we’re really saying is, “don’t do sensitive things on open wireless, and be careful and protected from other things already.” This limits your risk greatly.

7. Misuse of USB sticks and other removable storage devices. I don’t have much to say on this one! But I will say I don’t use USB sticks at work or for moving work data. And I don’t keep sensitive stuff on my personal USB sticks longer than I need to. My assumption is that I will lose the stick at some point.

8. Seriously, you forgot to include running as a least-privilege Windows user? I’d be guilty of this, both at work and at home. At least at work I only run as domain admin on servers or via runas. For as much as we preach about least privilege, we cheat at our own advice by running more Linux and MacOS. If we were on Windows systems, I’d bet most of us would still run as local admin.

One thing I notice is how so many of these points skew our “advice” a bit. Most of these are, “Don’t do this unless…” or “Do this, but…” It’s the ability to fill in those second halves that makes us security geeks. When people want advice, they usually want simple advice. “Don’t use simple passwords” is far easier and more digestible than explaining how to rate the risk of all the services you use a password for and how they interoperate.

metasploit unleashed course now available from offsec

The “free” Metasploit Unleashed course from Offensive Security has been…unleashed! The additional materials you can purchase have been held back a bit until the next stable version of Metasploit (v3.3), but the wiki portion is available to consume now.

I’d strongly suggest donating money or paying for the additional materials if you have the funds and desire. Even if it weren’t going to charity, the guys who make BackTrack and this training possible deserve the kickback.

a trainwreck of a technical interview: skye cloud dns

@SimpleNomad threw down a doozy of a link today to a CNet interview with Jon Shalowitz, general manager of Skye, a new hosted DNS “cloud” division of Nominum, in which he talks about why his proprietary DNS cloud solution is better than what is currently used. This is an example of many things, including how some people will say anything to market their product. It’s also a shining example of the irresponsibility of putting crap like this into the ears of other managers, who may then bring up these “solutions.”

Freeware legacy DNS is the internet’s dirty little secret — and it’s not even little, it’s probably a big secret…Given all the nasty things that have happened this year, freeware is a recipe for problems, and it’s just going to get worse.

So, freeware (later he clarifies that he means “open source” when he says freeware) is the root of evil. Moving on…

Freeware is not akin to malware, but is opening up those customers to problems. So we’ve seen the majority of the world’s top ISPs migrating away from freeware to a solution that is carrier-grade, commercial-grade and secure.

So, freeware is not carrier-grade, commercial-grade, nor secure. This is a big jump in logic with nothing backing it up. And there is nothing inherent in a non-freeware solution that makes it carrier-grade, commercial-grade, or secure.

By virtue of something being open source, it has to be open to everybody to look into. I can’t keep secrets in there. But if I have a commercial-grade software product, then all of that is closed off, and so things are not visible to the hacker.

So, does this mean code review is bad, or improving security through obscurity is good? I’d ask that as a question as I don’t want to strawman the poor fellow, but none of this really demonstrates any understanding of development practices or security common sense. You shouldn’t be relying on keeping secrets. At least open source code with holes exposed has the chance to close those holes rather than keep them latently present for years.

Nominum software was written 100 percent from the ground up, and by having software with source code that is not open for everybody to look at, it is inherently more secure.

And how does anyone know your software is “inherently more secure” if no one can look at it? Because you can keep your little secrets hidden, the secrets of shoddy code?

I would respond to them by saying, just look at the facts over the past six months, at the number of vulnerabilities announced and the number of patches that had to made to Bind and freeware products. And Nominum has not had a single known vulnerability in its software.

Jon has used lame examples of security incidents this year to somehow prove his “statistics,” so I’d offer right back that Microsoft and Apple and Adobe have closed-source software but have been inundated with security issues all year and beyond. Oh, and a commenter linked to a disclosed vulnerability for Nominum software. Granted, it’s not this Skye “cloud” DNS solution, but I have a strong suspicion Skye is just the same products rebranded by marketing.

By delivering a cloud model that allows essentially any enterprise or any ISP to have the wherewithal to take advantage of a Nominum solution is like putting fluoride in the water.

An argument can be made about a homogeneous environment being inherently less secure…I mean, if we’re talking about “inherent” assumptions.

You really do need to look under the hood and kick the tyres. Maybe it’s a Ferrari on the outside, but it could be an Austin Maxi on the inside. The software being run and the network itself are very critical. And that’s one point the customer really needs to be wary of.

Umm, exactly. People need to be able to look under the hood of the code. Oh, and saying something to the effect of, “If you care about security you’ll accept we’re right,” is not an argument. It’s typical marketing/sales-speak to confuse the dimwitted.

All in all, poor Jon has given us an example of how NOT to give a technical interview. By the way, if you dig a bit on him, you’ll see he is marketing and product management (more marketing), not technical. And when the interviewer makes a point of asking point-blank whether he means open source, that is an obvious giveaway that you’re doing something wrong and you need to stop and back up, not truck forward like an idiot.

moore on nss labs comparing antimalware

HD Moore posted his thoughts on a recent NSS Labs report covering some “anti-malware” testing. I’m not too surprised by the results, even though it’s still a bit disheartening to see the freer products score lower (though really, you’d expect them to score below the big boys with money). I just know that surfing the web doesn’t actually scare me, but I’m constantly wary and conscious of what I’m doing and what scripts I’m allowing to run. I can’t imagine doing so on a Windows/IE box day-to-day anymore.

The real problems are user education and layered defenses (or risk management), not some expectation that anti-malware be perfect.