is virtualization here to stay, or just a stop-gap back to big iron?

Hoff has opined about virtualization over on his blog. He calls it an incomplete thought (a blog post series, really), but it’s really quite thorough and deep. I suggest reading the comments as well.

In essence, Hoff says, “There’s a bloated, parasitic resource-gobbling cancer inside every VM.” It’s true. Virtualization isn’t a solution to much of anything. It’s a golem of a beast created to fix problems that were themselves symptoms of much larger problems.

Here’s a really quick, 30-second mindset I have on this.

  • mainframes centralize everything and people get things done with their slices
  • personal computers take the world by storm
  • suddenly everyone can do something on their own without the centralized wizards and curtains.
  • …and everyone does things on their own, creating apps, languages, etc; decentralized apps and data
  • the OS just can’t really keep up; same feature bloat hit Windows that hits all software that wants to be popular and fit every niche need (McAfee, Firefox, browsers, etc).
  • then shit gets too splintered and the IT world becomes an inefficient money-drain of equipment and maintenance
  • attempts to centralize everything are met with cries of “they’re stealing our admin rights, but my system is slow when I have admin rights!”

All of this ends up turning into a cycle, and one we’re destined to follow over and over. Big iron. Smaller iron. Big iron. Centralized. Decentralized. Centralized. Administrative power over your individual system. Locked down. Empowered. Locked down. It’s like a “grass is greener” mentality out of control.

But it’s more than that, as well. Part of this cyclic mess of a vortex is the speed at which technology progresses and our world changes. It moves so fast that no one (business or individual) can take the necessary time to do any of this correctly. As you’ll hear Ranum, and I think even Potter, say in recent talks, the problems of today are the mistakes from 15 years ago. I think things just move too fast for us to realize it.

At any rate, it’s not like we can do much about it today, but at least we can be cognizant of this situation and do what we can in small measures to avoid the eddies and undertows that drown so many in these changes.

dialogue will save us

Last week I had a tiny, tiny rant about some feelings on isolated HR practices. The author who inspired my tiny, tiny rant posted a response to my initial comment on his blog, so it is only fair and right that I mention it here as well. Appreciation and thanks passed on! 🙂

Of note, I don’t have a relationship with any recruiters. I still get a voice mail on my phone now and then from a firm or two, but I admit to not following up or updating my resume with them (god, I still haven’t put CISSP on it either!). I really should get my face back into some recruiters’ minds…but I’m more than aware that Des Moines is not a very big city by any means, and most IT managers/recruiters are probably only 1 or 2 steps away from any others whom I talk to. I don’t like the idea that someone hears I might be looking, just because I updated with a recruiter. (One recruiter 3 years ago scared me away because she made mention she was well-acquainted with my boss, whom I wasn’t telling I was looking for a job…). I imagine with my CISSP and easily 5+ (loyal) years of progressive technical experience I’m actually now marketable.

we usually do practice what we preach…

…but security is rarely about absolutes.

Bill Brenner has posted 7 Ways Security Pros Don’t Practice What They Preach. I’m surprised more of these types of lists don’t show up, especially as normal users rage against security measures.

So, my thoughts and how do I rate?

1. Using URL shortening services. Yeah, these suck. I hate clicking them and I hate using them. But sadly, Twitter use has forced us to do something risky in order to fit ghastly URLs into small boxes. Hell, even magazines use them. Just think if browsers could only handle xx characters and had to truncate the rest of the URL in the address bar. Yeah, fucked. This reminds me of 1997 IRC, where you’d learn quickly not to click on blind links because you’d see some fucked up shit that you thought was going to be something cute. This is probably why “Rick Rolling” never seemed that big a deal to me. Am I guilty of using these? Sadly, yes, I use one (and only one), but I should move to one that, by default, previews the URL first.
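If your favorite shortener doesn’t preview, you can always peek yourself: ask the shortener for its redirect but refuse to follow it, so you see the destination without ever visiting it. A minimal sketch, assuming a single-hop redirect (real shorteners may chain several):

```python
import urllib.request
import urllib.error

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse to follow redirects so we can inspect them instead."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # returning None means "do not follow"

def expand(short_url):
    """Return the Location header a shortened URL points at, or None."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        opener.open(short_url, timeout=5)
    except urllib.error.HTTPError as e:
        # With redirects disabled, a 301/302 surfaces here as an "error",
        # and its headers tell us where we would have been sent.
        return e.headers.get("Location")
    return None
```

So `expand("http://tinyurl.example/abc")` would hand back the real destination for eyeballing before you commit to clicking.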

2. Granting themselves exemptions in the firewall/Web proxy/content filter. Disclaimer: Yes, I’m exempt from some policies at work because I have to investigate such things. Yes, I get to exempt myself from some web site category filters (ever do security research when “hacking” sites are blocked? ever investigate hits on your external services when you have no idea what might be hosted on the other end? ever go to a blocked URL that a user hit only to see just why it was blocked?). But other than legitimate work uses, I don’t poke my own holes into security protections just because I want to, such as gaming sites or opening up holes for me to bridge a home network…

But here’s the real deal. Business wants you to get XYZ done. If you were a normal employee, you’d do whatever you *can* to skirt the rules if those rules are stopping you from getting XYZ done as requested. When you start doing that same business habit to the people who control the rules, then you put those people into a position where they *can* accomplish XYZ because they *do* have that power. This is a classic example of how security and convenience butt heads, and sadly convenience almost always wins without some help on the security side. This is why I hate the question, “But technically, you *can* open the firewall for me, right?” Yes, duh I *can,* but I won’t.

3. Snooping into files/folders that they don’t own. Doing this in the course of an investigation or because a manager or HR has specifically requested it (properly I might add) should be quite alright. Again, this is like saying don’t jump in the water, and then yelling at the fish because they’re inherently in the water.

There are also other reasons, such as disk usage investigations (really, I shouldn’t run that 300MB movie file you have on your network drive to determine business need because my fileserver disk is filling up at 10pm?) or when migrating a user from one system to another (yeah, people shove shit in the craziest places on their disk…).

But looking at things you shouldn’t look at, should be avoided. If a file says something like, “performance appraisal” or “tax return,” you probably want to take extra care not to open it. If you’re on an exec system, it’s probably best to stick to only the exact task at hand. Basically: common fucking sense.

Then again, this is just me. Even if I have such files in front of me, I won’t open them or touch them if I can help it. I think IT and especially security are hinged entirely on the integrity of the employees. Once that goes, there is no getting it back. So I try to vehemently protect that.

4. Using default or easy passwords. This is a red herring point; shame on Brenner. But it does ring of some truth. First, of course I use some easy passwords. Why? Because I dub such uses as low value fruit. For instance, I tend to reuse forum passwords because they’re untrusted systems and I maybe post 3 times and that’s it. I don’t care if the admin boinks the database and publishes my password. But for other things, in recent years I have slowly migrated all those passwords I made before I thought about security into more complex ones. I’m almost complete, in fact. In defense of admins, I’m positive we have a far higher tendency to use complex passwords than your normal population of users does.

5. Failure to patch. I patch any time I have a moment at home, especially my Windows boxes. Applications getting patched is a bit different, but I have only limited Windows use these days. At work, this is a whole new ballgame as patch management needs to scale and there needs to be testing and change management. Windows/Microsoft patches are one thing, but I conjecture that very few shops keep applications patched (let alone internal applications). See item #2 for clues on why patches sometimes either don’t get done or keep getting pushed off (hint: it has to do with stakeholders/customers).

6. Using open wireless access points. This is an interesting item. First, security pros at least know what to look for and what not to do at wireless access points. Hopefully they’re not checking email with clear text auth. Second, the risk of being snarfed at a wireless hotspot can be low. But all it takes is once and you’re pwned. Me? I use open wireless, but I’m highly conscious what I do on them, even including sidejacking/injected CSRF attacks. Then again, I tend to be the snooper as opposed to the snooped…

See, when security pros tell “users” to not use open wireless access points, we’d only do so because we know the user isn’t technical enough to do it the right way. But what we’re really saying is, “don’t do sensitive things on open wireless, and be careful and protected from other things already.” This limits your risk greatly.

7. Misuse of USB sticks and other removable storage devices. I don’t have much to say on this one! But I will say I don’t use USB sticks at work or for moving work data. And I don’t keep sensitive stuff on my personal USB sticks longer than I need to. My assumption is that I will lose the stick at some point.

8. Seriously, you forgot to include running as least privilege Windows user? I’d be guilty of this, both at work and at home. At least at work I only run as domain admin on servers or using runas. For as much as we preach about least privs, we cheat at our own advice by running more Linux and MacOS. If we were on Windows systems, I’d bet most still run as local admin.

One thing I notice is how so many of these points skew our “advice” a bit. Most of these are, “Don’t do this unless…” or “Do this, but…” It’s the ability to fill in those second halves that makes us security geeks. When people want advice, they usually want simple advice. “Don’t use simple passwords,” is far easier and more digestible than explaining how to rate the risk of all the services you use a password for and how they interoperate.

metasploit unleashed course now available from offsec

The “free” Metasploit Unleashed course from Offensive Security has been…unleashed! The additional materials you can purchase have been held back a bit until the next stable version of Metasploit (v3.3), but the wiki portion is available to consume now.

I’d strongly suggest donating money or paying for the additional materials if you have the funds and desire. Even if it weren’t going to charity, the guys who make BackTrack and this training possible deserve the kickback.

a trainwreck of a technical interview: skye cloud dns

@SimpleNomad threw down a doozy of a link today to a CNet interview with Jon Shalowitz, general manager of Skye, a new hosted DNS “cloud” division of Nominum, who talks about why his proprietary DNS cloud solution is better than what is currently used. This is an example of many things, including how some people will say anything to market their product. And a shining example of irresponsibility in putting crap like this into the ears of other managers who may then bring up these “solutions.”

Freeware legacy DNS is the internet’s dirty little secret - and it’s not even little, it’s probably a big secret…Given all the nasty things that have happened this year, freeware is a recipe for problems, and it’s just going to get worse.

So, freeware (later he clarifies that he means “open source” when he says freeware) is the root of evil. Moving on…

Freeware is not akin to malware, but is opening up those customers to problems. So we’ve seen the majority of the world’s top ISPs migrating away from freeware to a solution that is carrier-grade, commercial-grade and secure.

So, freeware is not carrier-grade, commercial-grade, nor secure. This is a big jump in logic with nothing backing it up. And there is nothing inherent in a non-freeware solution that makes it carrier-grade, commercial-grade, or secure.

By virtue of something being open source, it has to be open to everybody to look into. I can’t keep secrets in there. But if I have a commercial-grade software product, then all of that is closed off, and so things are not visible to the hacker.

So, does this mean code review is bad, or improving security through obscurity is good? I’d ask that as a question as I don’t want to strawman the poor fellow, but none of this really demonstrates any understanding of development practices or security common sense. You shouldn’t be relying on keeping secrets. At least open source code with holes exposed has the chance to close those holes rather than keep them latently present for years.

Nominum software was written 100 percent from the ground up, and by having software with source code that is not open for everybody to look at, it is inherently more secure.

And how does anyone know your software is “inherently more secure” if no one can look at it? Because you can keep your little secrets hidden, the secrets of shoddy code?

I would respond to them by saying, just look at the facts over the past six months, at the number of vulnerabilities announced and the number of patches that had to made to Bind and freeware products. And Nominum has not had a single known vulnerability in its software.

Jon has used lame examples of security incidents this year to somehow prove his “statistics,” so I’d offer it right back that Microsoft and Apple and Adobe have closed source software but have been inundated with security issues all year and beyond. Oh, and a commenter linked to a disclosed vulnerability for Nominum software. Granted, it’s not this Skye “cloud” DNS solution, but I have a strong suspicion Skye is just the same products rebranded by marketing.

By delivering a cloud model that allows essentially any enterprise or any ISP to have the wherewithal to take advantage of a Nominum solution is like putting fluoride in the water.

An argument can be made about a homogeneous environment being inherently less secure…I mean, if we’re talking about “inherent” assumptions.

You really do need to look under the hood and kick the tyres. Maybe it’s a Ferrari on the outside, but it could be an Austin Maxi on the inside. The software being run and the network itself are very critical. And that’s one point the customer really needs to be wary of.

Umm, exactly. People need to be able to look under the hood of the code. Oh, and saying something to the effect of, “If you care about security you’ll accept we’re right,” is not an argument. It’s typical marketing/sales-speak to confuse the dimwitted.

All in all, poor Jon has given us an example of how NOT to give a technical interview, especially when the interviewer makes a point of asking point blank if he means open source. That is an obvious giveaway that you’re doing something wrong and you need to stop and back up, not truck forward like an idiot. By the way, if you dig a bit on him, you’ll see he is in marketing and product management (more marketing), not technical.

moore on nss labs comparing antimalware

HD Moore posted up his thoughts on a recent NSS Labs report on some “anti-malware” testing. I’m not surprised too much by the results, even though it still is a bit disheartening to see the freer products score lower (though really, it’s expected they’d score below the big boys with money). I just know that surfing the web doesn’t actually scare me, but I’m constantly wary and conscious of what I’m doing and what scripts I am allowing to run. I can’t imagine doing so on a Windows/IE box day-to-day anymore.

The real problems are user education and layered defenses (or risk mgmt), not some expectation that Anti-malware be perfect.

doing nothing is good for the soul

Even geeks need to unplug and relax a bit. Security geeks probably more so (although I may be a bit biased there) with our constant battle to maintain acceptable security and the constant threat of our phones, PDAs, and Blackberries chirping for our attention. I read an article by Tom Hodgkinson titled “10 ways to enjoy doing nothing” (CNN) yesterday and wanted to echo a few points.

As a background, I have leanings towards zen buddhism and meditation. Not necessarily your traditional lotus position meditation, but just the ability to find peace and reflection where you are; and just mentally and spiritually relax. I’ll add a few other points below from my own experiences.

1. Banish the guilt. We are all told that we should be terribly busy, so we can’t laze around without that nagging feeling that we need to be getting stuff done….Guilt for doing nothing is artificially imposed on us by a Calvinistic and Puritanical culture that wants us to work hard. That’s true, right? Me, I tend to laze around and play video games. While that is still technically *doing* something, it usually is not something that directly adds to my life, ya know? The point is, don’t be guilty about doing things that don’t matter or doing nothing at all. Find a hobby, play a guitar, tinker with something, but never let it make you feel anxious or time-constrained or stressed when you do it. Just do it and flow with it like a babbling stream rather than a raging wave.

7. Lie in a field. Doing nothing is profoundly healing… Listen to the birds and smell the grass. Ever do this as a kid? I did. It’s beautifully calming and amazing. Ever do this as an adult? Me either, not nearly enough!

8. Gaze at the clouds. Don’t have a field nearby? Doing nothing can easily be dignified by calling it “cloud spotting.” It gives a purpose to your dawdling. Go outside and look up at the ever-changing skies and spot the cirrus and the cumulonimbus. You can even do this as you sit at Starbucks on the outside chairs if they have them. Or on the steps of your nearby library. Gazing up at the sky no matter what the weather is an amazing, heart-warming, thing that helps put so many things about life and our place and our thoughts into perspective.

And my own additions…

11. Gaze at the stars/sit out in the rain/sit out while it snows. I have an immense appreciation for nature; nothing in the world is or ever will be as perfect as a whole, even with its individual imperfections. Stargazing, sitting out in and watching/feeling/smelling/hearing the rain or snow are the kinds of things that make you know you’re alive; your senses assuring you of existence. You can even do this in your regular residential neighborhood (although seeing the stars might be a bit difficult without a good dark park or something) as long as the rest of the world is not too busy. Preferably without distractions, but I wouldn’t judge someone less if they mixed in some mood music as well (“new age” music or even minimalistic electronica adds to these moments).

12. Exercise. Many people bemoan exercise as boring or painful or just a waste of time. If you’re going to be doing something, whether cardio or weights, you really should enjoy doing it; it’s good for the soul to be happy with the things you do. So rather than focus on the pain, focus on the good things. Focus on your breathing, not just the rate, but *how* you breathe (chest vs stomach; mouth vs nose…). Focus on the movements of your body, the contracting and relaxing of the muscles that move our limbs. Focus on the rhythmic beat of your heart. Focus on your posture and form. Focus on those points where you do feel real pain and be aware of your limits. If you need to, include music that you can focus on as well: minimal words, heavy on beat and instrumentation/sound, and longer than 3-minute sound-bites (go for real trance/techno).

the doctor will see you now, after we scan your id

Our ID cards are being scanned at an increasingly alarming rate. Marisa over at Errata Security has posted about having her driver’s license scanned at a doctor’s office (including more links to other reports).

I don’t see why this is necessary. Is identity theft at a doctor’s office *that* big of a deal? What is the gain, free health care at someone else’s expense? Hijacked prescriptions? I can’t imagine healthcare theft is widespread, as those seem like ballsy, planning-intensive forms of crime. Then again, maybe all it takes is one check-up and that information for someone else is entered into your record (positive for herpes? allergic to penicillin? DNA on file that isn’t yours?), which can have disastrous effects on your health later on. But that seems to be more a failure of relying so heavily on what is stored on a computer somewhere. We see movies that make these wild scenarios (The Net, Hackers, and many others) where a computer says you’re evil so everyone treats you as evil without a question…

Shit, maybe I’m convincing myself of something here!

Still, what if we go further down the RFID route, or any type of embedded ID system? RFID could be gathered without your being able to stop it once you walk in the door to an office (or god forbid walk *near* it and away!). An embedded ID chip (like pets are getting these days) pretty much has the requirement to be scanned, and let’s just hope that’s not being saved and is just being validated (yeah right). These kneejerk reactions to having our IDs scanned may seem a joke 20 years from now.

If you read the “Red Flags” Rule from the FTC, you’ll get the distinct impression this is not to protect consumers, but to protect healthcare providers. It also doesn’t even make a hint that providers should scan and store ID card information. It sounds very much like being carded at a bar, where a visual glance at the card will be enough. (What I “like” about the Red Flags Rule is just how vague it is…and we thought PCI was vague! This basically says you need to spot “red flags” and good luck with that!)

It was just last week that I mused on Twitter that I might have to look into a tight sleeve for my driver’s license; a sleeve that keeps the front visible but obscures the back so that I can stop a merchant/receptionist from scanning it while they slip the card out of the sleeve, yet still slip it into the slot in my wallet.

catching the unicorn that is nac

(via infosecramblings) Jennifer over at Security Uncorked has posted up a paper on why NAC is failing. It makes for a good read (pdf).

If you were to ask me, before reading this paper, what my gut reaction to NAC is, it would read:

  • complex to manage in anything beyond a lab or small org with strict system policies, low speed of change, and few exceptions.
  • can only exist with other foundational technologies like something to compare against (AV version, etc) and something to control access (managed switches, firewalls, proxy, etc). If you don’t have the foundations managed well, you have no business putting NAC in yet.
  • while it can be a nice way to validate inventory and policies, every organization still has to manage the exceptions and guests. If you have inventory and policy-checking already being done, NAC’s only purpose is rogue isolation, which you can do, to varying degrees of depth, in many other (even homebrew) ways.
  • I always hear about messy, issue-prone installation attempts and have never heard of one real success story.
  • orgs like McAfee already are trying to put all the pieces together anyway; it’s not a big step to take their huge suite of apps and just add in a control piece to their rogue detection/ePO/HIPS/NIPS conglomerate (for better or worse, since all of that rolled into one huge dungpile makes for a beast in administrative costs). But you still need the foundations set even outside such a “complete” (yay marketing!) security suite. This leads into the “it’s a feature not a product” argument, which I don’t usually voice because it sounds way too “analyst-like” for my tastes. Besides, too many features = unwieldy product that is worth far less than the sum of the features!
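That homebrew rogue-isolation point deserves a concrete shape. The simplest version is just diffing what the network actually sees (an ARP cache, DHCP leases) against your known inventory. A sketch, with made-up data for illustration:

```python
# Homebrew rogue detection: diff the network's view against known inventory.
# File formats, MACs, and IPs here are hypothetical examples, not any
# particular NAC product's behavior.
def find_rogues(inventory_macs, observed):
    """observed: iterable of (mac, ip) pairs, e.g. parsed from
    'arp -a' output or a DHCP lease file."""
    known = {m.lower() for m in inventory_macs}
    return sorted((mac, ip) for mac, ip in observed
                  if mac.lower() not in known)

observed = [("00:11:22:33:44:55", "10.0.0.5"),
            ("de:ad:be:ef:00:01", "10.0.0.99")]
inventory = ["00:11:22:33:44:55"]
print(find_rogues(inventory, observed))
# [('de:ad:be:ef:00:01', '10.0.0.99')]
```

Obviously this only flags rogues; it doesn’t isolate them, and MACs can be spoofed. But it shows how little of NAC’s core check you actually need if the foundations (inventory, managed switches) are already in order.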

It makes me a lot more confident in my impressions of NAC that Jennifer hit on these points and more (for instance, I totally didn’t think about authentication/identity with NAC) in her paper. I’m also not sure I’ve ever read a more complete and understandable description of NAC in general!

One key quote I want to pull out is this one, which I think succinctly sums up some of my feeling.

A single NAC product will not, in any environment, scale or grow to a level acceptable for widespread adoption. At the moment, the solutions are too difficult to implement and there are other alternatives that give organizations many of the features NAC can offer without the hassle involved with implementing NAC.

Often we do have to implement security technologies and apps that aren’t perfect and don’t provide 100% coverage, no matter how much hacking we do on the side. But NAC is too big of a beast for many managers to swallow and still admit it only protects swaths X, Y, and Z of systems/scenarios. Huge suites of varying quality (like McAfee, Symantec, Cisco, etc.) that already have roots in what I consider the foundational aspects of an enterprise network already have their work cut out for them. It’s natural that NAC will absorb into them rather than be yet another boulder to massage into the corporate cyber landscape.

If I had one suggestion, it would be to include a sub-list in the exec summary under the technical challenges item, and quickly list the big technical challenges specifically, or word it in a way that my initial reaction to that item is not the question, “What challenges?”

i don’t like to read too much into resumes

I’m of a mind that some HR folks overthink their job, especially when it comes to hiring and looking at resumes. Maybe this is all just a result of needing to sift through and rule out potentially dozens or hundreds of resumes for a single job (and maybe have backable reasons for whittling them down!). But it still seems like a lot of overthink for something you just can’t predict until an interview and you test drive the employee. This tiny mini-rant was inspired by a post over on Jeff Snyder’s blog, an excellent blog that combines both security issues and career/hiring issues. I’m not sure I know of another blog similar to his.

Though there is no magic length of time to stay with an employer, this HR executive likes to see longevity of 3-5 years or longer with each employer. Within each 3-5 year stay, Mike looks for growth. Growth could be represented by expanded skills, expanded responsibilities, bigger titles, etc.

I don’t really buy into such an approach for tech positions. Managerial or leadership positions, sure. But I think this threatens to shackle technical people with the often superficial trappings of business appearances. With the exception of being let go repeatedly very quickly, I’m not sure I’d read too much into how often someone changes (non-contract) jobs or whether they’re seeing progression or not.

Disclaimer: I haven’t really done much hiring (I helped look at resumes once…) nor managed people. I also fall into the bucket of 3-5 years per job with progression, so this isn’t me bitching about being shorted personally. 🙂

and this is why policies and computer restrictions exist

Just filing this story away as an example of why policies and computer restrictions are in place. Local admin rights, checking personal email at work,* local malware prevention, etc.

He allegedly sent the spyware to the woman’s Yahoo e-mail address, hoping that it would give him a way to monitor what she was doing on her PC. But instead, she opened the spyware on a computer in the hospital’s pediatric cardiac surgery department, creating a regulatory nightmare for the hospital.

* This is getting stupidly hard, really. But everyone should still stop the big names, and then manual analysis of logs should pick up on regular use of smaller mail providers, which can then be added to a blocklist. Sadly, this means staff-hours in a time when every company wants automated appliances to secure the world with little input.
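A rough sketch of what that log analysis might automate: tally proxy-log hits on hosts that look like mail providers but aren’t already on the big-name blocklist, and surface the frequently used ones as blocklist candidates. The log format, hostnames, and threshold here are all hypothetical; adapt to whatever your proxy actually emits:

```python
import re
from collections import Counter

# Big names already blocked (hypothetical examples)
BLOCKED = {"mail.yahoo.com", "mail.google.com", "hotmail.com"}

def candidate_mail_domains(log_lines, min_hits=10):
    """Return hosts with 'mail' in the name, not already blocked,
    that show up at least min_hits times in the proxy logs."""
    hits = Counter()
    for line in log_lines:
        m = re.search(r"https?://([^/\s:]+)", line)
        if not m:
            continue
        host = m.group(1).lower()
        # repeated use of a smaller provider -> blocklist candidate
        if "mail" in host and host not in BLOCKED:
            hits[host] += 1
    return sorted(h for h, n in hits.items() if n >= min_hits)
```

A human still has to eyeball the candidates before blocking (plenty of legit hosts have “mail” in them), which is exactly the staff-hours problem.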

science and best practices

Before dissing “best practices” in general, keep in mind that following many “best practices” will save you time and effort discovering for yourself what others already know. Basically, “standing on the shoulders of giants…”

I think many people get mad at “best practices” because they’re not universal and absolute. They won’t work in all cases (maybe they just won’t work in yours!), and they won’t result in absolute security (what does?).

As paranoid security geeks, we should question and strive to understand what is going on, but don’t just rage against “best practices” because it’s chic.

white papers evaluating ips offerings

Joel Snyder over on Opus1 has a couple of white papers posted about evaluating IPS solutions. Granted, these are dated 11/2007, but they read well enough to still stand valid. The first paper lists 6 steps to selecting the right IPS (pdf). The second paper lists 7 key requirements for IPS vendors (pdf).

I don’t have much to add to the first paper as it is pretty complete. The second paper has a few things I’d mention.

1. I still prefer calling an IDS/IPS just an IDS. Unless it is specifically configured (and you have the confidence in the device) to actually prevent attacks, it works as an IDS instead. And this is good, so no managers start thinking all attacks are being prevented when 90% of the IPS device is working as an IDS device. It’s an expectations thing.

2. In the performance item (#1), I’d just briefly mention along with failopen capabilities, that the device should do so as seamlessly as possible, especially during an upgrade of the device/software. I don’t like patches/upgrades being disincentivized by downtime and off-hours work. That just leads to admins dragging ass. Same with power cycling the device if it isn’t very stable…

3. Item #2 in this paper should be read along with item #2 in the first paper; both deal with what sort of detection the IPS will be doing (rate, signature, anomaly, behavior…). Keep in mind that an IPS offering that does all of them often ends up doing each of them somewhat watered down. If you already have netflow analysis efforts, you might value that the least.

4. Item #7 asks for some limited firewall capabilities. While noble to include, I don’t want to confuse network gurus into thinking they should be mucking heavily in these ACLs and IPS rules just because this is the closest device to the source traffic. An IDS/IPS shouldn’t be heavily leaned on for such duties, and thus arguably shouldn’t even begin to be leaned on.

5. I’d add item #8 to the mix and say that enterprise IPS should give the operators the ability to be informed and capture enough evidence in an alert to make an informed decision. No data = fail. 1 packet = fail. And so on. This should be part of the evaluation of the IPS and not something you take as truth just because a sales guy says so.

6. Additionally, the alerts an IPS gives should not only be clear and precise on the problem, but signatures should be viewable by analysts to compare why something was triggered. Bonus points if you have capability to craft new signatures, either fully new or using an existing one as a template.