just a resume update blurb

Finally got my resume (pdf) updated with my CISSP status. The thing I hate most about resumes? Nope, it’s not the description of accomplishments. It’s the damned list of technical skills and programs I know. I try to use that as the part I tailor to job descriptions and the tools they list. Otherwise, a good IT tech simply has an aptitude to pick up and learn anything he or she doesn’t already know. I didn’t list Python? I’m confident I could pick it up quickly. But I feel a little strange listing anything I haven’t used professionally or extensively on my own, like Python. It’s silly how even old, stupid tools stay listed there for years…

social network security and user education

It’s not often that I think user education is a solution (or even close to one); I consider it just a small sliver of a company’s security posture. But one situation where it comes close is policing social networking for an organization.

I’m listening to Exotic Liability 37, and Ryan and Chris are having a great discussion on what organizations should do about social networking. I agree that companies need to have policies on social networking, but I’m sympathetic to the feeling that an organization shouldn’t be reading every post that every employee makes on their personal time, or that employees should have to disclose their social networking identities. That seems like a huge effort for very little gain, especially since most people never post anything to do with the organization.

I agree that anything about the company should be addressed, and anything where someone may be misconstrued as speaking for the organization should be curbed. That should be done by policy and user education. As should any unauthorized use on business time when explicitly prohibited. Or maybe something like an exec commenting on a visit to Smegma, Florida, when someone knows a potential acquisition target is HQed there, which could divulge big information.

But I’m not sure I can say employers should be inventorying your identities online and examining every post you make. Considering how much crap is posted to so many places compared to how much would actually damage a company, it seems like a waste of resources to watch it.

I liked when the EL guys briefly touched on the idea of following developers. That is one place where you really could get some information, for instance code snippets posted to a help forum. The problems here, though, are similar. How many thousands of such sites exist? And how often would those snippets and tidbits actually be useful?

I guess it all depends on the company and what their interest is in protecting information. Defense contractors, game companies, and Apple would be far different than a small business in Wichita that only serves local customers. I think a policy is necessary, user education is necessary (tailored to the level of employee), and some measure of monitoring for references to your company may be necessary. But I’m not sure monitoring individuals will offer good return for most cases.

on perimeters, clouds, database outsourcing, security

I’ve long been somewhat anti-anti-perimeter. I understand why groups of professionals say there is no more perimeter, and while I agree with most of their observations, I don’t buy their conclusion that the perimeter is dead. I still feel there is a perimeter and there will continue to be one. It’s just not as hard and physical a stop as it used to be.

But, finally, the first real crack in the perimeter issue (at least to me) is coming from “cloud” services, like this Amazon RDS service. Basically, you want a database hosted by someone else? There you go, a MySQL instance at a third party. Really, this is outsourcing your IT infrastructure piece by piece. It’s like hooking me up to an external plasma or blood machine and making it a critical part of my circulatory system. You’ll make me extremely nervous every time you get close to those delicate tubes and the power switch next to my bed! At least when it was inside my body, I certainly knew when something was wrong.

While I think this is a horrible approach for security (in light of our ever-increasing sensitivity to data flow, transmission, and storage), I do recognize that it continues the destruction of “the perimeter.” Pretty soon we’ll have these golems out on the web where the web front-end is hosted at X, the database is hosted at Y, with API calls to A-thru-M, and built with no security in mind. The silver lining? It continues the push for encryption once you’re outside the traditional perimeter. Is this bad? Who knows, maybe this will evolve into something awesome, but for now my initial feelings are quite cynical (if I were a web developer, I’d probably think the opposite!).
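
To make that encryption point concrete, here’s a minimal sketch of talking to an off-premises database over an encrypted channel. It assumes the third-party PyMySQL driver, and the hostname, credentials, and CA bundle path are all hypothetical placeholders, not anything from Amazon’s docs:

```python
import pymysql  # third-party driver: pip install pymysql

# Hypothetical hosted MySQL endpoint; every identifier here is illustrative.
conn = pymysql.connect(
    host="mydb.example-host.com",
    user="app_user",
    password="not-in-source-control",
    database="orders",
    # TLS on the wire, verified against a CA bundle, because this traffic
    # now crosses networks you don't own.
    ssl={"ca": "/etc/pki/db-ca-bundle.pem"},
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())
finally:
    conn.close()
```

The driver isn’t the point; the point is that once the database lives outside your walls, that encrypted channel is doing the job the perimeter used to do.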

And like most outsourcing endeavors, I really think this will be a cool, trendy cost-saver in the very short term, but all the issues that come with “cloud” and outsourcing and trying to make a customized service into a one-size-fits-all product (study business strategy and economics to see why I make such a fuss about those two categories of product) are going to challenge this deeply beyond the next 12 months. At least with offering something narrow like a “database instance” you could maybe get away with calling this less a customized service and more a standard product. It’s definitely much better than saying something vague like, “we’ll massage data if you send it to us”. But still, it’s a very narrow piece that must rely on something else, and it is the stringing of those sorts of connections across untrusted networks that is sketchy.

interesting read on evolving security

Also via Chuvakin, I skimmed an article by Josh Corman on evolving security. Perusing the comments, I see good points about the vagueness on what we’re supposed to be evolving into.

This reminds me of a few years back when someone threw down a great essay on why security sucks, with the promise of a follow-up so they didn’t sound like someone just complaining. That follow-up never truly came. (Fine, it came, but it just opined about other people’s responses; a half-assed fulfillment at best.) (I’m having a problem finding it or remembering enough specifics to search for it, but I will find it!) Update: It was Noam Eppel’s essay on the total failure of information security [now defunct], which I posted about years back.

One thing I’ve slowly learned (and am still learning) through my business/work experience is that you don’t often want to just rage without a plan of action. Not unless you’re aware that you’re just venting, in which case it’s ok. Otherwise the first question from anyone who helps determine your future is, “What do you suggest?” That pivotal, important question…like a knight challenging your queen on your own side of the board…is the beginning of your endgame if you don’t have an answer for it.

Especially in security, we need to step back and ask ourselves why we think security needs to evolve. Is it because we’re still insecure? If so, then you’ll rage forever because there’s no “win.” Unless we want to define “win,” which…yeah…that’s a good start. I feel this is an industry that can only define itself after the fact, rather than define some novel approach that is “oh my god” glorious and impactful. We’ll define our security methods and standards only after we try them out and see if they worked, or in what measure they worked. This is why I see ‘security’ as more a science than a business discipline… *…now where’d I put my crack rock…*

it’s official: i hate the term “cloud”

This is too good not to repost. Via Chuvakin, I got linked over to an article on CSOOnline: 5 Mistakes a Security Vendor Made in the Cloud. I think this is a kick-ass article for three reasons. First, these are many of the same points I’ve been making since I first heard the term “cloud” a year ago. Second, no shit these are problems. These are problems in traditional software (from notepad apps to OSes), and cloud will not fix them. Not without incurring tons of cost and stealing away the efficiencies that cloud exists to take advantage of. The “cloud” still has an identity crisis, not just with itself but in how it has been marketed and defined by everyone else: it doesn’t know whether it is a service (customized) or a commodity (one size fits all). Customers think they want commodity (Salesforce!) and vendors want to sell commodity. But businesses don’t work well with commodity IT solutions and tend to drift over into customized stuff, which (real) cloud vendors can’t offer without simply becoming another word for outsourcing your IT/development.

The third reason this is a kick-ass article: it illustrates the bastardization of the term “cloud,” because the example is not what I call “cloud.” The examples given in each mistake do not sound like a “cloud” solution but rather a centrally managed software app. Nothing more. I would call that a case of marketing being stupid. You could drop the name Microsoft (Windows) or Symantec (AV) into each mistake and it’d fit. Those aren’t cloud.

Anyway, here are the 5 mistakes.

MISTAKE 1: Updating the SaaS product without telling customers or letting them opt out – Customers should be notified, but even traditional software vendors are often unclear about updates. And even if you notify customers, far too many won’t give a shit until it breaks something. Letting customers opt out is a recipe for disaster. Part of the beauty and draw of “cloud” is that you can make robust, agile solutions that fit a wide swath of your customers. But if you allow customers to opt out, you’ve just created lots of little exceptions and splinters, all of which will end up being maintained specially or being called “legacy.” Traditional IT and software knows this well.

MISTAKE 2: Not offering a rollback to the last prior version – Same problem applies here, too. The ideal goal should be to never have exceptions. But I believe “cloud” just can’t do that in every solution. Salesforce can do it. “Cloud” computing for business intelligence cannot (imo, it’s too customized). That or we’re too muddled on what “cloud” means…

MISTAKE 3: Not offering customers a choice to select timing of an upgrade – Sort of defeats the purpose of “cloud” and either gets us back to traditional software or a managed services provider. Neither of which I consider “cloud.”

MISTAKE 4: New versions ignore prior configurations or settings, which creates instability in the customer environment – This is one reason why products bloat. The larger they get and more Voltron-like they are (especially through acquisitions by larger giants) the more they bloat and look like ass, because you can’t take things away. At any rate, this sounds like a software upgrade process problem, not a “cloud” issue.

MISTAKE 5: Not offering a safety valve – Why would “cloud” do this?

user-supplied content sites help scammers

Comment spam continues to evolve. I think spammers are learning that the more general and succinct their comments are, the more likely they are to be mistaken for real comments. Sometimes the only tipoff I see is the link they leave in the link box.

But what if that link goes to a site you know, but to a page of user-supplied content? Like a twitter account just made by a bot, or linkedin account, or myspace page? Eventually you lose, either by being suckered or by swatting away what might have been a real post!
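
As a toy illustration of why that link is the weak signal, here’s a sketch of a heuristic that flags comment links pointing at user-supplied pages on otherwise reputable hosts. The host list is my own arbitrary pick for illustration, nothing authoritative:

```python
from urllib.parse import urlparse

# Hosts where any non-root path is user-supplied content, so the domain's
# good reputation says nothing about the page's author. (Illustrative list.)
UGC_HOSTS = {"twitter.com", "www.myspace.com", "www.linkedin.com"}

def link_needs_scrutiny(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.netloc.lower() in UGC_HOSTS and parsed.path.strip("/") != ""

print(link_needs_scrutiny("http://twitter.com/freshly_made_bot"))  # True
print(link_needs_scrutiny("http://twitter.com/"))                  # False
```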

ford engineer takes data with him to new job

If someone important tenders their resignation tomorrow, would you be able to see if over the last week he has been siphoning off confidential information from your network to use at his next job? Do you ever give exit reports on what information that person had access to while with the company, even if you couldn’t tell what he did or did not copy? I’d consider these important, but fairly advanced questions for a security team to ask.

A former Ford Motor engineer has been indicted for allegedly stealing thousands of sensitive documents from the company and copying them onto a USB drive before taking a job with another auto company.

This happens. It happens a lot, and has always happened. Technology has just made it easier, larger in scale, and trackable (even done remotely over VPN!). This is one of those dirty little secrets of sales force hiring and even some executive job-hopping (“What can you bring with you to us?” is an oft-unspoken question).

catching up on choicepoint and paychoice breaches

Just a pointer over to a CNET article talking about recent ChoicePoint and PayChoice breaches and the activity swarming around them.

In April 2008, ChoicePoint turned off a key electronic security tool that it used to monitor access to one of its databases and failed to notice the problem for four months…

I think it is misleading (for the FTC) to say it took 4 months to discover that a key security tool was disabled. Who knows how long it would have been disabled had an investigation not taken place.

It might seem like these companies are Doing It Wrong. But I suspect they’re no different than most of their peers. They’re just the ones caught with their pants down and are now subject to extra scrutiny. This is good, but I wouldn’t outright say these two specifically suck more than others.

The FTC alleged that ChoicePoint’s conduct violated a 2006 court order requiring the company to institute a comprehensive information security program following…

This is pretty interesting. Would this mean that once you suffer a data breach, you’re forever needing to be perfect? This is like being on the sex offender list; once you’re on it, you’re basically a prisoner of sorts for life. This could have subtle implications for long-term costs of a major breach.

that wal-mart breach you barely heard about (2006)

If it weren’t for the blogs I follow, I’d miss tidbits of news as the weeks roll past. Like this update to an “old” Wal-Mart breach that occurred back in 2006. (This is what I remind myself when I repost rehashed things…just in case I want the links later on or someone who reads mine didn’t see it elsewhere.)

I’m pulling out nuggets that struck a chord with me. Yes, I’m cynical!

Wal-Mart uncovered the breach in November 2006, after a fortuitous server crash led administrators to a password-cracking tool that had been surreptitiously installed on one of its servers. Wal-Mart’s initial probe traced the intrusion to a compromised VPN account…

First, I’m not surprised that the breach was discovered by accidental (or 3rd-party) means. This probably happens 90%+ of the time (my own figure, and I think I’m lowballing it!). Second, it is quite well known that VPN connections are an issue. I don’t want to take the time to look it up, but I distinctly recall reading in numerous places that remote employees have a tendency to feel more brazen about stealing information, and, as in the case of Wal-Mart, run on less secure systems with less secure practices and yet connect directly into sensitive corporate networks. Basically, VPN (remote) access is not to be taken lightly. If someone can subvert that one single login, your entire organization could start falling down. (Think how bad it would be if an IT admin logged into the VPN from his home machine, which was infected with a keylogger. Hello admin login!)

Wal-Mart says it was in the process of dramatically improving the security of its transaction data…

“Wal-Mart … really made every effort to…

Security doesn’t give a shit about talk. You’re either doing it or you’re not. That’s why verifying that the talk actually got done is what’s driving the industry. It also illustrates a huge problem (one that affects more than just security) when management has a reality/belief gap between what they think is going on and what is really going on.

Strickland says the company took the [PCI-driven] report to heart and “put a massive amount of energy and expertise” into addressing the risks to customer data, and became certified as PCI-compliant in August 2006 by VeriSign.

I’m not about to wave around that a PCI-compliant firm had a data breach. In this case, no PCI-related data was actually divulged. But…this breach could have led down the road of revealing POS source code, flows, and infrastructure such that those defenses could have been broken. Basically: chasing PCI compliance is not the same as chasing proper security for your organization. It’s a small slice and sample of what you should have in mind when you think corporate security. For instance, many orgs spend a lot of resources to limit the PCI compliance scope rather than tackle the security of the things they argue out of scope. Reminds me of shoving my toys under my bed and calling my room clean. Out of sight, right?

I think this also underscores the absolute need for organizations of sufficient size to have a dedicated security team with high influence over all of IT. It’s not just about detection mechanisms and watching dashboards, especially if the network/server teams place them in bad positions or don’t feed them proper flows. You can’t just watch; you have to poke and probe and continuously test your own systems and architecture for holes. And not just with an annual pen-testing team, but with people who have vested interests in and deep knowledge of the organization and its innards. You can’t be finding out that your IDS, firewalls, logs, and patching efforts are “inconsistent” only after a real breach. If you need to, role-play security incidents just like the business demands role-playing disaster recovery plans.

the continued rise of fuzzing

Securosis pointed me over to a really cool post by Michael Howard as he discusses SDL and the SMBv2 bug that was patched this month.

The takeaway I get is that you can really only do so much with code scanning, code analysis, and even code reviews. There will still be bugs like this that make their way through. Automated analysis just can’t find things like this, and humans make mistakes when reviewing things. (I suppose even code variables could carry metadata marking them as “untrusted inputs” and thus be highlighted for more scrutiny? It’s like writing code to vet code…which is just odd to me since I’m not into comp sci…but maybe that’s what he’s talking about with their “analysis tools.”)
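
To make that parenthetical concrete, here’s a toy sketch of the “metadata on untrusted inputs” idea: wrap user-supplied data in a marker type and have sensitive sinks refuse it. Real taint-tracking tools are far more involved; this is only the shape of the idea, and all the names are invented:

```python
class Tainted(str):
    """A string subclass marking data as untrusted input."""

def sql_sink(fragment: str) -> str:
    # A sensitive sink refuses anything still carrying the taint marker.
    if isinstance(fragment, Tainted):
        raise ValueError("untrusted input reached a SQL sink unsanitized")
    return fragment

print(sql_sink("SELECT 1"))                       # trusted literal passes
try:
    sql_sink(Tainted("'; DROP TABLE users; --"))  # taint caught at the sink
except ValueError as exc:
    print(exc)
```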

The only current way to find a bug like this is fuzzing.

But that should bring up the point of how much is enough fuzz testing? For instance, you won’t know if there *is* a problem in some code, so how long and deep should you fuzz? How do you prove it is secure? At some point, you really just have to release code and hope that what is essentially real-world fuzzing by millions of people will eventually reveal any missed issues, at which point your response teams can patch it promptly. Although, hopefully you’ve done enough fuzzing to match just how critical your software is to others (Windows? Pretty critical!).
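
For the curious, the core loop of dumb mutation fuzzing is almost embarrassingly small. Here’s a minimal sketch assuming a hypothetical target() parser function; real fuzzers layer coverage feedback, corpus management, and crash triage on top of this:

```python
import random

def mutate(seed: bytes, flips: int = 8) -> bytes:
    # Randomly clobber a handful of bytes in a known-valid input.
    buf = bytearray(seed)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 100_000):
    crashes = []
    for i in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)           # feed the mangled input to the parser
        except Exception as exc:     # any unhandled exception = candidate bug
            crashes.append((i, sample, exc))
    return crashes

# usage sketch: fuzz(my_packet_parser, open("valid_packet.bin", "rb").read())
```

And that loop is exactly why “how much is enough?” has no clean answer: it can only ever show the presence of bugs, never their absence.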

Funny, that sounds a lot like the mantra, “security eventually fails, so make sure your detection and response is tight.” I’m glad we already look past raw numbers of security bugs, and focus in on how quickly they’re fixed by vendors, and how transparent/honest their process may be. Microsoft has really come a long way down this road.

a moment of industry pessimism

I’m getting passionately convinced that the “big security firms” that make these “big security suites” for home and business users have absolutely no clue what they’re doing anymore. Too big, too dumb.

I’m sure they have great engineers in place, but between the business itself and the messed up marketing, these firms and their products are beyond broken. It sucks to be held captive by them, though, since they (sort of) provide tools that form the foundation of a security posture (endpoint tools, mostly).

In short, STOP TRYING TO DO SO MUCH THAT YOU SUCK AT DOING ANY OF IT!

waiting for patches to release to wsus…

Patching. Every pen-tester and auditor will point it out and every security geek pretty much *facepalms* when you admit you haven’t patched since last week. But patching is half art and half time commitment. The reality of patching is that it is not quite as easy as we always make it sound, but that doesn’t make it any less necessary as a cornerstone to digital security.

Say you have a Windows environment with more than 100 systems. In other words, sneakernet just doesn’t work anymore, and a good portion of these systems are servers whose reboot/install times you need to stagger during a maintenance window. Basically, you qualify for WSUS!

The easy part of patching with WSUS is getting a spare server with enough storage set up, WSUS installed, and all the patches downloaded that you want to manage (start small because the storage needed adds up quick!).

The next part is figuring out your WSUS groupings and your Group Policy Objects. If you don’t do much to manage the structure of your Machines OU, you might want to start here. Time spent on planning here will save time later on in reworking things you didn’t anticipate. Using Group Policy will help ensure that you don’t have to chase every new system and herd them into WSUS. Joining the domain should take care of it!
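
As a sanity check that Group Policy actually landed, the WSUS client settings end up under a well-known registry policy key on each machine. Here’s a small sketch using Python’s stdlib winreg (Windows-only) to dump them; the value names are the standard WSUS policy entries, though whether you use client-side targeting (TargetGroup) at all is a design choice:

```python
import winreg  # Windows-only standard library module

WU_POLICY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

def read_policy(subkey: str, name: str):
    # Returns None if Group Policy hasn't written the value (yet).
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            value, _type = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None

# Standard values pushed by the WSUS GPO settings
for name in ("WUServer", "WUStatusServer", "TargetGroup", "TargetGroupEnabled"):
    print(f"{name} = {read_policy(WU_POLICY, name)}")
print("UseWUServer =", read_policy(WU_POLICY + r"\AU", "UseWUServer"))
```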

If you’re trying to massage a new WSUS implementation in an already-built Group Policy arrangement, you’ll probably have a lot of hair-pulling or catastrophic mistakes as you try to move inheritance around, policies around, and break things out properly. It’s really not all that fun early on.

Once you start getting systems populated, you can then start looking at your deficiencies in WSUS. More than likely you will end up approving everything, but that is still a boring time sink. This might also expose a few issues. First, all the systems you’ve neglected for months or years. Second, whether you want to approve patches for every system. I’d suggest approving for every system. That way if you create a new WSUS group later on, inheritance will still apply everything you’ve done previously. If you want to split, say, servers and workstations, I’d suggest getting a separate WSUS instance/box rather than compromise the inheritance stance. It really will pay off someday when you find surprise machines in your environment that have thankfully been patched because you approve everything for everything.

In the process, you’ll learn how to view the reports in the WSUS management console. This is tricky, so play with the filters extensively. It sucks to get a nice warm fuzzy feeling as you get caught up only to realize you hadn’t even begun to look at what systems had errors or have a backlog of updates from years ago. Don’t just look at new patches!

Eventually, you’ll get caught up!

Then on patch day you have to figure out which systems you want to approve patches for as a test before you slam them out to all the other systems. And you need some method to validate the testing. This is harder than it sounds, because you need systems that get used but are not so important that you’ll jeopardize the business if they screw up. You also need to manage their WSUS membership (and thus their GP objects and OU assignments) to accommodate their status as test boxes. Basically, good luck with that!

Then after some testing time, you can roll out the patches everywhere. Of course, this probably gets preceded by a wide announcement of patching, rebooting, and possible downtime in your maintenance window or overnight, and all the dumb questions that come back from it.

After all of that is done, you get the fun task of going back into WSUS to see which systems failed to do their installs. Then troubleshoot those installs, announce a second round of downtime, and get things up to speed.

In addition, you’ll probably have systems that no one likes to reboot, so they just accumulate months of patches, such as twitchy domain controllers, old systems that are more brittle than leaves in autumn, and database servers. Everyone loves a sudden database server reboot!

Whew, done, right? Nope! Now you have to have a process to validate that patches are installed on all systems that you manage. While WSUS does include reporting, it might be necessary to get some checking done out-of-band from your patching. Enter: vulnerability scanners!

This is a beast in itself, as you need to be careful initially with how much you let the scanner beat on your systems. You might just end up doing Windows patch scans, which is an OK baseline if that’s all you can do. Of course, you get the pain of (see the sketch after this list):
– getting systems into the scanner target list (too often this is manual!)
– getting dead systems out of the scanner target list
– parsing through the reports for both of the above
– parsing through the reports for the missing patches or alerts
– reconciling all the issues
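
As a trivial example of out-of-band validation (far cruder than a real scanner), here’s a sketch that compares the hotfixes a Windows box reports against a required baseline. The KB numbers are placeholders, and wmic qfe only sees OS-level hotfixes, which is exactly the kind of blind spot a real vulnerability scanner exists to cover:

```python
import subprocess

# Placeholder KB numbers; substitute whatever your baseline requires.
REQUIRED_KBS = {"KB958644", "KB975517"}

def installed_hotfixes() -> set:
    # "wmic qfe get HotFixID" lists the hotfix IDs the OS knows about.
    out = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in out.splitlines()
            if line.strip().startswith("KB")}

missing = REQUIRED_KBS - installed_hotfixes()
print("Missing patches:", ", ".join(sorted(missing)) or "none")
```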

The bottom line is patching is a necessary foundation to security. If you don’t have a patch management process for your OS of choice, you can’t have a good security stance. And too often the people who flippantly say patching is easy don’t know anything about enterprise patching and think it’s all about Automatic Updates and clicking the yellow icon every month before they go to bed. Proper patching is a time commitment that needs to be made by your IT or security staff, and it takes longer than you probably expect. Oh, and we’ve not even touched on non-Windows patching!

phishing? some people still just don’t get it

This article got me thinking. In it, the current FBI Director says he no longer banks online after nearly being fooled by a phishing email. (Yeah, my first reaction was that he shouldn’t even be looking at emails like this, let alone almost falling for one…and the appropriate response is not to stop banking online but to stop reading those emails and clicking links in them. And by the way, if you say banking online is safe, but you don’t do it, and you’re an influential person…you’re confused and confusing. But hey, I’m glad it’s 2009 and our FBI Director experienced a “teaching moment” with the old issue of phishing emails…)

So, you can still bank online if you strictly follow some guidelines, none of which ever require you to even look twice at the phishing (and legit!) email that may or may not come from your bank. Why is this? Because all of that email is just a bonus for doing your business online. You don’t *need* to read those emails. Ever.

At least…not yet.

Sadly, I think as more and more services go online (like the Twitter-enabled bank from the other week), I feel like someday we’ll look around and realize all these horribly insecure methods of communication will be not just relied upon, but the *only* ways to interact with things like your bank, short of driving to it and speaking to someone in person. It’ll happen someday (maybe not for decades yet), and to see it happen with our current set of technologies is a bit scary.

security consultant #8 best job in america

Usually when I read lists of the “best jobs” or “most rewarding jobs” I tend to look for engineer or general IT jobs. For the first time, I actually see a list over on CNN include Computer/Network Security Consultant as the #8 best job in America. I think this is saying something in terms of compliance and security awareness!

I don’t fully agree with the CNN statement that, “If a system is infiltrated by a virus or hacker, it could mean lights out for the security consultant’s career.” I think it’s correct that it could mean you probably will be looking for a new job. But I don’t think it’s entirely accurate that, “This is a job you can’t afford to ever fail in” [says an interviewee for the story]. Our best teacher is failure and failure is inherent in security. “Failure” as defined when a hacker gets in is not the end of the line. The rest depends on detection, response, mitigation, improvement, and honesty. But I do understand business tends to be all or nothing, especially as you get into the public orgs.

On the flip side, I love the first mention under pre-reqs: major geekdom. I fully agree with that. What sets good CISSPs apart from horrible CISSPs? In a nutshell, the geekdom more often than not, and all the other little things that tend to come with most geek/hacker mindsets.