it’s official: i hate the term “cloud”

This is too good not to repost. Via Chuvakin, I got linked over to an article on CSOOnline: 5 Mistakes a Security Vendor Made in the Cloud. I think this is a kick-ass article for three reasons. First, these are many of the same points I’ve been making since I first heard the term “cloud” a year ago. Second, no shit these are problems. These are problems in traditional software too (from notepad apps to OSes). Cloud will not fix them, not without incurring tons of cost and stealing away the efficiencies that cloud exists to take advantage of. The “cloud” still has an identity crisis, not just with itself but in how it has been marketed and defined by everyone else: it doesn’t know whether it is a service (customized) or a commodity (one size fits all). Customers think they want commodity (Salesforce!) and vendors want to sell commodity. But business doesn’t work well with commodity IT solutions and tends to drift over into customized stuff, which (real) cloud vendors really can’t offer without “cloud” simply becoming another word for outsourcing your IT/development.

The third reason this is a kick-ass article: it illustrates the bastardization of the term “cloud,” because the example is not what I call “cloud.” The examples given in each mistake do not sound like a “cloud” solution but rather a centrally managed software app. Nothing more. I would call that a case of marketing being stupid. You could drop Microsoft (Windows) or Symantec (AV) into each mistake and it’d fit. Those aren’t cloud.

Anyway, here are the 5 mistakes.

MISTAKE 1: Updating the SaaS product without telling customers or letting them opt out – Notifying customers should be done, but even traditional software vendors are often not clear about updates. And even if you do notify customers, far too many won’t give a shit until something breaks. Letting customers opt out is a recipe for disaster. Part of the beauty and draw of “cloud” is that you can make robust, agile solutions that fit a wide swath of your customers. But if you allow customers to opt out, you’ve just created lots of little exceptions and splinters, all of which will end up being maintained specially or being called “legacy.” Traditional IT and software knows this well.

MISTAKE 2: Not offering a rollback to the last prior version – Same problem applies here, too. The ideal goal should be to never have exceptions. But I believe “cloud” just can’t do that in every solution. Salesforce can do it. “Cloud” computing for business intelligence cannot (imo, it’s too customized). That or we’re too muddled on what “cloud” means…

MISTAKE 3: Not offering customers a choice to select timing of an upgrade – Sort of defeats the purpose of “cloud” and either gets us back to traditional software or a managed services provider. Neither of which I consider “cloud.”

MISTAKE 4: New versions ignore prior configurations or settings, which creates instability in the customer environment – This is one reason why products bloat. The larger they get and the more Voltron-like they become (especially through acquisitions by larger giants), the more they bloat and look like ass, because you can’t take things away. At any rate, this sounds like a software upgrade process problem, not a “cloud” issue.

MISTAKE 5: Not offering a safety valve – Why would “cloud” do this?

user-supplied content sites help scammers

Comment spam continues to evolve. I think spammers are learning that the more general and succinct their comments are, the more easily they can be mistaken for real comments. Sometimes the only tipoff is the link they leave in the link box.

But what if that link goes to a site you know, but to a page of user-supplied content? Like a twitter account just made by a bot, or a linkedin account, or a myspace page? Eventually you lose, either by being suckered or by swatting away what might have been a real post!

ford engineer takes data with him to new job

If someone important tenders their resignation tomorrow, would you be able to tell whether, over the last week, he has been siphoning off confidential information from your network to use at his next job? Do you ever give exit reports on what information that person had access to while with the company, even if you can’t tell what he did or did not copy? I’d consider these important, but fairly advanced, questions for a security team to ask.

A former Ford Motor engineer has been indicted for allegedly stealing thousands of sensitive documents from the company and copying them onto a USB drive before taking a job with another auto company.

This happens. It happens a lot, and it has always happened. Technology has just made it easier, larger in scale, and trackable (even when done remotely over VPN!). This is one of those dirty little secrets of sales force hiring and even some executive job-hopping (“What can you bring with you to us?” is an oft-unspoken question).

catching up on choicepoint and paychoice breaches

Just a pointer over to a cnet article talking about recent ChoicePoint and PayChoice breaches and the activity swarming around them.

In April 2008, ChoicePoint turned off a key electronic security tool that it used to monitor access to one of its databases and failed to notice the problem for four months…

I think it is misleading (for the FTC) to say it took 4 months to discover that a key security tool was disabled. Who knows how long it would have been disabled had an investigation not taken place.

It might seem like these companies are Doing It Wrong. But I suspect they’re no different than most of their peers. They’re just the ones caught with their pants down and are now subject to extra scrutiny. This is good, but I wouldn’t outright say these two specifically suck more than others.

The FTC alleged that ChoicePoint’s conduct violated a 2006 court order requiring the company to institute a comprehensive information security program following…

This is pretty interesting. Would this mean that once you suffer a data breach, you’re forever needing to be perfect? This is like being on the sex offender list; once you’re on it, you’re basically a prisoner of sorts for life. This could have subtle implications for long-term costs of a major breach.

that wal-mart breach you barely heard about (2006)

If it weren’t for the blogs I follow, I’d miss tidbits of news as the weeks roll past. Like this update to an “old” Wal-Mart breach that occurred back in 2006. (This is what I remind myself when I repost rehashed things…just in case I want the links later on or someone who reads mine didn’t see it elsewhere.)
I’m pulling out nuggets that struck a chord with me. Yes, I’m cynical!

Wal-Mart uncovered the breach in November 2006, after a fortuitous server crash led administrators to a password-cracking tool that had been surreptitiously installed on one of its servers. Wal-Mart’s initial probe traced the intrusion to a compromised VPN account…

First, I’m not surprised that the breach was discovered by accidental (or third-party) means. That probably happens 90%+ of the time (my own figure, and I think I’m lowballing it!). Second, it is quite well known that VPN connections are an issue. I don’t want to take the time to look it up, but I distinctly recall reading in numerous places that remote employees tend to feel more brazen about stealing information and, as in the case of Wal-Mart, run on less secure systems with less secure practices yet connect directly into sensitive corporate networks. Basically, VPN (remote) access is not to be taken lightly. If someone can subvert that one single login, your entire organization could start falling down. (Think how bad it would be if an IT admin logged into the VPN from his home machine, which happened to be infected with a keylogger. Hello admin login!)

Wal-Mart says it was in the process of dramatically improving the security of its transaction data…

“Wal-Mart … really made every effort to…

Security doesn’t give a shit about talk. You’re either doing it or you’re not. That’s why verifying that the talk actually gets done is what’s driving the industry. It also illustrates a huge problem (one that affects more than just security) when management has a reality/belief gap between what they think is going on and what is really going on.

Strickland says the company took the [PCI-driven] report to heart and “put a massive amount of energy and expertise” into addressing the risks to customer data, and became certified as PCI-compliant in August 2006 by VeriSign.

I’m not about to wave around the fact that a PCI-compliant firm had a data breach. In this case, no PCI-related data was actually divulged. But…this breach could have led down the road of revealing POS source code, flows, and infrastructure, such that those defenses could have been broken. Basically: chasing PCI compliance is not the same as chasing proper security for your organization. It’s a small slice and sample of what you should have in mind when you think corporate security. For instance, many orgs spend a lot of resources limiting their PCI compliance scope rather than tackling the security of the things they end up arguing out of scope. Reminds me of shoving my toys under my bed and calling my room clean. Out of sight, right?

I think this also underscores the absolute need for organizations of sufficient size to have a dedicated security team with high influence over all of IT. It’s not just about detection mechanisms and watching dashboards, especially if the network/server teams place them in bad positions or don’t feed them proper flows. You can’t just watch; you have to poke and probe and continuously test your own systems and architecture for holes. And not just with an annual pen-testing team, but with people who have vested interests in and deep knowledge of the organization and its innards. You can’t afford to find out that your IDS, firewalls, logs, and patching efforts are “inconsistent” only after a real breach. If you need to, role-play security incidents just like the business demands role-playing disaster recovery plans.

the continued rise of fuzzing

Securosis pointed me over to a really cool post by Michael Howard as he discusses SDL and the SMBv2 bug that was patched this month.

The takeaway I get is that you can only do so much with code scanning, code analysis, and even code reviews. There will still be bugs like this that make their way through. Automated analysis just can’t find things like this, and humans make mistakes when reviewing things. (I suppose even code variables could carry metadata marking them as “untrusted inputs” and thus be highlighted for more scrutiny? It’s like writing code to vet code…which is just odd to me since I’m not into comp sci…but maybe that’s what he’s talking about with their “analysis tools.”)
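
To make that half-formed idea a little more concrete, here’s a tiny Python sketch of what “tag it as untrusted and force someone to validate it” could look like. This is purely my own illustration of the concept; the names (Tainted, validate_length, etc.) are hypothetical and have nothing to do with what Microsoft’s actual analysis tools do internally.

```python
class Tainted(object):
    """Wraps a value that arrived from an untrusted source (network, file, user)."""
    def __init__(self, value):
        self.value = value

def read_length_from_packet(packet):
    # Anything parsed off the wire is untrusted until someone validates it.
    return Tainted(packet[0])

def copy_body(data, length):
    # The "scrutiny" part: refuse to use an attacker-controlled length
    # that nobody has explicitly range-checked and unwrapped.
    if isinstance(length, Tainted):
        raise ValueError("untrusted length used without validation")
    return data[:length]

def validate_length(length, maximum):
    if not 0 <= length.value <= maximum:
        raise ValueError("length out of range")
    return length.value  # unwrapped: considered trusted from here on

packet = bytes([16]) + b"A" * 64
n = read_length_from_packet(packet)

# copy_body(packet[1:], n)   # would raise: tainted length, never validated
print(copy_body(packet[1:], validate_length(n, maximum=64)))  # fine: range-checked first
```

The value of the tagging isn’t the runtime check so much as making the “this came from the network” fact impossible for a reviewer (or a tool) to miss.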

The only current way to find a bug like this is fuzzing.

But that brings up the question: how much fuzz testing is enough? You won’t know ahead of time whether there *is* a problem in some code, so how long and how deep should you fuzz? How do you prove it is secure? At some point, you really just have to release the code and hope that what is essentially real-world fuzzing by millions of people will eventually reveal any missed issues, at which point your response teams can patch them promptly. Hopefully, though, you’ve done enough fuzzing to match just how critical your software is to others (Windows? Pretty critical!).
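
For anyone who hasn’t played with fuzzing, here’s the idea in its most bare-bones form: a dumb mutation fuzzer. This is just a sketch of mine, with parse_record standing in for whatever parser is actually under test; it flips random bytes in a known-good input and watches for crashes.

```python
import random

def parse_record(data):
    """Stand-in for the code under test: a toy parser with a hidden bug."""
    length = data[0]
    body = data[1:1 + length]
    if length and body[length - 1] == 0:  # blows up when the length byte lies about the body
        return "terminated"
    return body.decode("ascii", errors="replace")

def mutate(seed, flips=3):
    """Randomly overwrite a few bytes of a known-good input."""
    buf = bytearray(seed)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = bytes([8]) + b"AAAAAAA\x00"  # valid record: length byte, then a null-terminated body
for i in range(100000):
    sample = mutate(seed)
    try:
        parse_record(sample)
    except Exception as exc:  # any unhandled exception is a finding worth triaging
        print("iteration %d: %s on input %r" % (i, type(exc).__name__, sample))
        break
else:
    print("no crashes found, which proves nothing")
```

Real fuzz frameworks are far smarter about generating and prioritizing inputs than this, but the economics question above doesn’t change: you never get a proof, just diminishing odds.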

Funny, that sounds a lot like the mantra, “security eventually fails, so make sure your detection and response are tight.” I’m glad we already look past raw numbers of security bugs and focus instead on how quickly they’re fixed by vendors and how transparent/honest their process is. Microsoft has really come a long way down this road.

a moment of industry pessimism

I’m becoming passionately convinced that the “big security firms” that make these “big security suites” for home and business users have absolutely no clue what they’re doing anymore. Too big, too dumb.

I’m sure they have great engineers in place, but between the business itself and the messed up marketing, these firms and their products are beyond broken. It sucks to be held captive by them, though, since they (sort of) provide tools that form the foundation of a security posture (endpoint tools, mostly).

In short, STOP TRYING TO DO SO MUCH THAT YOU SUCK AT DOING ANY OF IT!

waiting for patches to release to wsus…

Patching. Every pen-tester and auditor will point it out and every security geek pretty much *facepalms* when you admit you haven’t patched since last week. But patching is half art and half time commitment. The reality of patching is that it is not quite as easy as we always make it sound, but that doesn’t make it any less necessary as a cornerstone to digital security.

Say you have a Windows environment with more than 100 systems. In other words, sneakernet just doesn’t work anymore, and a good portion of these systems are servers whose reboot/install times need to be staggered during a maintenance window. Basically, you qualify for WSUS!

The easy part of patching with WSUS is getting a spare server with enough storage set up, WSUS installed, and all the patches downloaded that you want to manage (start small because the storage needed adds up quick!).

The next part is figuring out your WSUS groupings and your Group Policy Objects. If you don’t do much to manage the structure of your Machines OU, you might want to start here. Time spent on planning here will save time later on reworking things you didn’t anticipate. Using Group Policy will help ensure that you don’t have to chase down every new system and herd it into WSUS; joining the domain should take care of it!
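
As a sanity check that a given box actually picked up the policy, the GPO-delivered WSUS settings land in well-known registry values under HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate. Here’s a small Python sketch (my own throwaway, run on the client using the standard library’s winreg; adjust to taste) that prints what a machine thinks its update server and target group are:

```python
import winreg  # standard library, Windows only

WU_POLICY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

def read_value(path, name):
    """Return a policy value from HKLM, or None if the key/value isn't set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _type = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

# The GPO-delivered WSUS settings: server URLs, optional client-side target
# group, and whether the Automatic Updates agent is told to use them.
print("WUServer:       %s" % read_value(WU_POLICY, "WUServer"))
print("WUStatusServer: %s" % read_value(WU_POLICY, "WUStatusServer"))
print("TargetGroup:    %s" % read_value(WU_POLICY, "TargetGroup"))
print("UseWUServer:    %s" % read_value(WU_POLICY + r"\AU", "UseWUServer"))
```

If those come back empty on a domain-joined machine, your GPO scoping or inheritance is the first place to look.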

If you’re trying to massage a new WSUS implementation into an already-built Group Policy arrangement, you’ll probably have a lot of hair-pulling or catastrophic mistakes as you move inheritance and policies around and break things out properly. It’s really not all that fun early on.

Once you start getting systems populated, you can then start looking at your deficiencies in WSUS. More than likely you will end up approving everything, but that is still a boring time sink. This might also expose a few issues: first, all the systems you’ve neglected for months or years; second, whether you want to approve patches for every system. I’d suggest approving for every system. That way, if you create a new WSUS group later on, inheritance will still apply everything you’ve done previously. If you want to split, say, servers and workstations, I’d suggest getting a separate WSUS instance/box rather than compromising the inheritance stance. It really will pay off someday when you find surprise machines in your environment that have thankfully been patched, because you approve everything for everything.

In the process, you’ll learn how to view the reports in the WSUS management console. This is tricky, so play with the filters extensively. It sucks to get a nice warm fuzzy feeling as you get caught up, only to realize you hadn’t even begun to look at which systems had errors or had a backlog of updates from years ago. Don’t just look at new patches!

Eventually, you’ll get caught up!

Then on patch day you have to figure out which systems you want to approve patches for as a test before you slam them out to all the other systems. And you need some method to validate the testing. This is harder than it sounds, because you need systems that get used but are not so important that you’ll jeopardize the business if they screw up. You also need to manage their WSUS membership (and thus their GP objects and OU assignments) to accommodate their status as test boxes. Basically, good luck with that!

Then after some testing time, you can roll out the patches everywhere. Of course, this probably gets preceded by a wide announcement of patching, rebooting, and possible downtime in your maintenance window or overnight, and all the dumb questions that come back from it.

After all of that is done, you get the fun task of going back into WSUS to see which systems failed to do their installs. Then troubleshoot those installs, announce a second round of downtime, and get things up to speed.

In addition, you’ll probably have systems that no one likes to reboot, so they just accumulate months of patches, such as twitchy domain controllers, old systems that are more brittle than leaves in autumn, and database servers. Everyone loves a sudden database server reboot!

Whew, done, right? Nope! Now you have to have a process to validate that patches are installed on all systems that you manage. While WSUS does include reporting, it might be necessary to get some checking done out-of-band from your patching. Enter: vulnerability scanners!

This is a beast in itself, as you need to be careful initially with how much you let the scanner beat on your systems. You might just end up doing Windows patch scans, which is an OK baseline if that’s all you can do. Of course, you get the pain of (there’s a sketch of the reconciliation step after this list):
– getting systems into the scanner target list (too often this is manual!)
– getting dead systems out of the scanner target list
– parsing through the reports for both of the above
– parsing through the reports for the missing patches or alerts
– reconciling all the issues
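
The reconciling is the part I always end up scripting. Here’s a rough Python sketch of the idea; the file names and the “hostname” column are made up for illustration, since every inventory and scanner exports something different. The point is just to diff the two lists and chase the gaps:

```python
import csv

def hosts_from_csv(path, column):
    # Pull a normalized set of hostnames out of one column of a CSV export.
    hosts = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            name = (row.get(column) or "").strip().lower()
            if name:
                hosts.add(name)
    return hosts

# Hypothetical exports: one from your asset inventory/AD dump, one from the scanner.
inventory = hosts_from_csv("asset_inventory.csv", "hostname")
scan_targets = hosts_from_csv("scanner_targets.csv", "hostname")

not_scanned = sorted(inventory - scan_targets)    # systems nobody added to the scanner
stale_targets = sorted(scan_targets - inventory)  # dead systems still being scanned

print("In inventory but never scanned:")
for host in not_scanned:
    print("  " + host)

print("In scanner but gone from inventory:")
for host in stale_targets:
    print("  " + host)
```

It isn’t fancy, but even this much keeps the target list from quietly rotting between patch cycles.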

The bottom line is that patching is a necessary foundation of security. If you don’t have a patch management process for your OS of choice, you can’t have a good security stance. And too often the people who flippantly say patching is easy don’t know anything about enterprise patching and think it’s all about Automatic Updates and clicking the yellow icon every month before they go to bed. Proper patching is a time commitment that needs to be made by your IT or security staff, and it takes longer than you probably expect. Oh, and we’ve not even touched on non-Windows patching!

phishing? some people still just don’t get it

This article got me thinking. In it, the current FBI Director says he no longer banks online after nearly being fooled by a phishing email. (Yeah, my first reaction was that he shouldn’t really even be looking at emails like this, let alone almost falling for one…and the appropriate response is not to stop banking online but to stop reading those emails and clicking links in them. And by the way, if you say banking online is safe, but you don’t do it, and you’re an influential person…you’re confused and confusing. But hey, I’m glad it’s 2009 and our FBI Director experienced a “teaching moment” on the old issue of phishing emails…)

So yes, you can still bank online if you strictly follow some guidelines, none of which ever require you to even look twice at all the phishing (and legit!) email that may or may not come from your bank. Why is this? Because all of that email is just a bonus for doing your business online. You don’t *need* to read it. Ever.

At least…not yet.

Sadly, as more and more services go online (like the Twitter-enabled bank from the other week), I feel like someday we’ll look around and realize these horribly insecure methods of communication are not just relied upon, but are the *only* ways to interact with things like your bank, short of driving there and speaking to someone in person. It’ll happen someday (maybe not for decades yet), and watching it happen with our current set of technologies is a bit scary.

security consultant #8 best job in america

Usually when I read lists of the “best jobs” or “most rewarding jobs” I tend to look for engineer or general IT jobs. For the first time, I actually see a list over on CNN include Computer/Network Security Consultant as the #8 best job in America. I think this is saying something in terms of compliance and security awareness!

I don’t fully agree with the CNN statement that, “If a system is infiltrated by a virus or hacker, it could mean lights out for the security consultant’s career.” It’s correct that it could mean you’ll probably be looking for a new job. But I don’t think it’s entirely accurate that “This is a job you can’t afford to ever fail in” [says an interviewee for the story]. Our best teacher is failure, and failure is inherent in security. “Failure,” as defined by a hacker getting in, is not the end of the line. The rest depends on detection, response, mitigation, improvement, and honesty. But I do understand that business tends to be all or nothing, especially as you get into public orgs.

On the flip side, I love the first mention under pre-reqs: major geekdom. I fully agree with that. What sets good CISSPs apart from horrible CISSPs? In a nutshell, the geekdom more often than not, and all the other little things that tend to come with most geek/hacker mindsets.

as if heartland and carr don’t get me angry enough already…

Heartland can’t stay out of the news, nor can their CEO Robert Carr. Unfortunately this time the news deals with a new lawsuit that claims…well…check the excerpt below. Does this explain, or at least put into perspective, Carr’s newfound religion in regard to security? (Actually, it convinces me he’s all hot air, and I would only trust actual technical audit/pentest findings over whatever he claims reality to be; but that’s not much worse than how I felt when the breach announcement broke…)

In a November 2008 earnings call, according to the complaint, Carr told analysts, “[We] also recognize the need to move beyond the lowest common denominator of data security, currently the PCI DSS standards. We believe it is imperative to move to a higher standard for processing secure transactions, one which we have the ability to implement without waiting for the payments infrastructure to change.”

So much politicking and legal posturing in the media/public over crap like this. People say one thing, but reality is totally different. The article even mentions how VISA removed Heartland from its list of compliant service providers this year while (someone at VISA) still claims that no one compliant with PCI has been breached. Ugh…exactly the wrong approach to take. That’s like admitting you have your head up your ass.

mcafee course teaches students how to create/use malware

Seems McAfee is holding a course this week on how malware works, where students will likely get hands-on learning in how to make a Trojan (or at least work with one) and do other things malware authors/users like to do. I first saw this in a post on Kurt Wismer’s blog.* In the post, Kurt goes over a few reasons why this course is a bad idea for McAfee.

I’m not sure I totally agree with him, but I don’t have any violent disagreements here either. A few points I would bring up in defense of the course (yeah, I’m marking the calendar as the day I actually gave a flimsy defense of McAfee!):

1. The course is 4 hours long and carries the attached cost of the Focus 09 conference. I’m not sure the course will have any newbie script kiddies in attendance looking to make their mark in the malware business.

2. OK, the point detractors make about this course is not necessarily about script kiddies, but possibly about newbie researchers getting their hands on these tools/skills for the first time and not fully understanding the risks of a rogue, uncontained piece of malware getting out of their home labs (or, god help us, their work environments if they experiment there!). Fair enough…but I think most virus writers and even anti-virus writers probably had their start under worse conditions and with less guidance.

I guess the point of 1 and 2 is that I’m not sure McAfee is introducing any new enablement with their course. If the labs/slides were made public, I would have more of an issue with it.

3. As defenders, we do need to stay abreast of these techniques. If learning how an attack can be done helps me be a better defender, I’m not sure I could argue against that. Well, not directly anyway. My point in going down this road is that maybe someone will write some malware and do Evil Things, but maybe someone else will take this education and become the next senior engineer at Vendor X, or stop Evil Things in their own company. I don’t know, but I’d rather disseminate information if the Evil doesn’t outweigh…

I suppose one could pull the analogy of bomb-making into this discussion. Is it OK to teach people how to make bombs? Perhaps not. Should anti-bomb engineers (yeah, what they’re actually called is escaping my recollection right now) know how to make bombs? I think so.

4. Kurt has a great point that maybe McAfee, as an anti-malware company, shouldn’t be educating others on how to make more malware. I think this would be far more true if they were, say, teaching a room full of high school students. Less true here, although still a valid argument.

5. Kurt’s also correct in saying it doesn’t matter if McAfee is teaching these concepts using an already-existing toolkit or writing things from scratch. That really should have no bearing on the discussion.

In the end, I’m not holding fast to a pro-course stance, but I have some reasons to stay on the fence about this topic (agnostic, if you will, while erring on the side of the course having value).

* I like kurt’s posts/opinions most of the time. Even when I don’t agree with them, he states them clearly and with the kind of informed conviction that all people should exhibit.

is virtualization here to stay, or just a stop-gap back to big iron?

Hoff has opined about virtualization over on his blog. He calls it an incomplete thought (a blog post series, really), but it’s actually quite thorough and deep. I suggest reading the comments as well.

In essence, Hoff says, “There’s a bloated, parasitic resource-gobbling cancer inside every VM.” It’s true. Virtualization isn’t a solution to much of anything. It’s a golem of a beast, created to fix problems that were themselves symptoms of much larger problems.

Here’s a really quick, 30-second mindset I have on this.

  • mainframes centralize everything and people get things done with their slices
  • personal computers take the world by storm
  • suddenly everyone can do something on their own without the centralized wizards and curtains.
  • …and everyone does things on their own, creating apps, languages, etc; decentralized apps and data
  • the OS just can’t keep up; the same feature bloat that hits all software that wants to be popular and fit every niche need (McAfee, Firefox, browsers, etc.) hits Windows too
  • then shit gets too splintered and the IT world becomes an inefficient money-drain of equipment and maintenance
  • attempts to centralize everything are met with cries of “they’re stealing our admin rights!” right alongside “but my system is slow when I have admin rights!”

All of this ends up turning into a cycle, and one we’re destined to follow over and over. Big iron. Smaller iron. Big iron. Centralized. Decentralized. Centralized. Administrative power over your individual system. Locked down. Empowered. Locked down. It’s like a “grass is greener” mentality out of control.

But it’s more than that, as well. Part of this cyclical mess of a vortex is the speed at which technology is progressing and our world is changing. It moves so fast that no one (business or individual) can take the necessary time to do any of this correctly. As you’ll hear Ranum, and I think even Potter, say in recent talks, the problems of today are the mistakes of 15 years ago. I think things just move too fast for us to realize it.

At any rate, it’s not like we can do much about it today, but at least we can be cognizant of this situation and do what we can in small measures to avoid the eddies and undertows that drown so many in these changes.