more info on the tls/ssl mitm attack

Some more information is slowly getting out about the TLS/SSL MITM attack via an “authentication gap” that was disclosed yesterday. As I somewhat inferred from the original details, this has limited potential (usually against connections utilizing client certs) and does not result in sniffing traffic. As I somewhat expected, given the limited confirmed info that is out there, this is not a big deal to most, but may be a big deal to smart card vendors. As mentioned in the linked article, SOAP and other web service connections may also be susceptible.

Even if the public risk isn’t huge, those three issues mentioned at the end may still be pretty important, especially if someone weaponizes and scripts this out for easy use to usurp connections or inject Bad Things (inject, POST, follow up with GETs?).

Props to the security community on Twitter for being such an insanely great way to spread news on issues quickly. I saw the above link via the TaoSecurity tweets.

andy, recently unsatisfied it security guy

Mr Andy IT Guy has posted a great article about his recent unsatisfying experiences as a security guy and subsequent positive move onward!

I can’t say I fully agree when he says that security needs to be separate from IT and so on regarding the political structure of an organization. However, I don’t know exactly what the right answer should be, and suspect that it differs depending on the corporate/mgmt culture. I believe in an audit (test, QA, etc) function that does checks. I believe in a group that has the same access to the business and infrastructure as the IT teams (monitoring, investigative, SOC/NOC, etc), but only does security tasks (and doesn’t wait for IT to get a span port right for the security tools to work). I believe in baking security knowledge and practice into the IT roles themselves. Sadly, all of that often lives in an ideal world. Ideally, I believe there are many security professionals of such a high degree of integrity that if you made them roughly gods in the company, they would properly secure the shit out of it without all the political BS.

And, too often, I think some organizations just have no desire whatsoever to do security. They just don’t want to do the shit and they don’t want to do the shit right. Sadly, that will also be a reality, and hopefully we don’t have too many truly gifted, hard-working, positive security geeks tied up in such organizations for too long. (Maybe this is why ‘security consultants’ are such a rising deal. Organizations don’t want security, but they want some quick answers…)

I really love the mention at the end that we security geeks often get worn down. This is true. We get worn down. We get negative. We need to vent. We sometimes think the tasks are impossible. We even get frustrated and angry and share our impassioned war stories over beers and strippers (I’m listening to too much Exotic Liability!). That’s why this industry and culture we have is so cool! Because we’re not all negative at the same time, and we understand that sometimes we have to vent and sometimes we have to support the venting from our peers. But hey, hopefully with hard work and an alignment of the corporate stars, we can effect some positive security change when we have the opportunities to do so. The long-term goal is education, and whether we see it reflected or not, we do slowly improve the education of those around us (even if it causes *them* to take up beer binging, too).

tls mitm attack initial thoughts

Saw this first shoot out on Twitter at the end of my workday, but without any details, I simply made a mental note to keep an eye out. Sooner than expected, further details on this TLS MITM attack have surfaced.

Is this a big deal? Possibly. Certainly big enough to keep on the *front* burner, especially since initial details are pretty technical.

Does this allow an attacker to intercept and sniff TLS-encrypted traffic? It doesn’t sound like it so far. If I’m reading this correctly, an attacker can inject data into the stream and influence what the browser (in a web client->server scenario) renders, with no visible warning to the client that bad data has been introduced. Or maybe I’m seeing that the client can influence what the server sees in the requests being made, in which case this is an attack on the server? Either way, this appears to be an MITM injection attack and not necessarily MITM sniffing.
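To make the injection idea concrete, here is a minimal sketch of the plaintext splice as it has been described so far: the attacker opens his own TLS session to the server, sends a partial request, then relays the victim’s handshake as a renegotiation, and the server’s application layer glues the victim’s real (authenticated) request onto the attacker’s prefix. The hostname, paths, and header names below are made-up placeholders for illustration, not anything quoted from the advisory.

```python
# Hypothetical illustration only: how an attacker-chosen prefix and a
# victim's authenticated request could end up parsed as one HTTP request
# if the server bridges application data across a TLS renegotiation.

attacker_prefix = (
    b"GET /account/transfer?to=attacker&amount=1000 HTTP/1.1\r\n"
    b"Host: bank.example\r\n"
    b"X-Swallow-Next-Line: "      # header left without a value, so it absorbs...
)

victim_request = (
    b"GET /account/summary HTTP/1.1\r\n"   # ...the victim's real request line
    b"Host: bank.example\r\n"
    b"Cookie: session=victim-session-token\r\n"
    b"\r\n"
)

# The server's app layer sees one request: the attacker's path, the victim's cookie.
print((attacker_prefix + victim_request).decode())
```

Nothing here is decrypted by the attacker; he only gets to prepend his own bytes, which is why this reads as injection rather than sniffing.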

I’m also unsure how this plays out for TLS negotiations without client certificates, which is the setup most people are probably familiar with in their web environs.

I wonder if this might be very important for anything using TLS and client certs for authentication, such as the smart cards mentioned in the advisory. Would it be possible for someone to usurp that authentication and re-use it such that the attacker can then access/view those protected areas on the server?

When I find out more, I’ll post a follow-up!

ramblings on evolving security

Evolving security. I’m coming back to a post by Josh Corman on FUDSec, which I mentioned last week.

I don’t disagree with the post Josh made, but it doesn’t necessarily leave me in a beautiful spot. There are three thoughts I have in response to the post.

First, sure, I’ll buy on a certain level that we need to change. But even if we all agree we need to, what next? That isn’t answered at all. This sort of discussion is something I’ll have in the bar over drinks with fellow geeks, but that doesn’t make it fruitful or useful.

Second, why do we need to change? I know, Josh went into this in detail as business always changes and attackers are ahead of us and we don’t retire security tools and so on. But he doesn’t sell me on why those situations are bad or how they are not supposed to be that way. Do we want to change just because we have attackers taking some wins? We (defense) can’t “win” security, so will this argument be a perpetually fueled one? By the very nature of things, insecurity will always be ahead of security as one follows the other. Fundamentally, this is not a new issue with PCI or even computers.

Third, that’s not to say we’re doing the best we can. But I’m not going to go so far as to throw up my hands and start over, or think some new evolution or innovation will save us.

Let me make a few statements that I don’t think need fancy backing or proofs, and that relate to my mindset on this topic after reading his post and the comments. Really, I tried hard to keep these quick, but sadly I still failed.

1. Change is inevitable in business (the unknown).

2. A good portion of the infrastructure does not change (the known).

3. Security needs to manage…security…for both #1 and #2.

4. Security is a function of economics.

5. We get better at managing #2 given time and resources.

6. We get better at managing #1 given resources. Time turns #1 into #2.

7. #1 introduces uncertainty and new risks, challenges, security holes. Less knowledge; often more complexity.

8. Security is not necessarily against #1, but it puts pressure on it to be secure while still being economical to the business and not a barrier. A security guard will never speed up flow past a checkpoint; he will only ever slow it down.

9. There is no “security win” or “state of security” to security geeks, but that might exist for a narrower business perspective (compliance).

10. The culture and personality of executive mgmt (or stakeholders) will determine how everything above (#1-8) is handled and in what order/magnitude.

11. We will always be better at securing the known (#2) as opposed to securing the unknown (#1). Attackers will be unpredictable in their skill at attacking #1 and #2; 0days happen in both.

12. It is as difficult for business to put security up front on par with the pace of agile/forward-thinking and new business ventures and experiments (clouds, etc) as it is for a new programmer to build security before proving that her code actually works. Likewise, it is as difficult as a start-up company having a mature infrastructure (both tech and mgmt) before they know their ideas/products are economically viable. It doesn’t work any other way! This is very hard-wired in almost everything we do that is new and bleeding edge. You plug in the cable and test connectivity before you lock down that connectivity. You have a control group before you have an experiment group. You build Facebook before you secure it.

13. Business will always reward the agile risk-takers up front, for better or worse.

14. Again, if you move forward, you can’t come close to perfect security. Truly accept that.

15. And this gets back to detection, response, visibility, transparency, standards, identification/authorization, least privilege, monitoring, logging. Things that are ongoing and don’t necessarily care or change based on legacy or bleeding-edge infrastructure. It doesn’t take an “evolved security geek” to do those things well, regardless of the level of #1 and #2 in an organization.

16. And lastly for now, while mgmt and stakeholders control the weather of corporate culture, the level of passion and enthusiasm in security geeks will determine their course of actions as well. Not the least of which is their own happiness with their current state of security and effort.

this old brain works, just a bit slow sometimes

It took about a week, but I finally remembered the essay from years back by Noam Eppel: Security Absurdity: The Complete, Unquestionable, and Total Failure of Information Security. The site is now defunct, but the essay was a pointed finger at how security was not working, with the promise of a follow-up offering suggestions on how to fix it. No satisfying follow-up ever emerged, however. I mentioned it here a while back, and it came to mind again last week.

just a resume update blurb

Finally got my resume (pdf) updated with my CISSP status. The thing I hate most about resumes? Nope, it’s not describing accomplishments. It’s the damned list of technical skills and programs I know. I try to use that as the part I tailor to job descriptions and the tools they list. Otherwise, a good IT tech simply has the aptitude to pick up and learn anything he or she doesn’t already know. I didn’t list Python? I’m confident I could pick it up quickly. But I hate feeling a little strange listing anything I’ve not used professionally or extensively on my own, like Python. It’s silly how even old, stupid tools stay listed there for years…

social network security and user education

It’s not often that I think user education is a solution (or close to one); usually I think it’s just a small sliver of a company’s security posture. But one situation where it really does matter is policing social networking for an organization.

I’m listening to Exotic Liability 37, and Ryan and Chris are having a great discussion on what organizations should do about social networking. I agree that companies need to have policies on social networking, but I’m sympathetic to the feeling that an organization shouldn’t be reading every post that every employee makes on their personal time, or requiring employees to disclose their social networking identities. That seems like a huge effort for very little gain, especially as most people never post anything to do with the organization.

I agree that anything about the company should be addressed, and anything where someone may be misconstrued as speaking for the organization should be curbed. That should be done by policy and user education. As should any unauthorized use on business time when explicitly prohibited. Or maybe something like an exec making a comment about visiting Smegma, Florida, when someone knows a potential acquisition target is HQed there, which could divulge big information.

But I’m not sure I can say employers should be inventorying your identities online and examining every post you make. Considering how much crap is posted to so many places compared to how much would actually damage a company, it seems like a waste of resources to watch it.

I liked how the EL guys briefly touched on the idea of following developers. That is one place where you really could get some information, for instance code snippets posted to a help forum. The problems here, though, are similar. How many thousands of such sites exist? And how often would those snippets and tidbits actually be useful?

I guess it all depends on the company and what their interest is in protecting information. Defense contractors, game companies, and Apple would be far different than a small business in Wichita that only serves local customers. I think a policy is necessary, user education is necessary (tailored to the level of employee), and some measure of monitoring for references to your company may be necessary. But I’m not sure monitoring individuals will offer good return for most cases.

on perimeters, clouds, database outsourcing, security

I’ve long been somewhat anti-anti-perimeter. I understand why groups of professionals will say there is no more perimeter, and while I agree with most of their observations, I don’t really buy their conclusion that the perimeter is dead. I still feel there is a perimeter and there will continue to be a perimeter. It’s just not as hard and physical a stop as it used to be.

But, finally, the first real crack in the perimeter issue (at least to me) is coming from “cloud” services, like this Amazon RDS service. Basically, you want a database hosted by someone else? There you go, a MySQL instance at a third party. Really, this is outsourcing your IT infrastructure piece by piece. This is like hooking me up to an external plasma or blood machine and making it a critical part of my circulatory system. You’ll make me extremely nervous every time you get close to those delicate tubes and the power switch next to my bed! At least when it was all inside my body, I knew when something was wrong.

While I think this is a horrible approach for security (in light of our ever-increasing sensitivity to data flow, transmission, and storage), I do recognize that it continues the destruction of “the perimeter.” Pretty soon we’ll have these golems out on the web where the web front-end is hosted at X, the database is hosted at Y, with API calls to A-thru-M, all built with no security in mind. The silver lining? It continues the push for encryption once you’re outside the traditional perimeter. Is this bad? Who knows, maybe this will evolve into something awesome, but for now my initial feelings are quite cynical (if I were a web developer, I’d probably think the opposite!).
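If that encryption push is the silver lining, the least an app can do is fail closed on it. Here’s a minimal sketch of what I mean, using mysql-connector-python against a hypothetical third-party-hosted MySQL instance; the endpoint, credentials, database name, and CA bundle path are all made up for illustration.

```python
# Sketch: refuse to talk to an externally hosted MySQL instance unless the
# session is TLS-protected and the server certificate actually verifies.
# Hostname, credentials, database name, and CA path are hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="mydb.hosted-db-provider.example.com",   # hypothetical remote endpoint
    user="app_user",
    password="not-a-real-password",
    database="appdb",
    ssl_ca="/etc/ssl/certs/provider-ca-bundle.pem",  # provider's CA bundle
    ssl_verify_cert=True,                            # fail on an unverifiable cert
)

cur = conn.cursor()
cur.execute("SHOW STATUS LIKE 'Ssl_cipher'")  # sanity-check that TLS is actually in use
print(cur.fetchone())
cur.close()
conn.close()
```

None of that makes outsourcing the database a good idea by itself, but it at least treats the link for what it is: an untrusted network between you and your own data.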

And like most outsourcing endeavors, I really think this will be a cool, trendy cost-saver in the very short term, but all the issues that come with “cloud” and outsourcing and trying to make a customized service into a one-size-fits-all product (study business strategy and economics to see why I make such a fuss about those two categories of product) are going to challenge this deeply beyond the next 12 months. At least with offering something narrow like a “database instance” you could maybe get away with calling this less a customized service and more a standard product. It’s definitely much better than saying something vague like, “we’ll massage data if you send it to us”. But still, it’s a very narrow piece that must rely on something else, and it is the stringing of those sorts of connections across untrusted networks that is sketchy.

interesting read on evolving security

Also via Chuvakin, I skimmed an article by Josh Corman on evolving security. Perusing the comments, I see good points about the vagueness on what we’re supposed to be evolving into.

This reminds me of a few years back when someone threw down a great essay on why security sucks, with the promise of a follow-up so they didn’t sound like someone just complaining. That follow-up never truly came. (Fine, it came, but it just opined about other people’s responses; a half-assed fulfillment at best.) (I’m having a problem finding it or remembering enough specifics to search for it, but I will find it!) Update: It was Noam Eppel’s essay on the total failure of information security [now defunct], which I posted about years back.

One thing I’ve slowly learned (and am still learning) through my business/work experience is that you don’t often want to just rage without a plan of action. Not unless you’re aware that you’re just venting, in which case it’s ok. Otherwise the first question from anyone who helps determine your future is, “What do you suggest?” That pivotal, important question…like a knight challenging your queen on your own side of the board…that, if you don’t have an answer for it, is the beginning of your endgame.

Especially in security, we need to step back and ask ourselves why we think security needs to evolve. Is it because we’re still insecure? If so, then you’ll rage forever because there’s no “win.” Unless we want to define “win,” which…yeah…that’s a good start. I feel this is an industry that can only define itself after the fact, rather than define some novel approach that is “oh my god” glorious and impacting. We’ll define our security methods and standards only after we try them out and see if they worked, or in what measure they worked. This is why I see ‘security’ more a science than a business discipline… *…now where’d I put my crack rock…*

it’s official: i hate the term “cloud”

This is too good not to repost. Via Chuvakin, I got linked over to an article on CSOOnline: 5 Mistakes a Security Vendor Made in the Cloud. I think this is a kick-ass article for three reasons. First, these are many of the same points I’ve been making since I first heard the term “cloud” a year ago. Second, no shit these are problems. These are problems in traditional software (from notepad apps to OSes). Cloud will not fix them, not without incurring tons of cost and stealing away the efficiencies that cloud exists to take advantage of. The “cloud” still has an identity crisis, not just with itself but in how it has been marketed and defined by everyone else: it doesn’t know whether it is a service (customized) or a commodity (one size fits all). Customers think they want commodity (Salesforce!) and vendors want to give commodity. But business doesn’t work well with commodity IT solutions and tends to drift over into customized stuff, which (real) cloud vendors really can’t offer without simply becoming another word for outsourcing your IT/development.

The third reason this is a kick-ass article: it illustrates the bastardization of the term “cloud,” because the example is not what I call “cloud.” The examples given in each mistake don’t sound like a “cloud” solution but rather a centrally managed software app. Nothing more. I would call that a case of marketing being stupid. You could place the name Microsoft (Windows) or Symantec (AV) into each mistake and it’d fit. Those aren’t cloud.

Anyway, here are the 5 mistakes.

MISTAKE 1: Updating the SaaS product without telling customers or letting them opt out – Telling customers about updates should be done, but even traditional software vendors often aren’t clear about it. And even if you notify customers, far too many won’t give a shit until it breaks something. Letting customers opt out is a recipe for disaster. Part of the beauty and draw of “cloud” is that you can make robust, agile solutions that fit a wide swath of your customers. But if you allow customers to opt out, you’ve just created lots of little exceptions and splinters, all of which will end up being maintained specially, or being called “legacy.” Traditional IT and software knows this well.

MISTAKE 2: Not offering a rollback to the last prior version – Same problem applies here, too. The ideal goal should be to never have exceptions. But I believe “cloud” just can’t do that in every solution. Salesforce can do it. “Cloud” computing for business intelligence cannot (imo, it’s too customized). That or we’re too muddled on what “cloud” means…

MISTAKE 3: Not offering customers a choice to select the timing of an upgrade – Sort of defeats the purpose of “cloud” and either gets us back to traditional software or a managed services provider. Neither of which I consider “cloud.”

MISTAKE 4: New versions ignore prior configurations or settings, which creates instability in the customer environment – This is one reason why products bloat. The larger they get and the more Voltron-like they become (especially through acquisitions by larger giants), the more they bloat and look like ass, because you can’t take things away. At any rate, this sounds like a software upgrade process problem, not a “cloud” issue.

MISTAKE 5: Not offering a safety valve – Why would “cloud” do this?

user-supplied content sites help scammers

Comment spam continues to evolve. I think spammers are learning that the more general and succinct their comments are, the more likely they are to be mistaken for real comments. Sometimes the only tipoff I see is the link they leave in the link box.

But what if that link goes to a site you know, but to a page of user-supplied content? Like a Twitter account just made by a bot, a LinkedIn account, or a MySpace page? Eventually you lose, either by being suckered or by swatting away what might have been a real post!
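For what it’s worth, that tipoff is the kind of thing you could at least partially automate in moderation. Below is a rough, hypothetical heuristic, not anything a blog platform actually ships; the host list, word-count threshold, and comment structure are all my own assumptions.

```python
# Rough sketch of a moderation heuristic: short, generic comments whose only
# link points at a user-supplied-content page on an otherwise familiar site
# get flagged for human review. Host list and thresholds are assumptions.
from urllib.parse import urlparse

UGC_HOSTS = {"twitter.com", "www.linkedin.com", "www.myspace.com"}

def needs_review(comment_text: str, link_url: str) -> bool:
    host = urlparse(link_url).netloc.lower()
    too_generic = len(comment_text.split()) < 15   # "Great post, thanks!" territory
    return host in UGC_HOSTS or too_generic

# Example: a terse comment linking to a freshly created profile page.
print(needs_review("Great post, thanks!", "http://twitter.com/some_bot_account"))
```

It won’t catch everything, and it will flag some legitimate comments, which is exactly the lose-lose I’m describing above.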

ford engineer takes data with him to new job

If someone important tenders their resignation tomorrow, would you be able to see whether, over the last week, he has been siphoning off confidential information from your network to use at his next job? Do you ever produce exit reports on what information that person had access to while with the company, even if you can’t tell what he did or did not copy? I’d consider these important, but fairly advanced, questions for a security team to ask.

A former Ford Motor engineer has been indicted for allegedly stealing thousands of sensitive documents from the company and copying them onto a USB drive before taking a job with another auto company.

This happens. It happens a lot, and it has always happened. Technology has just made it easier, larger in scale, and trackable (even when done remotely over VPN!). This is one of those dirty little secrets of sales force hiring and even some executive job-hopping (“What can you bring with you to us?” is an oft-unspoken question).
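Getting back to the exit-report question up top: it doesn’t have to be an advanced capability if you already collect file-access logs. Here’s a minimal sketch of the idea, assuming a hypothetical CSV log with timestamp, user, action, path, and bytes columns; the log format, filenames, usernames, and thresholds are all made up for illustration.

```python
# Sketch of a departing-employee "exit report": pull one user's file activity
# for the week before resignation and flag large copies or downloads.
# The CSV layout (timestamp,user,action,path,bytes) is an assumed format.
import csv
from datetime import datetime, timedelta

def exit_report(log_path, user, resignation_date, days=7, min_bytes=10_000_000):
    cutoff = resignation_date - timedelta(days=days)
    flagged = []
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            ts = datetime.fromisoformat(row["timestamp"])
            if (row["user"] == user
                    and cutoff <= ts <= resignation_date
                    and row["action"] in ("copy", "download")
                    and int(row["bytes"]) >= min_bytes):
                flagged.append((ts, row["path"], int(row["bytes"])))
    return flagged

# Hypothetical usage: review jdoe's last week of activity before the exit interview.
for ts, path, size in exit_report("file_access.csv", "jdoe", datetime(2009, 11, 6)):
    print(ts, path, size)
```

The hard part isn’t the script; it’s having the logging in place before the resignation letter lands on your desk.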

catching up on choicepoint and paychoice breaches

Just a pointer over to a CNET article talking about the recent ChoicePoint and PayChoice breaches and the activity swarming around them.

In April 2008, ChoicePoint turned off a key electronic security tool that it used to monitor access to one of its databases and failed to notice the problem for four months…

I think it is misleading (for the FTC) to say it took 4 months to discover that a key security tool was disabled. Who knows how long it would have been disabled had an investigation not taken place.

It might seem like these companies are Doing It Wrong. But I suspect they’re no different than most of their peers. They’re just the ones caught with their pants down and are now subject to extra scrutiny. This is good, but I wouldn’t outright say these two specifically suck more than others.

The FTC alleged that ChoicePoint’s conduct violated a 2006 court order requiring the company to institute a comprehensive information security program following…

This is pretty interesting. Would this mean that once you suffer a data breach, you’re forever needing to be perfect? This is like being on the sex offender list; once you’re on it, you’re basically a prisoner of sorts for life. This could have subtle implications for long-term costs of a major breach.

that wal-mart breach you barely heard about (2006)

If it weren’t for the blogs I follow, I’d miss tidbits of news as the weeks roll past. Like this update to an “old” Wal-Mart breach that occurred back in 2006. (This is what I remind myself when I repost rehashed things…just in case I want the links later on or someone who reads mine didn’t see it elsewhere.)
I’m pulling out nuggets that struck a chord with me. Yes, I’m cynical!

Wal-Mart uncovered the breach in November 2006, after a fortuitous server crash led administrators to a password-cracking tool that had been surreptitiously installed on one of its servers. Wal-Mart’s initial probe traced the intrusion to a compromised VPN account…

First, I’m not surprised that the breach was discovered by accidental (or third-party) means. This probably happens 90%+ of the time (my own figure, and I think I’m lowballing it!). Second, it is quite well known that VPN connections are an issue. I don’t want to take the time to look it up, but I distinctly recall reading in numerous places that remote employees have a tendency to feel more brazen about stealing information and, as in the case of Wal-Mart, run on less secure systems with less secure practices while still connecting directly into sensitive corporate networks. Basically, VPN (remote) access is not to be taken lightly. If someone can subvert that one single login, your entire organization could start falling down. (Think how bad it would be if an IT admin logged into the VPN from a home machine infected with a keylogger. Hello, admin login!)

Wal-Mart says it was in the process of dramatically improving the security of its transaction data…

“Wal-Mart … really made every effort to…

Security doesn’t give a shit about talk. You’re either doing it or you’re not. That’s why verifying that the talk actually got done is what drives this industry. It also illustrates a huge problem (one that affects more than just security) when management has a reality/belief gap between what they think is going on and what is really going on.

Strickland says the company took the [PCI-driven] report to heart and “put a massive amount of energy and expertise” into addressing the risks to customer data, and became certified as PCI-compliant in August 2006 by VeriSign.

I’m not about to wave around the fact that a PCI-compliant firm had a data breach. In this case, no PCI-related data was actually divulged. But…this breach could have led down the road of revealing POS source code, flows, and infrastructure such that those defenses could have been broken. Basically: chasing PCI compliance is not the same as chasing proper security for your organization. It’s a small slice and sample of what you should have in mind when you think corporate security. For instance, many orgs spend a lot of resources limiting the PCI compliance scope rather than tackling the security of the things they argue out of scope. Reminds me of shoving my toys under my bed and calling my room clean. Out of sight, right?

I think this also underscores the absolute need for organizations of sufficient size to have a dedicated security team with high influence over all of IT. It’s not just about detection mechanisms and watching dashboards, especially if the network/server teams place them in bad positions or don’t feed them proper flows. You can’t just watch; you have to poke and probe and continuously test your own systems and architecture for holes. And not just via an annual pen-testing team, but via people who have vested interests in and deep knowledge of the organization and its innards. You shouldn’t have to find out that your IDS, firewalls, logs, and patching efforts are “inconsistent” only after a real breach. If you need to, role-play security incidents just like the business demands role-playing disaster recovery plans.