patco vs ocean bank vs reasonable security

Brian Krebs has an article up on the case of Patco vs Ocean Bank. This case could have important industry ramifications, as the key point of contention is what technically constitutes “good enough” security on the bank’s part.

I don’t suggest reading too many of the comments. This is a delicate and far-from-clear situation, which many commenters don’t seem to grasp very well. Much of the angst centers on whether the bank really had 2-factor authentication, or possibly on the outdated guidance from the FFIEC.

Side note 1: I’ve not read the actual case file, but from Brian’s article, I’d say Ocean Bank isn’t using 2-factor authentication.

Side note 2: Does always asking security questions on every transaction reduce the security value? Actually, sort of, when your attackers are employing keyloggers and you normally don’t have transfers that trigger those questions. Then again, any attacker who runs into those barriers will just keep lowering their transfer amounts until they’re under the threshold. Hopefully that would trigger some fraud alerts…
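
To make that concrete, here’s a toy sketch (entirely hypothetical thresholds and logic, nothing to do with Ocean Bank’s actual systems) of a velocity check that catches transfers deliberately split to stay under a per-transaction challenge threshold:

    from collections import defaultdict
    from datetime import timedelta

    CHALLENGE_THRESHOLD = 1000.00  # per-transfer amount that triggers security questions
    VELOCITY_THRESHOLD = 2500.00   # aggregate per account per day that raises a fraud alert
    WINDOW = timedelta(days=1)

    transfers = defaultdict(list)  # account_id -> list of (timestamp, amount)

    def record_transfer(account_id, timestamp, amount):
        """Return any alerts raised by this transfer."""
        alerts = []
        if amount >= CHALLENGE_THRESHOLD:
            alerts.append("challenge: ask the security questions")
        transfers[account_id].append((timestamp, amount))
        # Sum everything in the trailing window, including sub-threshold transfers.
        recent = sum(a for t, a in transfers[account_id] if timestamp - t <= WINDOW)
        if recent >= VELOCITY_THRESHOLD:
            alerts.append("fraud alert: aggregate exceeds daily velocity limit")
        return alerts

An attacker sending three $900 transfers never trips the challenge, but trips the velocity alert on the third one.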

Side note 3: At some point consumers (and businesses) need to apply their own diligence and do their banking on trusted systems. If you hire a courier or some other proxy to run to the bank and make transfers for you, and that person skips town with extra money because they inflated the transfer amount and sent it to themselves, do you blame the bank or your own hiring practices/trust? In this way, computers are a sort of proxy; granted, a proxy that answers to anyone with the right handshake, so to speak…

Side note 3a: Unlike the “simple” maintenance, safety, and security of a car or other vehicle, the care, safety, and security of a computer system or network is still going to be far above the heads of most consumers and workers. Telling people they need to put forth their own effort in maintaining a trusted computing platform is often going to be met with tears of anguish and outrage…as they then turn their eyes to app/OS vendors and their security track records…or to the government’s lack of “internet jurisdiction” in keeping foreign attackers out, or at least under threat of arrest…and on and on.

Side note 3b: All of this ends up raising the question of what is reasonable in a highly technical, globally connected digital world. I’m not sure anyone will ever be happy with where the decisions fall in such a discussion.

the enemy knows the system, and the allies do not

Go read Gunnar’s quick piece (and the comment) about Jay Jacobs’ insight on Shannon’s Maxim (can I make this sentence more awkward?): the enemy knows our systems, but the good guys don’t.

Even looking at it from the network perspective, the enemy knows your firewall rules, yet so many internal folks do not. It sucks to look at a firewall and ask why rule #267 is present, only to have no one able to answer.

Or to have a developer look at the security person who wants security, while that developer has no idea, and no one else to talk to, about how to fit it in without potentially breaking everything else. As Jay says, “…people aren’t motivated to evaluate all the options and pick the best, they are motivated to pick the first option that works and move on.” (Coders/developers are notorious for this, but so are sysadmins and users!)

Essentially, security staff are often covertly treated as the experts in…everything internal, which is a tough requirement to ever meet. Really, the organization needs to know its own stuff intimately.

Before the enemy does. This is still why I consider pen-testing activities valuable: they often expose exactly what an attacker has learned that the organization hasn’t.

As Marcus essentially says in the comment to the linked article, I’m sure the revolving door of (questionably skilled) outsourced and contract IT doesn’t help at all.

and there is still a craving for rsa hack details

Lockheed Martin recently suffered a hacking incident. In the days that followed, the NY Times reported that the attack was indeed linked directly to the earlier RSA breach, in which still-unidentified information was stolen from RSA. CNET has posted more information and links, and Wired has a blurb about L3.

As I mentioned on Twitter, how much better off would we all be if RSA had divulged full details to the public or to affected parties? Were they just going to wait and hope nothing came of whatever was stolen from them?

Of course, with something like this the worst should be assumed, but that’s not a great strategy to pitch to your boss or use to formulate your budgets and risk postures. No one assumes the worst; if they (or we) did, we’d have far better security initiatives…

I understand they are certainly fixing whatever was broken and replacing what needs to be replaced, but it’s still irresponsible in my book.

management vs technical ramblings

Jarrod Loidl has an interesting discussion on the topic of management vs technical careers. This is definitely something to keep in mind as a career moves forward, and I think he ends up hitting most of the milestone points in such a thought process. It’s a long post, but it keeps firing on all cylinders even at the end.

I really like the ending tandem points of, “do what you love,” and (in a Wolverine voice), “be the best at what you do.” Combine that with, “don’t be an ass,” and you really have a simple guide to work and life.

If I were to look at my own lot, I’d say it certainly is hard to keep current with the skillsets. I remember starting my career around Windows XP, and I still feel like I know it inside and out. Windows Vista/7? I doubt I’ll ever be as intimate (then again, I don’t do desktop support right now). On the managerial side, I feel like I have excellent organization, attention to detail, a high degree of problem-solving/troubleshooting skill, and I make accurate decisions quickly (backed by confidence in those skills) when I need to get things done. My downside is that I’m not entirely a people person. Oh, once I get going, I’m fine, but it takes significant effort and time for me to find my voice socially in a given group, as any introvert is likely to echo.

That said, at this point in my life and career, I could probably swing management, but I get far more enjoyment out of the technical side of the equation, for a variety of reasons that I won’t dump out here quite yet.* Management is one of those things I accept I’ll do someday, simply because of the decision-making support and analysis skills I have, but I have the luxury of letting that “someday” not be tomorrow quite yet. Perhaps if I snag some security consulting gigs, that would be enough… 🙂

The end thought is one Jarrod mentioned: At least spend the time to do this reflection on who you are, what you are, what makes you happy, why it makes you happy, and so on. Too many people never ask these introspective questions, and they should.

* Updated to add this: This isn’t to say I wouldn’t actually find myself happier in the right managerial position. It’s hard to tell, since I’ve not been in a situation other than a team lead/senior sort of role. While I might not look at managerial want ads, that’s not to say I’d shy away from the right one if its doors opened for me.

a monitoring lesson from fraud correlation

While this article on the Michaels breach is nothing special, I did like the very opening paragraph:

…a sign that strong transaction monitoring and behavioral analytics are the best ways to curb growing card-fraud schemes.

Remove the word “best,” and I really like the application of a paradigm like this to many aspects of digital security.

Of course, none of that level of monitoring and analysis is really new in concept, but organizations still have trouble both recognizing this approach and executing it effectively with a blend of technology and talent.
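
At its simplest, that kind of behavioral analysis boils down to baselining an account’s normal activity and flagging sharp deviations. A minimal sketch (my own toy example, not anything from the article):

    import statistics

    def is_anomalous(history, amount, z_cutoff=3.0):
        """Flag a transaction amount far outside this account's baseline.

        history: prior transaction amounts for the cardholder.
        """
        if len(history) < 10:
            return False  # not enough data for a meaningful baseline
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero on a flat history
        return (amount - mean) / stdev > z_cutoff

A $2,400 charge against a history of $20 to $60 purchases gets flagged; the same charge against a history of $1,000 to $3,000 purchases does not.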

It’s important to note that without customer reports, even analysis can’t tell a bad transaction from a good one…

wild leaps of logic from tech journalists

Cringely has a strange article which continues the RSA SecurID attack mystery: InsecureID: No more secrets?. I can’t say I’ve ever read Cringely before, so maybe he’s just some tech commentator with no real insight here other than a wide following and sensational, wild speculation… (After writing this, scanning his recent articles pretty much shows me he’s just a tech blogger and that’s it. Yes, I mean it when I say that’s “it.” And yes, I’m being ornery today and particularly spiteful in my dislike of tech commentators dipping a crotchety toe into deeper discussions than they’re suited for.)

It seems likely that whoever hacked the RSA network got the algorithm for the current tokens and then managed to get a key-logger installed on one or more computers used to access the intranet at this company.

Wow, that’s quite the leap in logic (on multiple fronts), especially since RSA hasn’t revealed what was pilfered from their network. Common speculation holds that the most likely divulged piece is the master list mapping token seeds to the organizations issued those tokens (probably keyed by serial number).

How would a keylogger assist in this? Well, first, a keylogger alone could be enough to divulge login credentials, although any captured credentials are quite ephemeral when a SecurID token is in use. Second, it could reveal usernames and any static PINs used; I assume the PIN is the “second password” mentioned in the article. *If* (and that’s a big if) the attacker was the same one who *may* have the seed list and algorithm, that attacker could theoretically match up a user and their fob based on the keylogged information.
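
To see why the seed list matters so much, here’s a rough illustration using the open TOTP scheme (RFC 6238) as a stand-in; SecurID’s actual algorithm is proprietary and differs, so treat this purely as an analogy. The point: anyone holding the seed can compute the same codes the token displays.

    import hashlib
    import hmac
    import struct
    import time

    def totp(seed: bytes, now=None, step=30, digits=6):
        """RFC 6238-style time-based one-time code."""
        counter = int(time.time() if now is None else now) // step
        digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return f"{code % 10 ** digits:0{digits}d}"

    # With a stolen seed plus a keylogged username and static PIN, an attacker
    # can submit PIN + totp(seed), and "something you have" is gone.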

Does this mean “admin files have probably been compromised”? No; that’s an even bigger leap in logic. Possible, sure, but only with the correct access and/or expanded control inside the network. Hell, I’m not even sure Cringely knows what he means by “admin files.”

Of bigger concern is how a keylogger got installed on such a system long enough to cause this issue. Granted, something was detected (though I suspect it was *after* a theft or attempted VPN connection), but being able to spot such incidents on the endpoints or in the network itself should be a big priority.

microsoft’s waca v2.0 released: web/app/sql scanner

Microsoft has recently released its Microsoft Web Application Configuration Analyzer v2.0 tool. This is such a straightforward tool to use, with rather clear checks and fixes, that it’s really not acceptable *not* to run something like this, especially if you run Microsoft IIS web servers or SQL instances.

The tool has a nice array of checks when pointed at an IIS box, and even does decent surface checks against SQL. While this tool does include “web app” in the name, I don’t think it goes much beyond inspecting a site’s web.config file on that front. It also requires Microsoft .NET 4.0 on the system you install the tool on, and predictably needs admin rights on any target systems it scans. If you’re curious about any checks, they’re pretty clearly spelled out. Also, if you want to suppress any checks because they don’t apply, you can do so. The report then mentions the presence of suppressions (yay!), and you can even remove the suppressions after the fact, since the tool still performs the checks but just doesn’t include them in the end tallies.
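
As an illustration of the sort of surface check I’m talking about (a hypothetical example of my own, not pulled from WACA’s actual rule set), a config scanner can get a lot of mileage just parsing web.config for classic ASP.NET missteps:

    import xml.etree.ElementTree as ET

    def check_web_config(path):
        """Flag a couple of classic ASP.NET misconfigurations in a web.config."""
        findings = []
        root = ET.parse(path).getroot()  # root element is <configuration>
        compilation = root.find("system.web/compilation")
        if compilation is not None and compilation.get("debug", "false").lower() == "true":
            findings.append("compilation debug=true: detailed errors leak to users")
        errors = root.find("system.web/customErrors")
        if errors is not None and errors.get("mode", "").lower() == "off":
            findings.append("customErrors mode=Off: stack traces shown remotely")
        return findings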

This does make a great companion scan tool to add to your toolbelt for appropriate systems, even if it has a herky-jerky interface.

As a sort of cautionary piece of advice, I wouldn’t be totally surprised if some organizations request this tool be run by potential vendors/service providers whose systems meet the tool’s criteria, which means you’ll hopefully have run it before such a request arrives! It’s much more palatable to request something like this as part of an initial security/fit checkbox when it is an official Microsoft tool. Just sayin’…

some security practice hypotheses

I’m not sure if I’ve jotted these notes down here before, but I wanted to move them from a napkin to something more permanent.

What is the hardest part of security? My thought: Telling someone they’re doing it wrong when they don’t know how to do it right, and you can’t explain it properly. The more technical, the worse it is?

Two examples: First, someone makes a request that is just kinda dumb and gets denied. They come back with, “Why?” And you have to figure out why the value of saying no is higher than the value of just doing it, or what it would cost to accomplish the request while maintaining security and administrative efficiency. (i.e. You want *what* installed on the web server?!) This can be highly frustrating in a non-rigid environment. It’s also the source of quite a lot of security bullshitting.

Second, a codemonkey writes poor code and you point it out. Codemonkey asks how it should be done. If you’re going to point it out, I’d really kinda hope for some specific guidance appropriate to the tech level of your audience. This brings up the pseudo-rhetorical question: should you be pointing out poor code if you don’t know how to give suggestions on fixing it? (Answer: depends. On one hand, don’t be a dick. On the other, anyone should be able to point out security issues; otherwise, far fewer would ever get pointed out! It’s extremely nice when someone *can* help those with questions, though, with actionable answers beyond just “go read OWASP.”)
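
For instance, here’s the difference between just pointing and actually helping (a generic Python/SQLite sketch of my own, not from anyone’s real codebase):

    import sqlite3

    conn = sqlite3.connect("app.db")  # hypothetical app database

    # The problem you pointed out: user input concatenated straight into SQL.
    def get_user_bad(username):
        query = "SELECT * FROM users WHERE name = '" + username + "'"  # injectable
        return conn.execute(query).fetchone()

    # The actionable fix to hand over: let the driver bind the parameter.
    def get_user_good(username):
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchone()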

And here’s a hypothesis: You’re not doing security if you’re not breaking things, i.e. pushing boundaries. Follow-up: Security pursuit breaks things unless you have expert knowledge and experience.

answering some questions on siem

(I should name this: how I can’t type SIEM and keep typing SEIM…) Thought I’d ramble about SIEM for a moment (as I’m also in the midst of waiting on a report to spin up in my own SIEM), sparked by Adrian Lane’s post, SIEM: Out with the Old, which also channels Anton Chuvakin’s How to Replace a SIEM?

Adrian echoed some rhetorical questions that I wanted to humbly poke at!

“We collect every event in the data center, but we can’t answer security questions, only run basic security reports.” – That probably means you got the tool you wanted in the first place: to run reports and get auditors off your butt! More seriously, this is a good question as it somewhat illustrates a maturing outlook on digital security. I’d consider this a good reason to find a new vendor. That or your auditors are worth more than you’re paying them, and asking harder questions than usual. Good on them! (Though I’d honestly hope your security or security-aware staff are asking the questions instead…)

“We can barely manage our SIEM today, and we plan on rolling event collection out across the rest of the organization in the coming months.” – Run. Now.

“I don’t want to manage these appliances – can this be outsourced?” – You want to…outsource…your…security…? You may as well just implement a signature-based SIEM and forget about it, because that’s the value you’ll get from a group that isn’t intimately aware of, or invested in, your environment. Sorry, I would love to say otherwise, and I’m sure there are quality firms here and there, but I just can’t bring myself to do so. It is hard enough to manage a SIEM when you know every single system and its purpose.

“Do I really need a SIEM, or is log management and ad hoc reporting enough?” – That’s a good question! You’d think the answer goes along the lines of, “Well, if you want it to do work for you, get the SIEM; otherwise you’ll need to spend time on the ad hoc reports.” But really, it’s the opposite: you need to spend time with the SIEM, whereas the reports you can likely poop out and turn in to your auditors. This might also depend on whether you do security once a quarter or as part of ongoing ops. It amazes me that people know about this question, have it asked to their face, and then go about life in the opposite direction.

“Can we please have a tool that does what it says?” – Probably the most valid question. The purchasing process for tools like this is too often like speed dating, when really it should be multiple, intimate dates with several candidates; you might even spend some memorable moments together! With a tool as advanced as a SIEM, which has an infinite number of ways it can be run and an infinite variety of logs to slice, you can’t believe what the marketing team throws at you. Hell, you can’t even listen to what the purchasing manager says either. You need the people with their hands in the trenches to talk to the sales engineers and get real hands-on time. Nothing can fast-track that other than some real solid peers (industry networking! oh shit!) who can give you the real-deal information on living with a tool.

The biggest issue in this? No SIEM reads and understands every log you throw at it, especially your internal custom apps! No matter what the sales dude says! (Some will read anything you send in, but they’ll lump the contents into the “Log Contents” field, rather than truly parse or understand it.)
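
A quick sketch of that difference, with a made-up log format and field names:

    import re

    # A custom app log line the SIEM has never seen before:
    line = "2011-06-07 14:02:11 app=payroll user=jdoe action=export status=denied"

    # Without a parser, the SIEM stores one opaque blob:
    unparsed = {"log_contents": line}

    # With a real parser, you get named fields you can correlate and alert on:
    PATTERN = re.compile(
        r"(?P<timestamp>\S+ \S+) app=(?P<app>\S+) user=(?P<user>\S+) "
        r"action=(?P<action>\S+) status=(?P<status>\S+)"
    )
    match = PATTERN.match(line)
    parsed = match.groupdict() if match else {"log_contents": line}
    # parsed -> {'timestamp': '2011-06-07 14:02:11', 'app': 'payroll', ...}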

“Why is this product so effing hard to manage?” – Well, I’ve not seen a SIEM that is *easy* to manage, so who is in the wrong here?

Anton had this awesome paragraph:

By the way, I have seen more than a few organizations start from an open source SIEM or home-grown log management tool, learn all the lessons they can without paying any license fees – and then migrate to a commercial SIEM tool. Their projects are successful more often than just pure “buy commercial SIEM on day 1” projects and this might be a model to follow (I once called this “build then buy” approach)

I think this is a great way to go! But I’d caution: the teams that have the time and skill to roll their own or run open source tools are also the ones that will have the time and skill to manage their commercial solutions. However, the real point is valid: you’ll learn a ton by doing it yourself first, and can go into the “real” selection process armed with experience. To build on the analogy above, you’ve lived with someone for a while, broken up, and now know what is *really* important in a concub…I mean, partner.

a case of digitally spilled milk

A security researcher presenting at BSides-Australia demonstrated Facebook privacy issues by targeting the wife of a fellow security researcher without permission. Sounds exciting, yes?

1. What he did was in bad taste, and maybe even unethical. Let’s get beyond that…

2. We all know in security that you don’t get shit done unless someone gets slapped in the face, hacked, or embarrassed/shamed. This is human and group psychology. So, in a way, this guy probably made more of an impression on people who might read this than would otherwise have happened. Sad, but true. Will it get out beyond the security ranks? Probably not, unfortunately.

3. It doesn’t sound like anything embarrassing or harmful was actually found. I mean, seriously, are people uploading kinky or extremely embarrassing photos to Flickr/Facebook and truly not wanting them seen by anyone else? If so, you’ve already failed. (People who upload such content for public consumption and leave it up for future employers to harvest are a different sort of dumb.)

4. Intent does count for a lot, both in the perpetrator of a crime and in the negligence of the victim, but Heinrich does have an interesting point: “‘I have no ethical qualms about publishing the photos,’ he said. ‘They are in the public domain.’” Facebook may intend for them not to be in the public domain, but they may not be doing enough. Honestly, I’d consider the end result of this to be public domain, yes. Sorry, fix your shit. Wishing and hoping and saying otherwise doesn’t matter. (Yes, I know if I leave my door open and someone breaks in, it wasn’t enticement, but still, shame on me…)

5. In addition, I’m not sure how pissed I’d be if it were my wife and/or kids. I mean, I’ve opted to put my photos up, and as security-aware as I am, I have the opportunity to know the risks. A real attacker who has it out for me is going to do much worse, such as photoshopping images into even worse content, and so on. I’d rather have someone helpfully hack me and expose issues than have a real attacker do so with a vengeance, especially in something that doesn’t harm me any more than a little public ribbing and feeling a little used, like being the butt of a non-harmful joke. In another way of thinking: don’t spend effort getting pissed over little things; know what’s important in life.

At some point in security, the “kid gloves” do have to come off, if you want to get shit done. And we’re all a little “grey hat” every now and then…or at least Bob is…

(Snagged off the InfosecNews wire via this article.)

impassioned people tend to do quality stuff

I read this article about how Process Kills Developer Passion. I’m not a formal coder (more like a scripter), and I only work with a subset of coders (web devs), but I really believe this article hits several points squarely, particularly in how process can kill creativity.

The caveat is that this applies to most anything, really. Process can be necessary, for instance in documenting how things work or why they are the way they are. Or to cover your own ass when requirements try to change in 6 months.

But the point remains: passionate people tend to do quality things; don’t kill the passion.

I’d also point out something that has been permeating a few of my recent posts: There are not always going to be universal, blanket answers for everything!

You won’t appease every developer you hire with any one given coding process methodology. You won’t cover every situation with a monolithic security tool. You won’t reach every student with a singular approach to learning. You won’t block every breach…

Picked this article up via Securosis.

why aren’t they using our technology? (tears)

ITWorld has an article, “Apps to stop data breaches are too complicated to use,” which is itself a rehash of this article on The Reg. The article makes 2 (obvious to us, anyway) points:

1. Security software is too damned complicated to use. No shit.

2. “…the tendency of customers to not use even the security products they’ve already bought.” I think many of these tools don’t get used because they’re complicated, require experts to feed-and-love them and review logs constantly, and piss off the business when they get in the way. They cost money directly, they cost operational money, they cost CPU cycles, they cost frustration from users…

(I’m trying desperately, futilely, to avoid the assertion in the second article: “…needs to change so that the technology can be up and running in hours rather than months…” Trying to meet that sort of goal is ludicrous…)

Strangely, the article finishes with this odd moment:

Security systems, intrusion protection, for example, are often left in passive mode, which logs unauthorized attempts at penetration, but doesn’t identify or actively block attackers from making another try.

“It’s a mature market – please turn it on,” Vecchi told TheReg.

I’m not going to deny or accept that these are mature markets, but I will say most *businesses* aren’t mature enough to just turn security shit on. There are 2 very common results when you “turn on” technologies to do active blocking or whatever you have in mind.

a. It blocks shit you wanted to allow. This pisses off users, gets your manager in trouble, and requires experts to configure the tools and anticipate problem points, or extra time to figure it out (with the danger of some nitwit essentially flipping an “allow all” setting).

b. It doesn’t get in the way, but doesn’t block much of anything by default. I imagine far too many orgs leave it this way thinking they’re safe, when in fact it’s only blocking the absolute most obvious worm traffic and port probes (31337). In order to get it better tuned, you need experts who know what to look for and block.

The ideal is to tune security to butt right up against the point where you’re negatively impacting people while still providing protection. Unless you’re a perfect security god, you will bounce between those two states.

Business doesn’t like that. They want to create a project with a definite start and finish, implement a product, leave it alone, and have it never get in the way of legitimate business.

This is bound to fail. It’s the same concept as a security checkpoint or guard at a door: it’s intended to *slightly* get in the way when something or someone is suspicious, and it does so forever. This is why I have yet to buy into “security as enabler.” Security is designed to get in the way, even security implemented to meet a requirement so you can continue doing business: the requirement is the part that delivers the security, and it gets in the way.

There are companies that “get” security, but I guarantee they are also companies filled with employees who can tell plenty of stories about how security gets in their way on a daily basis, whether justified or not. That’s how it is, and business hates it. Even something “simple” like encryption on all laptops is a pain in the ass to support.

To dive into a tangent at the end of this post, let me posit that security tool-makers are just plain doing it wrong. They too often want to make monolithic suites of tools that cover every base, every customer, every use case, and every sort of organization. This creates tools with tons of features that any single org will never, ever have a chance in hell of using. This creates bloat, performance issues, overwhelmed staff, and mistakes. It leaves open lots of little holes, chinks in the armor. I’d liken it to expecting a baseball player to perform exceptionally at every position and in every situation. It’s not going to happen. Vendors need to offer solid training as part of their standard sale (not a tacked-on extra that always gets declined by the buyer).

It starts with staff, and staff start with smaller, scalpel-like tools. Only when staff and companies are “security-mature” will they get any below-the-surface value out of larger security tools.

Maybe over the long haul we’ll all (security and ops) get used to these huge tools in a way that lets us really use them properly. Oh, wait, but these vendors keep buying shit and shoving it in there, and releasing new versions that change crap that didn’t need changing. And IT in general is changing so fast (new OSes, new tech, new languages, new solutions) that these tools can’t keep up while also remaining useful. So…in my opinion, still doing it wrong. The difference between really useful security tools and crappy monolithic ones, as a kneejerk thought: good tools don’t change their core or even their interface much at all; they just continue to tack on new stuff inside (snort?). Bad tools keep changing the interface and expanding core uses, essentially resetting analyst knowledge on every yearly release.

Picked this article up via Securosis.

social media in the classroom

Saw an article linked from HardOCP about social media in the classroom (in Iowa, in fact), and I also read the accompanying forum comments. This is one of those situations where almost every comment is correct.

We, as an American culture, often seem to stumble when it comes to our strange drive to find the one right universal answer, even in a subject that really doesn’t *have* one single blanket answer, such as education (IT and security suffer the same problem). What about class size? What about subject matter? What about teacher personality? What about the extroverts? The introverts? The ones who actually need special attention? And so on. All of these factors, to me as a non-teacher, stress that every situation is going to be different.

It is exciting to see the role of technology such as tablets (and the internet) working into education, and I think you should try anything you can to engage as many students as possible in the way they respond to best, on an individual level.