.: May 2011 Archives
Picked up via @MikkoHypponen that some liveblogging was happening during a Sony press conference. I still won't complain about their response to all of this, but...
No CSO-level position? Weak. You're how big, with how much IP and data, and how big is your digital footprint?
Known vulnerability but wasn't known to you? What does that even mean? You think management is going to understand such vulnerabilities? Anyway, this means to me that either patching was broken or this was a reported hole in their systems that wasn't addressed properly.
Rebuild/move data center with better security? Sounds almost like they just outsource their operations...that or moving your physical location isn't going to help against a digital attack.
by michael 05.01.11 at 9:29 AM in /general
Catching up on ISC.SANS entries and I came across "In-house developed applications: The constant headache for the information security officer."
This is one of those things that I think is not only far easier said than done, but is also not limited at all to in-house apps. I've had as much headache, if not more, with third-party delivered apps, especially those custom made.
In-house apps suffer from a developer doing things any way they can get away with. The only protection is to be stringent with least privileges and access, and questioning every design requirement; basically make them develop inside a safe box, which of course gets in the way of innovation.
Out-of-house apps suffer from doing things any way they can that will get the job done with as little tinkering as possible. The only protection to this is to give complete knowledge of your requirements to the third-party so they design it to fit. Yeah, good luck with that.
So when shit hits the fan and a manager has already spent xx manhours on an application, guess what? Yup, the network/systems/security need to bend to accommodate, often creating exceptions and other administrative headaches. All because of poor up-front involvement...
...and expert level knowledge. (Yes, that's the crux of it all!)
This is why I am cynical about getting code to be better. It helps in large enterprises with mature development lifecycles, but I truly feel most shops don't have that, and their security/ops teams are manhandled by developers meeting business requests.
by michael 05.02.11 at 1:03 PM in /general
Preaching to the choir, but here is my illustration of how difficult PCI can be. Let's look at requirement 10.5.1: Limit viewing of audit trails to those with a job-related need. Let's also keep in mind the wording of 10.5.2: Protect audit trail files from unauthorized modifications. Essentially we're talking about log management.
(If you've worked in logs before, you can probably guess where I'm going to go...)
Let's say Bob uses LogRhythm as his choice of log management software, and he points his devices over to it. For simplicity, let's just say he has a Windows Server OS box that is under scope for PCI. Since the LogRhythm agent sucks up these logs and throws them at the master server, Bob submits only a screenshot of the user account list inside LogRhythm. Bob reasons that only these people can see the logs in the SEIM.
Well, wait a minute. The point of these PCI items is twofold. First, make sure unauthorized people can't view the logs and only those with a job-related need can (an important distinction, sadly), since logs may give details away or show an attacker what errors she generates. Second, make sure the attacker doesn't have a chance to modify those logs, or flat out destroy them.
As some vendors in this space will tell you, there are gaps here! The gap between when Windows gets the event and when it saves it to the event log. The gap between when the event is written to a local log and when LogRhythm's agent grabs it up (including when an attacker has been able to turn off the collector agent). Moving forward, what about the backup location of log files? The agent-to-master communication? (Better yet, let's talk syslog in terms of confidentiality and integrity!)
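For the integrity half of this, one classic close-the-gap idea is making the log stream tamper-evident. Here's a minimal sketch (my own toy illustration, not how LogRhythm or any vendor actually does it) of hash-chaining log lines so a later edit or deletion breaks every digest that follows:

```python
import hashlib

def chain_logs(lines, anchor=b"shared-secret-anchor"):
    """Hash-chain log lines: each digest covers the previous digest
    plus the current line, so editing or deleting any earlier line
    invalidates every digest after it."""
    digest = hashlib.sha256(anchor).hexdigest()
    chained = []
    for line in lines:
        digest = hashlib.sha256((digest + line).encode()).hexdigest()
        chained.append((line, digest))
    return chained

original = chain_logs(["login ok user=bob",
                       "login fail user=admin",
                       "service stopped"])
tampered = chain_logs(["login ok user=bob",
                       "login fail user=eve",   # attacker's edit
                       "service stopped"])
# digests match up to the edit, then diverge for every later line
```

An attacker who can rewrite the file but doesn't hold the anchor secret can't recompute a consistent chain, which is the same property you'd want across the agent-to-master gap.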
Another way to look at it is just to evaluate our audit logs in a way that unauthorized people can't just stumble upon them and/or edit them. If an attacker subverted a system and can intercept logs before they're gathered, that just might be an advanced case. If an attacker has popped Local System on a Windows/IIS box, can anyone still protect those logs completely? I think that's arguable. Likewise, someone may argue that more open logs like the Windows System and Application logs aren't in scope of this, and only the Security log is, which is more locked down by default in Windows. Perhaps... In cases like this, you at least have logs up to the point when the attacker gained enough rights to start hiding her tracks.
I'm not going to diss on "just enough security" since I think that's what we often preach anyway when we talk risk. I just wanted to illustrate that even slam dunk PCI items, when really analyzed deeply, are not always so easy to rush through.
Update: Also check out 11.5: Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly. This raises the obvious question, "What files should I monitor?" It's not an easy question, and most orgs/people will opt not to tell you unless you're paying them money to do so. So, do you purchase and deploy a FIM tool with defaults? What executables and dlls and files do you monitor? Unless you do the bare minimum of following vendor defaults, this won't ever be something you just do and forget forever...and that's before dealing with patch-related false positives, or a misguided desire to monitor *everything* just because you want to and then drowning in the noise...
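To see why "what files should I monitor?" is the whole ballgame, here's what the core of a FIM boils down to; a toy sketch (nothing like a real product's engine), where the hard part is obviously deciding the `paths` list you feed it:

```python
import hashlib, os, tempfile

def baseline(paths):
    """Record a SHA-256 baseline for a set of critical files."""
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest()
            for p in paths if os.path.isfile(p)}

def compare(paths, base):
    """Return files changed, vanished, or newly appeared since baseline."""
    now = baseline(paths)
    changed = [p for p in base if p in now and now[p] != base[p]]
    missing = [p for p in base if p not in now]
    added   = [p for p in now if p not in base]
    return changed, missing, added

# toy run against a scratch file standing in for a critical config
cfg = os.path.join(tempfile.mkdtemp(), "web.config")
with open(cfg, "w") as fh:
    fh.write("debug=false")
base = baseline([cfg])
with open(cfg, "w") as fh:
    fh.write("debug=true")        # the "unauthorized" change
changed, missing, added = compare([cfg], base)
```

Everything hard about FIM lives outside this loop: which paths to baseline, when to re-baseline after patching, and who reviews the alerts.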
by michael 05.03.11 at 2:07 PM in /general
If you have a remote developer who has access to a development database that actually has mostly production-level data inside it, would you know if that developer downloaded the whole database to their home system?
Would you know it if they put in a backdoor page on a production site that allows raw query access?
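One partial answer to the download question, assuming you even have database audit logs that record result sizes (the log format, names, and threshold below are made up for illustration): flag result sets that look like table dumps rather than normal app access.

```python
import csv, io

# Hypothetical audit log: timestamp, user, rows_returned, statement
AUDIT = """\
2011-05-03T09:12,appsvc,42,SELECT * FROM orders WHERE id = ?
2011-05-03T21:40,devbob,1840233,SELECT * FROM customers
2011-05-03T21:41,devbob,990412,SELECT * FROM cards
"""

def flag_bulk_reads(audit_csv, threshold=100_000):
    """Flag queries whose result size suggests a table dump
    rather than normal per-record application access."""
    hits = []
    for ts, user, rows, stmt in csv.reader(io.StringIO(audit_csv)):
        if int(rows) > threshold:
            hits.append((user, int(rows), stmt))
    return hits

hits = flag_bulk_reads(AUDIT)   # catches both late-night dumps
```

The backdoor-page case needs the same idea pointed at web access logs instead: odd endpoints, odd response sizes, odd hours.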
by michael 05.03.11 at 2:43 PM in /general
This is a useful exercise to do with oneself every now* and then: If I had xx million dollars right now, jobwise, what would I do? A few things spring to my mind...
1. Let's get this out of the way first: Nothing.
Retire and travel around the world to beautiful places and experiences. Play video games. Do whatever. Nothing too crazy.
Why not do it now? Duh, $$$
2. Open a store: combination arcade, video/PC gaming, tabletop gaming, culture.
Why not do it now? A store like this won't yield crazy margins and probably won't ever be profitable. But if I had the money to eat the losses, I can think of many, many, many other less interesting and fun ways to spend my life.
Ok, now let's get back to the real world for a bit.
3. Security consulting business.
Now, I'm not talking some generic consulting where you just regurgitate the latest NetworkWorld news blurb or Gartner reports on what products in the AV space to buy. I simply want to answer security questions and help someone improve their security. I'd want to have the ability to dive in deeper as well, such as evaluating weaknesses in an IDS/IPS deployment and configuration, making recommendations on staffing for technologies, code development processes, testing detection and response, what works and what doesn't in identity management. Not just top-level non-actionable things, but actually fingers-in-the-shit sort of work. Basically one step away from being on staff/contractor, so that the things I can talk about are also things that can be lived with, and any questions I can't answer (like how do I protect against XSS in this specific function?) I can spend the time to figure out the answer. I wouldn't want to be the consultant who says, "Classify all your data," and then walks away with a paycheck for dropping that load of shit on some CIO's desk when there are many other actionable items that can be tackled first. Even small things like the PCI item to discover all CC info on the network, would be fun, without just saying, "Buy DLP." Any down time would be spent as I do now: tinkering with whatever I want to dig my fingers into, and staying abreast of the community.
That's a huge paragraph, and I'm probably being more detailed than I need to be. Essentially like 1 part security analyst, 1 part architect, 1 part coder, 1 part auditor, 1 part pen-tester, 1 part manager, 1 part managed security service provider...
Goal: I love doing it (defense and offense), including the allowances for profits/convenience.
Why not do it now? Simply financial risk.
Why not do it now, part 2? I'm not much of a salesperson; I often understate my abilities rather than thinking I'm a qualified expert.
Why not do it now, part 3? Ok, fine, as Rothman excellently points out, I could almost do this now by taking on the consultant attitude. Other than not having a dedicated security role right now (general ops), I could be there.
I get that it's not all fun and games and there's tons of report-writing and analysis and screen-staring and delivering the same old report to hostile managers and fruitless scanning and frustration at squeezing 2 months of work into 72 hours on site. I get that. But I'd still love to be doing it.
Why not do it now? Re: an earlier point, I feel like I could use some "junior" time under a mentor/guidance.
Why not do it now, part dos? Honestly, I feel like I would suck for several years until I gained more experience and instinct, and I hate underdelivering. That would be a rough few years where financial security would be nice. But for someone who self-describes as having the logical/analytical/paranoid mindset that is nice for security, it's really just a matter of getting experience under the belt.
Any number of roles also come to mind or even my own managed security services firm, though I still am not sure of their value, ultimately. Even doing some auditing, but I also feel like that will never be profitable because of the corners so many other firms cut in order to do more and quicker audits while keeping customers happy (i.e. as much good news as possible).
As far as company size, I don't mind large companies all that much, or even just being a cog in a much bigger wheel, but I would love the family-and-friends feel of a smaller shop, where you can relax and be yourself in the office and not just have it be a stuffy 9-to-5 sort of environment. I've actually been in a start-up for a summer, and while it was ultimately a waste of time, I think, I did really enjoy the informality and get-it-done feeling. (The Penny-Arcade office atmosphere comes to mind...)
The ultimate goal that makes me happy, though, is helping someone better understand the security of their data, business, network, systems, and ultimately people.
* I actually just hit my 5-year anniversary at my current job. A bit of a milestone that causes me to sit back and think about where I am now and my next 5 years...
by michael 05.05.11 at 12:59 PM in /general
As I earlier mentioned as an afterthought, I just passed my 5-year anniversary in my current job.
1996-2001 college (5 years = changed studies halfway through)
2001-2002 yeah, the tech hiring bust!
2002-2006 first job
2006-2011 second job
Let me tell ya, time flies.
by michael 05.05.11 at 1:19 PM in /general
The NSA has published a nifty Best Practices for Keeping Your Home Network Safe fact sheet. This is a pretty good document which mixes easy-to-understand concepts with some more challenging ones. I really feel that people can get overwhelmed with the technical stuff, but usually do react favorably when given manageable challenges.
I'd like to have seen more emphasis made on unique, complex passwords and the importance of passwords, but these are still excellent bullet points to cover with people. Entire books can't cover the breadth of tips for good security these days, even for the layman....
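On the password point, the arithmetic is simple enough to show people directly: a randomly chosen password's strength is its length times log2 of the character pool (the example numbers here are mine, not from the NSA sheet).

```python
import math

def entropy_bits(pool_size, length):
    """Bits of entropy for a randomly chosen password:
    length * log2(pool size)."""
    return length * math.log2(pool_size)

weak = entropy_bits(26, 8)     # 8 chars, lowercase only: ~37.6 bits
strong = entropy_bits(94, 12)  # 12 chars, full keyboard: ~78.7 bits
# length multiplies in; pool size only enters logarithmically
```

Which is the easy sell to the layman: a longer passphrase beats a shorter "complex" password.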
by michael 05.06.11 at 2:49 PM in /general
Every now and then I have to give reasons against something like Skype in the enterprise. Here's a great reason why: 0day Skype messages.
Wormable. (via @hdmoore)
The point is not to waggle fingers at Skype (though you could, since they're closed and not very talkative), but to illustrate the risks inherent in any new technologies brought into the enterprise. (Not that I wouldn't waggle fingers at Skype anyway, since I believe something like Skype wouldn't be allowed to be so popular unless there were ways to tap into the voice streams.)
by michael 05.06.11 at 7:51 PM in /general
Saw this fly past on Twitter, and I can't remember who posted it. Anyway, an interesting article on "Why The New Guy Can't Code."
Getting straight to the point (of this article and others linked to from it): technical interviews tend to suck and hire only people who have accomplished something.
by michael 05.08.11 at 7:02 PM in /general
Harlan echoed some of my own feelings in a recent post of his
...I keep coming back to the same thought...that the best tool available to an analyst is that grey matter between their ears.
...Over the years, knowing how things work and knowing what I needed to look for really helped me a lot...it wasn't a matter of having to have a specific tool as much as it was knowing the process and being able to justify the purchase of a product, if need be.
Totally agree. This should apply to IT in general. If the tools replace knowledge, then you become a slave to the tool and its capabilities and weaknesses, and you lose the ability to ever work around the inevitable gaps in these tools.
by michael 05.09.11 at 10:55 AM in /general
In case someone has missed it, Backtrack 5 has been released. You can even score some free Infected Mushroom (goa/psytrance) songs (3 if you register, or 1 if you follow the link).
by michael 05.13.11 at 3:02 PM in /general
Saw an article linked from HardOCP about social media in the classroom (in Iowa, in fact), and I also read the accompanying forum comments. This is one of those situations where almost every comment is correct.
We, as an American culture, often seem to stumble when it comes to our strange drive to find the one right universal answer; even in a subject that really doesn't *have* one single blanket answer, such as education (IT and security suffer the same problem). What about class size? What about subject matter? What about teacher personality? What about the extroverts? The introverts? The ones who actually need special attention? And so on. All of these factors really, to me as a non-teacher, stress that every situation is going to be different.
It is exciting to see the role of technology such as tablets (and the internet) working into education, and I think you should try anything you can to engage as many students as possible in the way they respond to best, on an individual level.
by michael 05.14.11 at 8:33 AM in /general
ITWorld has an article, "Apps to stop data breaches are too complicated to use", which itself is a rehash of this article on The Reg. The article makes 2 (obvious to us anyway) points:
1. Security software is too damned complicated to use.
2. "...the tendency of customers to not use even the security products they've already bought."
I think many of these tools don't get used because they're complicated, require experts to feed-and-love-it and review logs constantly, and when they get in the way, business gets pissed. They cost money directly, they cost operational money, they cost CPU cycles, they cost frustration from users...
(I'm trying desperately, futilely to avoid the assertion in the second article: "...needs to change so that the technology can be up and running in hours rather than months..." Trying to meet that sort of goal is ludicrous...)
Strangely, the article finishes with this odd moment:
Security systems, intrusion protection, for example, are often left in passive mode, which logs unauthorized attempts at penetration, but doesn't identify or actively block attackers from making another try.
"It's a mature market – please turn it on," Vecchi told TheReg.
I'm not going to deny or accept that these are mature markets, but I will say most *businesses* aren't mature enough to just turn security shit on. There are 2 results when you "turn on" technologies to do active blocking or whatever you have in mind.
a. It blocks shit you wanted to allow.
This pisses off users, gets your manager in trouble, and requires experts to configure the tools and anticipate problem points, or extra time to figure it out (with the danger of some nitwit essentially doing an "allow all" setting).
b. It doesn't get in the way, but doesn't block much of anything by default.
I imagine far too many orgs leave it this way thinking they're safe, when in fact it's only blocking the absolute most obvious worm traffic and port probes (31337). In order to get it better tuned, you need experts who know what to look for and block.
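To picture the passive/blocking toggle concretely, here's what the difference looks like in hypothetical Snort-style rules (illustrative signatures only, not tuned production rules):

```
# Passive/IDS mode: log the probe against the elite port, let it through
alert tcp any any -> $HOME_NET 31337 (msg:"Probe on 31337"; sid:1000001; rev:1;)

# Inline/IPS mode: same signature, but actually drop the traffic
drop tcp any any -> $HOME_NET 31337 (msg:"Probe on 31337 blocked"; sid:1000002; rev:1;)
```

Flipping alert to drop is trivial; knowing which of your thousands of rules can safely be flipped is exactly the expert tuning I'm on about.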
The ideal is usually a state where you bounce between those two outcomes: you tune security to butt right up against the point where you're negatively impacting people, while still providing real protection. Unless you're a perfect security god, you will keep bouncing between those two states.
Business doesn't like that. They want to create a project with a definite start and finish, implement a product, leave it alone, and have it never get in the way of legitimate business.
This is bound to fail. It's the same concept of a security checkpoint or guard at a door: it's intended to *slightly* get in the way when something or someone is suspicious, and does so forever. This is why I have yet to buy into "security as enabler." Security is designed to get in the way; even security to meet a requirement so you can continue business: the requirement is the part that delivers the security and gets in the way.
There are companies that "get" security; but I guarantee they are also companies filled with employees who can tell plenty of stories about how security gets in their way on a daily basis, whether justified or not. That's how it is, and business hates that. Even something "simple" like encryption on all laptops is a pain in the ass to support.
To dive into a tangent at the end of this post, let me posit that security tools-makers are just plain doing it wrong. They too often want to make monolithic suites of tools that cover every base and every customer and every use case and every sort of organization. This creates tools that have tons of features that any single org will never ever have a chance in hell of using. This creates bloat, performance issues, overwhelmed staff, and mistakes. It leaves open lots of little holes; chinks in the armor. I'd liken it to expecting a baseball player to perform exceptionally at every position and in every situation. It's not going to happen. Vendors need to offer solid training as part of their standard sale (not an extra tacked on that is always declined by the buyer).
It starts with staff and they start with smaller, scalpel-like tools. Only when staff and companies are "security-mature" will they get any below-the-surface value out of larger security tools.
Maybe over the long haul we'll all (security and ops) get used to these huge tools in a way that we can start to really use them properly. Oh, wait, but these vendors keep buying shit and shoving it in there and releasing new versions that change crap that didn't need changing. And IT in general is changing so fast (new OSes, new tech, new languages, new solutions) that these tools can't keep up while also remaining useful. So...in my opinion, still doing it wrong. The difference between real useful security tools and crappy monolithic security tools, as a kneejerk thought: good tools don't change their core or even their interface much at all, they just continue to tack on new stuff inside (snort?). Bad tools keep changing the interface and expanding core uses; essentially resetting analyst knowledge on every yearly release.
Picked this article up via Securosis
by michael 05.16.11 at 9:44 AM in /general
I read this article about how Process Kills Developer Passion. I'm not a formal coder (more like a scripter), and I only work with a subset of coders (web devs), but I really believe this article hits several points squarely, particularly in how process can kill creativity.
The caveat is how this applies to most anything, really. Process can be necessary, for instance in documentation on how things work or why they are the way they are. Or to cover your own ass when requirements try to change in 6 months.
But the point remains: passionate people tend to do quality things; don't kill the passion.
I'd also point out something that has been permeating a few of my recent posts: There are not always going to be universal, blanket answers for everything!
You won't appease every developer you hire by having any one of a given coding process methodologies. You won't cover every situation with a monolithic security tool. You won't reach every student with a singular approach to learning. You won't block every breach...
Picked this article up via Securosis
by michael 05.16.11 at 10:30 AM in /general
A security researcher presenting at BSides-Australia demonstrated Facebook privacy issues by targeting the wife of a fellow security researcher without permission. Sounds exciting!
1. What he did was in bad taste, and maybe even unethical. Let's get beyond that...
2. We all know in security that you don't get shit done unless someone gets slapped in the face, hacked, or embarrassed/shamed. This is human and group psychology. So, in a way, this guy probably made more impression on people who might read this than would otherwise have happened. Sad, but true. Will it get out beyond the security ranks? Probably not, unfortunately.
3. It doesn't sound like anything embarrassing or harmful was actually found. I mean, seriously, are people uploading kinky or extremely embarrassing photos to Flickr/Facebook and truly not wanting them seen by anyone else? If so, you've already failed. (People who upload such content for public consumption and leave them up for future employers to harvest are a different sort of dumb.)
4. Intent does count for a lot in the perpetrator of a crime as well as the negligence of the victim, but Heinrich does have an interesting point: "'I have no ethical qualms about publishing the photos,' he said. 'They are in the public domain.'" Facebook may intend to not make them public, but they may not be doing enough. Honestly, I'd consider the end result of this to be public domain, yes. Sorry, fix your shit. Wishing and hoping and just saying so doesn't matter. (Yes, I know if I leave my door open and someone breaks in, it wasn't enticement, but still, shame on me...)
5. In addition, I'm not sure how pissed I'd be if it were my wife and/or kids. I mean, I've opted to put my photos up. As security aware as I am, I have the opportunity to know the risks. A real attacker is going to do much worse if they have it out for me, such as photoshopping images into even worse content, and so on. I'd rather have someone helpfully hack me and expose issues than a real attacker do so with vengeance, especially in something that doesn't harm me any more than a little public ribbing and feeling a little used, like being the brunt of a non-harmful joke. In another way of thinking, don't spend effort getting pissed over little things; know what's important in life.
At some point in security, the "kid gloves" do have to come off, if you want to get shit done. And we're all a little "grey hat" every now and then...or at least Bob is...
(Snagged off the InfosecNews wire.)
by michael 05.18.11 at 8:27 AM in /general
(I should name this: how I can't type SIEM and keep typing SEIM...) Thought I'd ramble about SIEM for a moment (as I'm also in the midst of waiting on a report to spin up in my own SIEM), sparked by Adrian Lane's post, SIEM: Out with the Old, which also channels Anton Chuvakin's How to Replace a SIEM?
Adrian echoed some rhetorical questions that I wanted to humbly poke at!
“We collect every event in the data center, but we can’t answer security questions, only run basic security reports.” -
That probably means you got the tool you wanted in the first place: to run reports and get auditors off your butt! More seriously, this is a good question as it somewhat illustrates a maturing outlook on digital security. I'd consider this a good reason to find a new vendor. That or your auditors are worth more than you're paying them, and asking harder questions than usual. Good on them! (Though I'd honestly hope your security or security-aware staff are asking the questions instead...)
“We can barely manage our SIEM today, and we plan on rolling event collection out across the rest of the organization in the coming months.” -
“I don’t want to manage these appliances – can this be outsourced?” -
You want to...outsource...your...security...? You may as well just implement a signature-based SIEM and forget about it, because that's the value you'll get from a group that isn't intimately aware of or caring about your environment. Sorry, I would love to say otherwise and I'm sure there are quality firms here and there, but I just can't bring myself to do so. It is hard enough to manage a SEIM when you know every single system and its purpose.
“Do I really need a SIEM, or is log management and ad hoc reporting enough?” -
That's a good question! You'd think the answer goes along the lines of, "Well, if you want it to do work for you, get the SEIM, otherwise you'll need to spend time on the ad hoc reports." But really, it's the opposite: you need to spend time with the SEIM, but the reports you likely can poop out and turn in to your auditors. This might also depend on whether you do security once a quarter or want to do it as part of ongoing ops. It amazes me that people know about this question, have it asked to their face, but then go about life in the opposite direction.
“Can we please have a tool that does what it says?” -
Probably the most valid question. The purchasing process for tools like this is too often like speed dating, when really it should be about doing multiple, intimate dates with several candidates; you might even spend some memorable moments together! With such an advanced tool like SIEM that has an infinite number of ways it can be run and slice an infinite number of types of logs, you can't believe what the marketing team throws at you. Hell, you can't even listen to what the purchasing manager says either. You need the people with their hands in the trenches to talk to the sales engineers and get real hands-on time. Nothing can fast-track that other than some real solid peers (industry networking! oh shit!) who can give you the real deal information on living with a tool.
The biggest issue in this? No SIEM reads and understands every log you throw at it, especially your internal custom apps!
No matter what the sales dude says! (Some will read anything you send in, but they'll lump the contents into the "Log Contents" field, rather than truly parse or understand it.)
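What "truly parse" versus "lump into Log Contents" means in practice, as a toy sketch (the pattern and field names here are made up, not any SIEM's internals):

```python
import re

# Hypothetical pattern for one in-house app's log line; anything the
# pattern doesn't match falls through as an unparsed blob, which is
# roughly what a SIEM does when it dumps a line into "Log Contents".
PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<level>[A-Z]+) user=(?P<user>\S+) msg=(?P<msg>.*)")

def parse(line):
    m = PATTERN.match(line)
    if m:
        return m.groupdict()          # fielded: searchable, reportable
    return {"raw": line}              # lumped: full-text search only

good = parse("2011-05-18T14:02 WARN user=bob msg=3 failed logins")
blob = parse("some custom app wrote whatever it felt like")
```

Fielded events can drive reports and correlation rules; lumped blobs can only be grepped, and no vendor ships patterns for *your* custom apps.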
“Why is this product so effing hard to manage?” -
Well, I've not seen a SIEM that is *easy* to manage, so who is in the wrong here?
Anton had this awesome paragraph:
By the way, I have seen more than a few organizations start from an open source SIEM or home-grown log management tool, learn all the lessons they can without paying any license fees – and then migrate to a commercial SIEM tool. Their projects are successful more often than just pure “buy commercial SIEM on day 1” projects and this might be a model to follow (I once called this “build then buy” approach)
I think this is a great way to go! But I'd caution: the teams that have the time and skill to roll their own or run open source tools are also the ones that will have the time and skill to manage their commercial solutions. However, the real point is valid: You'll learn a ton by doing it yourself first, and can go into the "real" selection process armed with experience. To build on the analogy above, you've lived with someone for a while, broken up, and now know what is *really* important in a concub...I mean, partner.
by michael 05.18.11 at 2:30 PM in /general
by michael 05.25.11 at 8:40 AM in /general
I'm not sure if I jotted these notes down here or not, but wanted to move these from a napkin to something more permanent.
What is the hardest part of security? My thought: Telling someone they're doing it wrong when they don't know how to do it right, and you can't explain it properly. The more technical, the worse it is?
Two examples: First, someone makes a request that is just kinda dumb and gets denied. They come back with, "Why?" And you have to figure out why the value of saying no is higher than the value of just doing it, or what it would cost to accomplish the request while maintaining security and administrative efficiency. (i.e. You want *what* installed on the web server?!) This can be highly frustrating in a non-rigid environment. It's also the source of quite a lot of security bullshitting.
Second, a codemonkey makes poor code and you point it out. Codemonkey asks how it should be done. If you're going to point it out, I'd really kinda hope for some specific guidance appropriate to the tech level of your audience. This brings up the pseudo-rhetorical question: Should you be pointing out poor code if you don't know how to give suggestions on fixing it? (Answer: depends. On one hand, don't be a dick. On the other, anyone should be able to point out security issues, otherwise most would never get pointed out! It's extremely nice when someone *can* help those with questions, though, with actionable answers beyond just "go read OWASP.")
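For one example of an actionable answer (sketched in Python with sqlite3 purely for illustration), here's the classic concatenation-versus-parameters conversation with a codemonkey:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES ('bob', 'dev')")

name = "x' OR '1'='1"   # hostile input

# Poor: string concatenation lets the input rewrite the query,
# so the always-true OR clause returns every row
injected = db.execute(
    "SELECT role FROM users WHERE name = '" + name + "'").fetchall()

# Better: a bound parameter keeps the input as data, not SQL,
# so the hostile string matches no user and returns nothing
safe = db.execute(
    "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

Showing both results side by side teaches more than "don't concatenate" ever will.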
And here's a hypothesis: You're not doing security if you're not breaking things, i.e. pushing boundaries. Follow-up: Security pursuit breaks things unless you have expert knowledge and experience.
by michael 05.25.11 at 3:50 PM in /general
Microsoft has recently released their Microsoft Web Application Configuration Analyzer v2.0 tool. This is such a straightforward tool to use, and includes rather clear checks and fixes, that it's really not acceptable to *not* run something like this, especially if you run Microsoft IIS web servers or SQL instances.
The tool has a nice array of checks when pointed against an IIS box, and even does decent surface checks against SQL. While this tool does include "web app" in the name, I don't think it goes much beyond inspecting a site's web.config file on that front. It also requires Microsoft .NET 4.0 on the system you install the tool on, and predictably needs admin rights on any target systems it scans. If you're curious about any checks, they're pretty clearly spelled out. Also, if you want to suppress any checks because they don't apply, you can do so. The report then mentions the presence of suppressions (yay!), and you can even take off the suppressions after the fact, since the tool still does the checks but just doesn't include them in the end tallies.
This does make a great companion scan tool to add to your toolbelt for appropriate systems, even if it has a herky-jerky interface.
As a sort of cautionary piece of advice, I wouldn't be totally surprised if some organizations request this tool be run by potential vendors/service providers whose systems meet the tool's criteria. Which means you hopefully will have run this tool before such a request! It's much more palatable to request something like this as part of an initial security/fit checkbox when it is an official Microsoft tool. Just sayin'...
by michael 05.27.11 at 9:36 AM in /general
Cringely has a strange article which continues the RSA SecurID attack mystery: InsecureID: No more secrets? I can't say I've ever read Cringely before, so maybe he's just some tech commentator with no real insight here other than a wide following and sensational, wild speculations... (After writing this, scanning his recent articles pretty much shows me he's just a tech blogger and that's it. Yes, I mean it when I say that's "it." And yes, I'm being ornery today and particularly spiteful in my dislike of tech commentators dipping a crotchety toe into deeper discussions than they're suited for.)
It seems likely that whoever hacked the RSA network got the algorithm for the current tokens and then managed to get a key-logger installed on one or more computers used to access the intranet at this company.
Wow, that's quite the leap in logic there (on multiple fronts), especially since RSA hasn't revealed what was pilfered from their network. Common discussion tends to speculate that the master list that maps the token seed list to organizations that are issued those tokens (probably keyed by serial number) is the most likely divulged piece.
How would a keylogger assist in this? Well, first, a keylogger alone could be enough to divulge login credentials, although any captured credentials are quite ephemeral when using a SecurID token. Second, it could reveal any static PINs and usernames used. I assume the PIN is the "second password" mentioned in the article. *If* (and that's a big if) the attacker was the same one who *may* have the seed list and algorithm, that attacker could theoretically match up that user and their fob based on keylogged information.
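RSA has never published SecurID's internals, so purely to illustrate the mechanics being speculated about, here's a toy time-based token in the TOTP style: seed plus a synced clock fully determine the code, which is why a stolen seed list plus a keylogged static PIN is such a scary combination.

```python
import hmac, hashlib, struct

def token(seed, t, step=60):
    """Toy time-based token (NOT RSA's algorithm): HMAC the current
    time window with the seed and reduce to six digits."""
    window = struct.pack(">Q", int(t) // step)
    mac = hmac.new(seed, window, hashlib.sha1).digest()
    return "{:06d}".format(int.from_bytes(mac[-4:], "big") % 1_000_000)

seed = b"stolen-seed-record"   # what a pilfered seed list would yield
# anyone holding the seed computes the same code for the same window
assert token(seed, 0) == token(seed, 59)
# the code rolls over at the next window; a static, keylogged PIN
# would be the only remaining piece of the full passcode
```

The fob's only secret is the seed; there's no per-use randomness, so possession of the seed list collapses the "something you have" factor.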
Does this mean "admin files have probably been compromised?" No; that's an even bigger leap in logic. Possible, sure. But only with the correct access and/or expanded control inside the network. Hell, I'm not even sure Cringely knows what he means by "admin files."
Of bigger concern is how a keylogger got installed on such a system long enough to cause this issue. Granted, something was detected (though I suspect it was *after* a theft or attempted VPN connection), but being able to spot such incidents on the endpoints or in the network itself should be a big priority.
by michael 05.27.11 at 12:43 PM in /general