June 2011 Archives
While this article on the Michaels breach
is nothing special, I did like the very opening paragraph:
...a sign that strong transaction monitoring and behavioral analytics are the best ways to curb growing card-fraud schemes.
Remove the word "best," and I really like the application of a paradigm like this in many aspects of digital security.
Of course, none of that level of monitoring and analysis is really new in concept, but organizations still struggle both to realize this approach and to execute it effectively with a blend of technology and talent.
It's important to note that without customer reports, even analysis can't tell a bad transaction from a good one....
by michael 06.01.11 at 3:32 PM in /general
Jarrod Loidl has an interesting discussion on the topic Management vs Technical Career
. This is definitely something to keep in mind as a career moves forward, and I think he ends up hitting most of the milestone points in such a thought process. It's a long post, but it keeps firing on all cylinders even at the end.
I really like the ending tandem points of, "do what you love," and (in a Wolverine voice), "be the best at what you do." Combine that with, "don't be an ass," and you really have a simple guide to work and life.
If I were to look at my own lot, I'd say it certainly is hard to keep current with the skillsets. I remember starting out my career around Windows XP, and I still feel like I know it inside and out. Windows Vista/7? I fully doubt I'll ever be as intimate (then again, I don't do desktop support right now). On the managerial side, I feel like I have excellent organization, attention to detail, a high degree of problem-solving/troubleshooting skill, and the ability to make accurate decisions quickly (backed by confidence in those skills) when I need to get things done. My downside is that I'm not entirely a people person. Oh, once I get going, I'm fine, but it really takes significant effort and time for me to find my voice socially in a given group, as any introvert is likely to echo.
That said, at this point in my life and career, I could probably swing management, but I get far more enjoyment out of the technical side of the equation, for a variety of reasons that I won't dump out here quite yet.* Management is one of those things I accept I'll do someday simply because of the decision-making support and analysis skills I have, but I have the luxury of allowing that "someday" to not be tomorrow quite yet. Perhaps if I snag some security consulting gigs that would be enough... :)
The end thought is one Jarrod mentioned: At least spend the time to do this reflection on who you are, what you are, what makes you happy, why it makes you happy, and so on. Too many people never ask these introspective questions, and they should.
* Updated to add this: This isn't to say I wouldn't actually find myself happier in the right managerial position. It's hard to tell since I've not been in a situation other than a team lead/senior sort of role. While I might not look at managerial want ads, that's not to say I'd shy away from the right one whose doors opened for me.
by michael 06.03.11 at 8:30 AM in /general
Lockheed Martin recently suffered a hacking incident. In the days that followed, it was reported by the NY Times that the attack was indeed linked directly to a previous RSA hack
that stole what is still unidentified information from RSA. CNET has posted more information and links
and Wired has a blurb about L3.
As I mentioned on Twitter, how much better would we all be if RSA had divulged full details to the public or affected parties? Were they just going to wait and hope nothing came of whatever was stolen from them?
Of course, with something like this the worst should be assumed, but that's not a great strategy to tell your boss or use to formulate your budgets and risk postures. No one assumes the worst; if they (or we) did, we'd have far better security initiatives...
I understand they are certainly fixing whatever was broken and replacing what needs to be replaced, but it's still irresponsible in my book.
by michael 06.06.11 at 9:40 AM in /general
Go read Gunnar's quick piece (and the comment) about Jay Jacobs's insight on Shannon's Maxim (can I make this sentence more awkward?): The enemy knows our systems, but the good guys don't.
Even looking at it from the network perspective, the enemy knows your firewall rules, yet so many internal folks do not. It sucks to look at a firewall and ask why rule #267 is present, only to have no one able to answer.
Or to have a developer look at the security person who wants security, but the developer has no idea how to fit that in without potentially breaking everything else, and no one else to talk to about it. As Jay says, "...people aren’t motivated to evaluate all the options and pick the best, they are motivated to pick the first option that works and move on."
(Coders/developers are notorious for this, but so are sysadmins and users as well!)
Essentially, security is often covertly treated as the expert in...everything internal. Which really is a tough requirement to ever meet. Really, the organization needs to know its own stuff intimately, before the enemy does. This is still why I consider pen-testing activities valuable: they often expose exactly what an attacker is learning that an organization hasn't.
As Marcus in the comment to the linked article essentially says, I'm sure the revolving door of (questionably skilled) outsourced and contractor IT doesn't help at all.
by michael 06.07.11 at 9:17 AM in /general
Brian Krebs has an article up on the case of Patco vs Ocean Bank
. The implications of this case could have important industry ramifications as the key point of contention is what technically constitutes "good enough" security from the bank's perspective.
I don't suggest reading too many of the comments. This is a very delicate and not-clear situation, which many commenters don't seem to grasp very well. Some of the angst centers on whether the bank really had 2-factor authentication, and some on the outdated guidance from the FFIEC.
Side note 1: I've not read the actual case file, but from Brian's article, I'd say Ocean Bank isn't using 2-factor authentication.
Side note 2: Always asking security questions on every transaction reduces the security value? Actually, sort of, when your attackers are employing keyloggers and you normally don't have transfers that trigger the asking of those questions. Then again, any attacker who runs into those barriers will just keep lowering their transfer amount until they're under the threshold. Hopefully that would trigger some fraud alerts...
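To make the side note concrete, here's a minimal sketch (my own hypothetical helper names, not anything from the case) of the kind of fraud check that might catch an attacker repeatedly dialing their transfer amount down to stay just under a review threshold:

```python
from collections import defaultdict

REVIEW_THRESHOLD = 10_000   # transfers at/above this trigger security questions
NEAR_FACTOR = 0.9           # "just under" means within 10% of the threshold
MAX_NEAR_MISSES = 3         # this many near-threshold transfers raise an alert

def flag_structuring(transfers):
    """Flag accounts with repeated transfers just under the review threshold.

    `transfers` is an iterable of (account_id, amount) pairs.
    Returns the set of account ids that look like threshold-dodging.
    """
    near_misses = defaultdict(int)
    flagged = set()
    for account, amount in transfers:
        if NEAR_FACTOR * REVIEW_THRESHOLD <= amount < REVIEW_THRESHOLD:
            near_misses[account] += 1
            if near_misses[account] >= MAX_NEAR_MISSES:
                flagged.add(account)
    return flagged

transfers = [
    ("acct-1", 9_500), ("acct-1", 9_800), ("acct-1", 9_900),  # dodging the limit
    ("acct-2", 120), ("acct-2", 9_950),                       # ordinary activity
]
print(flag_structuring(transfers))  # → {'acct-1'}
```

A real fraud engine would fold in velocity, destination history, device fingerprints, and so on; the point is just that sub-threshold behavior is itself a signal.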
Side note 3: At some point consumers (businesses) need to put in their own diligence by doing their banking on trusted systems. If you hire a courier or some other proxy to run to the bank and make transfers for you, and that person ends up skipping town with extra money because they inflated the transfer amount and sent it to themselves, do you blame the bank or your own hiring practices/trust? In this way, computers are a sort of proxy; granted, a proxy that answers to anyone with the right handshake, so to speak...
Side note 3a: Unlike the "simple" maintenance and safety and security of a car or other vehicle, the care and safety and security of a computer system or network is still going to be far above the head of most consumers and workers. Telling people they need to put forth their own effort in maintaining a trusted computing platform is often going to be met with tears of anguish and outrage...as they then turn their eyes to app/OS vendors and their security track records...or to the government's lack of "internet jurisdiction" in keeping foreign attackers out or at least under threat of arrest...and on and on.
Side note 3b: All of this ends up raising the question of what is reasonable in a highly technical, globally connected digital world. I'm not sure anyone will ever be happy with where the decisions fall in such a discussion.
by michael 06.08.11 at 10:29 AM in /general
I don't yet have a full-on crush on Gunnar, but a bit in a post/interview of his reminded me of a concept I like to drive home. In the third part of his Brian Chess interview
, with my emphasis:
Ever wonder why so many programmers are so bad at security? Part of the problem is that most of them don't know they're bad. Generally speaking, people are bad at assessing their own strengths and weaknesses (read this). That means you need to seek objective measures of your work. If it doesn't sting sometimes, you're not doing it right.
In the (too distant) past I've done some weight-lifting, and very recently I've taken up trying to get into the running habit. Even in those realms, I darn well know that unless it hurts (in a good way), you're not making progress. This is the same in self-service car repair. This is the same in learning a new game. This is the same in IT ops (we learn the most when shit is broken). This is the same in security.
I have this "in-progress" unpublished post (for like the past 2 years!) that is just this beefy list of security "laws" or other rules for those of us in security. One of the most recent ones I added was something similar:
Doing security right means finding that happy medium where you are bouncing in between transparent security and breaking things. For instance, tuning the perfect firewall rule means tightening it down until it breaks something, then loosening it just enough to make it work again.
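That tighten-until-it-breaks, loosen-just-enough loop can be sketched in code. This is purely illustrative (my own invented helper, not any real firewall tooling): given the flows you actually observed during a monitoring window, derive the minimal allow-list, which is exactly the rule you'd land on after tightening past the breaking point and backing off:

```python
def tightest_rules(observed_flows):
    """Derive the minimal allow-list that still permits observed legitimate traffic.

    `observed_flows` is an iterable of (src_ip, dst_port) pairs seen while
    monitoring. Anything outside the returned set would be denied -- i.e.,
    the rule is as tight as it can be without breaking what actually works.
    """
    allow = set()
    for src, port in observed_flows:
        allow.add((src, port))
    return sorted(allow)

# A monitoring window's worth of legitimate traffic (made-up addresses):
flows = [("10.0.0.5", 443), ("10.0.0.5", 443), ("10.0.0.9", 22)]
for src, port in tightest_rules(flows):
    print(f"allow from {src} to port {port}")
```

In practice you'd aggregate sources into CIDR blocks and account for infrequent-but-legitimate traffic the window missed, which is exactly where the "loosening" judgment call comes in.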
Brian Chess wasn't necessarily meaning it strictly that way, but we all certainly need to be ok with having something sting now and then, because that is when we get better and learn, and also reinforce that we're not just sitting in some ignorant funk where everything is wrong and we just don't know it. "Wow, our logs sure are clean," glosses over the fact that your logging has been broken for weeks.
by michael 06.09.11 at 2:14 PM in /general
In a twist of irony, I'm also stealing this preso link from Gunnar's interview with Brian Chess: How To Steal Like An Artist, by Austin Kleon.
Not a ton of this is new, but it's a new way to look at things, and almost every item pertains to life/career outside of art.
Just in case the site ever dies, I thought I'd also reproduce the raw list of steps here. But this is certainly no substitute for checking out the original post and probably the book. Anything in [square brackets] is me adding to the item, either my own thoughts or the author's subitems.
1. Steal like an artist. [So many subitems!]
2. Don't wait to know who you are to make things. [Just do it!]
3. Write the book you want to read.
4. Use your hands.
5. Side projects and hobbies are important.
6. The secret: do good work and put it where people can see it. [Give away your secrets!]
7. Geography is no longer our master.
8. Be nice. The world is a small town. [Matches a core philosophy of mine.]
9. Be boring. It’s the only way to get work done.
10. Creativity is subtraction. [Concentrate on what's important]
by michael 06.09.11 at 3:24 PM in /general
by michael 06.10.11 at 9:04 AM in /general
(I've sat on this for half a day, but wanted to post it since there isn't enough blog-to-blog talk these days. I may be wrong, I may have good points. Some people learn to shut their mouth over the years, but others of us are learning to actually speak up! So, if nothing else, this is therapy for me! Oh, and I wouldn't even bring this post up if I didn't respect Jay and his blog and his thoughts.)
Go read the post "Yay! We Have Value Now!"
by Jay Jacobs. Then look back over here. Let's put on our diving suit and...err...dive in!
I really should say I hate the idea of saying, "I told ya so." It's insulting, demeaning, whining, and so on. But that doesn't mean I like the situation that may lead to thinking, "I told ya so," and then effecting some change without saying so much. Truly, we will actually never get anywhere if we don't get business leaders to say, "We were wrong," or "We need guidance." These are the same results as, "I told ya so," but a little more positive, if you ask me. But if leaders aren't going to ever admit this, then we're not going to get a chance to be better, so I'd say let 'em fall over.
Besides, you could do security action _____, and I'll always be able to someday say, "I told ya so!" It's totally an attitude thing. Moving on! :)
(Aside: I think the whole topic of 'secure enough' or 'there is no state of security' and such is akin to that old Usenet idea of every argument devolving into a Hitler analogy at some point. I find that there is not much we can discuss in security without eventually hitting that point, either implied or explicit.)
RE: Problem #1: It assumes there is some golden level of “secure enough” that everyone should aspire to.
- I personally don't see that assumption. Pointing out Lulzsec popping other people simply means those businesses had some deficiencies and they were attacked by some criminals. Criminals out for a good time and laughs. Some of those guys certainly have some skills, but lord help those companies who may have had even more talented and insidious attackers finding and leveraging those weaknesses first. (Then again, maybe a business would prefer to be attacked by their competitors as opposed to hooligans out for laughs?) Still, I'm not sure what Jay meant when he turned this into a loss of credibility.
I would propose there are two broad types of security engagements. The first is where a business leader wants reasonable security advice. This almost always calls for metrics, defensibility, economics, and business process/culture considerations. In other words: what should we, as a business, be doing to be secure? The second is where a business leader wants to know what security improvements there are to make. Some of them might end up not being reasonable, but that's for the business leader to decide. The latter probably won't ever lose credibility in the face of public digital hacks. But they will walk out of board rooms rejected on a more regular basis.
RE: Problem #2: Implies that security people know the business better than the business leaders.
- Business leaders are talented; I'll make no effort to dispel that blanket belief. However, if you want to know the real value in a business or how the business is doing, talk to the accountants. If you want to know business process, most likely you'll talk to the IT teams. If you want to know the digital risks, you talk to the security teams. The trend here is that while business leaders (at the risk of not defining "business leaders") can be very good leaders, managers, entrepreneurs, salespeople, and strategic thinkers, that does not necessarily mean they have any grasp on digital risks or what that might mean, or how their important assets are being protected (either physical or digital). They might not even know of some of the damaging assets such as the database storing CC/PII or that their secret formula is on a test server somewhere. Is that bad? No. It just means they're not omniscient.
Does that mean IT or accountants or security know the business? Nope. But that doesn't mean they know the business less than business leaders either. Mostly, I agree with Jay. I think security gets a little too loud with the, "I have your data, your business is over!" ranting. It's possible the business owner knows the cost of security, has a good idea of the cost of any realized risks, and has actually chosen a spot on the balancing beam that is security. Maybe that means there is risk left open, and some attacker leverages that opening and pops data off into the public world. Sometimes, the end result is, "So what?" Some people may lose their jobs, media might make things fun for a while, and you might even get to talk in front of a Congressional committee that has some ideas on making some digital regulations (how's that for an attack effecting change?). But it truly is just a part of doing business and an incident for leadership and PR to handle and move on through. Now, did Sony understand that hosting a porous web application might, if attacked, result in not only losing the customer data it houses but also extended periods of downtime and public smearing? Perhaps.
Still, the end point is that this isn't about knowing the business more than the business leaders, but just knowing the digital security posture more. Refer back to my two types of security engagements above. It's hard to tell a security geek to stop being a security geek for a bit and be reasonable. Not because of a character flaw, but because of our passion and desire and (often) knowledge. Now, I bet Jay would agree that the security pro who can have that passion but also temper it into "reasonable security" is golden!
RE: Problem #3: This won’t change most people’s opinion of the role of corporate information security.
- Diving into this point is not going to be fun, and Jay probably knows it since he admitted the problems in this point. I'll try not to linger...
Statistics just flat out BEG to be manipulated and presented in strange ways that may paint things in an opposite light. How many of those 200 million domain names are even web sites? That have significant enough value behind them to ever even begin to have the potential to be the face of a "large breach?" Not 200 million. And so on. How many were public? (Of course, won't this mean there is such a thing as "good enough" security? Oh shit, I did it...)
Strangely, Jay sort of makes the opposite point from the one he set out to make: "We need more tangible proof to really believe in hard-to-fix things like global warming: we fix broken stuff when the pain of not fixing something hurts more than fixing something."
Wait, what? Watching Sony's network get made into Swiss cheese isn't tangible proof enough? I'm being an ass there, since I think Jay means it needs to happen to us directly.
Here's my favorite security analogy (cue the emotional language!). If you find out and hear the stories and see the shaking and tears of your neighbors who have been victims in a string of home break-ins and theft, will that have any bearing on your own home security posture? Even if for just the short-term?
On the other side of the coin, if you haven't heard a lick about theft or suspicious persons or strange things going on around your neighborhood, I wonder how many such residents will see their security posture loosen over time as complacency sets in.
One might argue that perhaps there is crime there, and they aren't targets, and even zero
effort/time/money spent on security would have the same results. Perhaps! That's where I get into the whole Security Gamble issue. You might have zero security, and be a victim. You might have 95 units of security, and be a victim.
But I would say that in the face of a rash of incidents in your neighborhood, you'll take at least a superficial look at your own posture, and maybe raise just the right questions for a security pro to actually effect some positive difference.
Jay's core point, though, is still true, as any security or even IT ops professional can attest to: shit usually doesn't get fixed until there's a problem. That includes flaky servers, poor code, insecure practices, database hashing, vuln scans, and so on. But I'd still say public scapegoats do have a positive impact.
RE: Problem #4: Companies are as insecure as they can be (hat tip to Marcus Ranum who I believe said this about the internet). To restate that, we’re not broken enough to change.
- I honestly don't have much I can say about Point 4. :) I pretty much agree, but it's also general enough to be somewhat unarguable. The one thing I can say: I think Sony is making changes due to these incidents. *shrug* Just sayin'...
By the way, I am entirely neutral with this whole LulzSec thing; but I will certainly use any opportunity I can get to promote security initiatives in people or organizations that I may influence. Yeah, pointing out the inevitable insecurities in others is about as evil and head-shaking as any other FUD, but security is ultimately what we're asked to do.
by michael 06.10.11 at 2:53 PM in /general
The venerable Rothman has started a fire (see what I did there?) talking about truth and (dis)information.
I normally don't dwell on hacker or criminal or even hacktivist groups very much (I prefer to keep my head down), but I'm not surprised at all about the current state of affairs that Rothman speaks of, for two very broad reasons.
First, there's distrust amongst criminals.
Let's be clear: there are two broad types of hacker groups, those that break laws and those that are really just nuisances. Sort of like home thieves vs. train car graffiti vandals. Once you start breaking laws, you get into a whole new game where you are collectively wanted people with penalties over your heads. And not everyone will have the same fortitude and acceptance of those risks. This makes actual criminal groups very unstable and distrustful of each other. You never know when someone is LEO, or has been caught and made a deal to be an informant, or if you'll just plain overstay your welcome and become another loose end to tie off. You also never know when you'll be screwed in some way or other.
I would venture to say once a group breaks laws, they've crossed that grey ethical line, and escalating from there isn't so hard. Somewhat like breaking into your first business or beating that first person to a bloody pulp; doing it a second time is far easier. As is escalating. It only takes one splinter group (cell? wut?) or even person to escalate things for the whole group, which means even more distrust.
Second, it would be folly for law enforcement and even governments to *not* have their undercover fingers in these sorts of groups
for a variety of reasons: Sow discord, find criminals, discover incidents no one's reported. But also to gain information into how these groups work, what their tech and methods are, and also gain assets. The latter goes for local agencies as well as foreign, as they attempt to gain talent and bodies and knowledge. Even people like Brian Krebs are involved as observers...
I would even argue that large corporations may have some interest in keeping their noses in the underworld like this, on a purely observational, non-active level. Then again, I doubt many orgs even get that far, as securing their own networks and people is tough enough. (I still have these moments where I think of the worlds painted in Back to the Future 2 or the Shadowrun universe, where corporations are a dominant force and quite involved in the underworld.)
Now, do I think that someone like NATO has informants at all levels? I'd guess not directly; maybe via proxy when looking at cooperation from member nation agencies and even then counting their tenuous informants.
What I do guess is there are plenty of less-skilled persons in these hacker groups that make for great headlines when they get pinched en masse because they're kids sitting at home making poor security decisions and being traced easily. The more popular they are, the more hangers-on there will be, and collectively the less safe they'll be.
by michael 06.13.11 at 12:49 PM in /general
This is going to be preaching to the choir, but I don't get to link enough to Hoff these days (my head is not up in the cloud, unless you count my 90% virtualized environment), and he gave an easy-to-agree-with post about WAFs:
How many infrastructure security folks do you know that are experts in protecting, monitoring and managing MBeans, JMS/JMX messaging and APIs? More specifically, how many shops do you know that have WAFs deployed (in-line, actively protecting applications not passively monitoring) that did not in some way blow up every app they sit in front of as well as add potentially significant performance degradation due to SSL/TLS termination?
I'd add to this even further. What team should be involved in the WAFs? Developers. But which team does this duty get shouldered upon because of the ease of tying a WAF onto the chokepoint appliance in the network? The network team. Maybe even the web systems team. Which team has no idea how to tune a WAF? All of the above. Why? Because it spans layers that none of those teams are wholly familiar with, from data to app to protocols to network. Maybe QA should be the answer, but I don't have the feeling that many shops value their QA that way...
Of course, Hoff's last statement is really a business cultural issue, and unfortunately the only way anyone has a chance to learn how to get any value out of a WAF. Sure, put it in a test environment, but how many developers, infrastructure, or security guys have the time to interrupt the others as they bounce around in the WAF? I don't know about all sysadmins out there, but even "test" environments for developers are "production" in my eyes. If I make a change and bring a test server down for an hour, I get developer managers giving me the hairy eyeball...
(Side note: Want to gain some real good experience? Adopt a real, honest, working WAF, get permission to implement and toy with it, and go to town. Figure out what kills apps, figure out what attacks look like to the app you're protecting, and figure out how to block them. Sure, this might mean a whole "sub"-project in learning attacks on web apps, but all of this is great practice to gain some experience in an area I think is still sorely lacking in real experts beyond pointing WebInspect at a URL. You can do this at home, but I feel like you're bounded by your knowledge and use-cases, and not all the things that happen in day-to-day dev/qa/network/internet traffic. But if you *do* do it at home, start with all those vulnerable-web-apps-in-a-box setups and learn the attacks. Then put a WAF in front and block said attacks...hmm maybe I should do that again!)
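The "figure out what attacks look like, then block them" loop from the side note can be sketched in a few lines. This is a toy stand-in for a WAF rule engine, nothing like a real one (ModSecurity rules, for instance, are far richer), and the signatures here are deliberately naive, just to show the shape of the exercise:

```python
import re

# Deliberately naive signatures -- real attacks (and real WAF rules) are far
# more nuanced. These are illustrations, not a production rule set.
ATTACK_PATTERNS = [
    re.compile(r"(?i)union\s+select"),   # classic SQL injection probe
    re.compile(r"(?i)<script\b"),        # reflected XSS attempt
    re.compile(r"\.\./"),                # path traversal
]

def inspect_request(query_string):
    """Return (allowed, reason) for a request's query string."""
    for pattern in ATTACK_PATTERNS:
        if pattern.search(query_string):
            return False, f"blocked: matched {pattern.pattern!r}"
    return True, "allowed"

print(inspect_request("id=42"))
print(inspect_request("id=1 UNION SELECT password FROM users"))
print(inspect_request("file=../../etc/passwd"))
```

The hard part, as the post says, is the false-positive side: point this at real day-to-day dev/qa traffic and you quickly learn which "attacks" are actually just your own apps being weird.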
by michael 06.14.11 at 9:23 AM in /general
If you want to read a poorly crafted article, check out this one today from McAfee: Five Simple Steps SMBs Can Take To Prevent A Disastrous Data Breach
. May as well check out these five steps, keeping in mind this is geared to the small/medium business segment.
1. Conduct a Candid Data Quality Assessment -
identifying your data is a noble goal, but as 1 of 5 steps for an SMB to actually prevent a data breach, this item has zero actionable value. And let's just get this out of the way now, even though it permeates the article: Your language is that of an enterprise with a robust security maturity, not an SMB who is going to go, "Huh? Ok...tell me *what* to do."
2. Create a Detailed Description of all Data Touch Points -
Data touch points? Are you kidding me? I understand the point here, despite the lofty enterprise-level wording, but I was hoping by now I'd have seen some mention of patching your systems. Oh, and this is step #2 that isn't actually doing anything; it's just about taking inventory (which itself should just be one bullet point).
3. Conduct Periodic System Reviews -
Another noble item, but for most SMBs, it's about getting things done more so than yanking on the reins and slowing things down to consider the security ramifications of applications that are rolled out. I was really hoping this item would talk about actual periodic system reviews, which itself is so vague as to be useless. Every SMB is just going to "do a system review" that is half-assed, and then say go ahead.
4. Develop Comprehensive and Specific Security Policies -
The first overt bit of upsell for McAfee services. In fact, I'm not even sure what the text has to do with the bullet point, which is useful for a security program, but again doesn't prevent shit. And if anyone is going to write a policy that gathers dust, it will be an SMB.
5. Deploy Comprehensive Solutions -
And here's the big marketing/sales slap to the face. Also, you might as well tell an SMB, "To prevent data breaches buy security tools that prevent data breaches." Yeah, great advice. At any rate, the description given for this monolithic comprehensive security solution means nothing to an SMB and is not actionable. Scales, easy to implement and minimal maintenance, and supports all places where data resides.
My advice on making a better checklist is to drop the enterprise-level lingo and get some actual actionable bullet points. The items have merit, certainly, but are useless to SMBs with bounded time and staff and talent. All of these bits of advice turn into "go-get-em" initiatives that won't go anywhere because they take time, require completeness, and don't even have medium-term results. Sure, the SMB may find out all sorts of things about their data, systems, data touch points, and policies, but none of that actually *does* anything.
So that's it. No advice on patching. Not even some advice on desktop malware protection or even network-layer malware detection (which I was expecting and would have *accepted* coming from McAfee...)
by michael 06.14.11 at 6:37 PM in /general
If you ever have a chance to assist your boss with job interviews to fill a position, I highly recommend taking the opportunity. Maybe I'll expound on it someday, but even for a quiet, slightly asocial (there's a difference from antisocial!) introvert like me, it's a really useful experience.
You get to see what you look for in potential employees, get to see their strengths and weaknesses, their experiences and work history, and see how that applies to your own situation. In a way, that can also build confidence in your own lot in life. You also get to hear your boss talk about the company and the open role in ways you likely haven't heard spoken since your own interviews!
One thing I can attest to is having your resume and/or things you talk about ready to match up to the job position as honestly as possible. And try to stress (if it's true) your own geek-like passion that exists even outside the job. I still really feel someone who does sysadmin stuff or networking stuff or security stuff outside of a paycheck (on their own time) is almost always going to be a superior employee just because of their deep interest and passion. Write your own apps? Stand up your own website? Home phone system is fully Asterisk/VOIP? Show it off!
As far as my own reflection, do I have some action ideas? Sure. I've been at my current position 5 years, and I've gotten a bit lax in attending security conferences and plugging in a cert/study activity here and there as well. I wouldn't mind continuing to demonstrate my involvement and personal learning. Maybe a grad cert, maybe another industry cert, maybe just some continuing education class (like something parallel to a bhusa) either in my field or even completely outside it (foreign language), or even contribute to some other project in our area.
Update: I also want to add, don't wait until you're out of a job or on the way out to do interviews. Feel free to just do them and look around, even if you're not truly looking to move on. Get used to them, use them to get ideas and maybe meet people (central Iowa is NOT a large place to disappear in). Who knows, you might find an awesome deal that you weren't expecting. If you *do* interviews just to do them, though, try not to seem like you're knowingly wasting someone's time. Put forth the real effort and then maybe later just say you've opted to remain where you are. There's a certain level of comfort doing an interview when you don't *need* the job. Be picky with recruiters, though. Too many can't walk the technical talk, and your passion can be lost on them, aka a human keyword filter. And make sure they require your permission before they pass you on.
by michael 06.15.11 at 3:02 PM in /general
A vendor today sent me their "PCI certificate." Turns out this was just a site scan for their external mail server. This is a Google result of what their site certificate looks like (this is just a random Google search result, not my vendor): site certificate
That's pretty damn misleading. But then again, so is the entire SecurityMetrics.com website. Check out their steps to PCI compliance.
Yes, that says 25 minutes to PCI compliance.
If you have desktops that fall under PCI scope, you can buy and run a scan from their website.
Oh shit, someone should tell Steve Gibson to rebrand his ShieldsUp! service.
To at least give the benefit of the doubt, there are some hints that this company actually knows how to do PCI compliance, but the vast majority of their site leads customers down the path of thinking PCI is cheap and easy, takes very little time, and only requires making up answers on a self-assessment questionnaire plus an external vulnerability scan.
This is really the kind of low-bid crap that causes real security to be elusive.
by michael 06.17.11 at 10:19 AM in /general
It's not news that people and employees (or owners!) need to watch what they "say" online. But it's not (quite) every week that you get a perfect high-profile illustration of this advice: Duke Nukem PR agency Redner Group blacklisting venomous Duke Nukem reviews via a Twitter announcement. And then getting fired.
Why is this particularly apt?
- clearly this is something that gets done, just usually not copped to in public!
- done via official company twitter account, not just some marketing moron
- oh, and it was the namesake owner himself who did it
- pr group successfully draws attention to poor duke nukem reviews
- looks like this is their biggest client... oops, was
by michael 06.18.11 at 1:44 PM in /general
(I wanted to spend more time on this post, but my brain hurts now. Keep in mind that I don't have it out for simple or complex passwords; the crux of my post is that neither is de facto better than the other. It all just depends. But if some "normal" person asks me for my advice, I won't say simple passwords are the solution.)
Read and wanted to comment on an article I saw over on Securiteam, but my comment got way longer than I felt like posting, so I figured I'd vomit it out here in full instead. The article, titled "Simple passwords are the solution," made the claim: "The solution is not to make passwords more complex. It's making them less complex (so that users can actually remember them) and making sure brute force is impossible."
You see what Aviram did there? Took a bad statement and clarified it with the better answer in that last phrase. Cute. :) This is a common approach when dealing with users, particularly managers who make decisions. The demand, "I want simpler passwords," is rightfully countered with, "Sure, but in order to do that we need to make sure brute forcing is difficult and cracking is adequately thwarted. Here's what that will cost..."
Let's back up to that first part about simple passwords being the solution and how that relates to the originally-referenced article over on PCPro.co.uk.
That original article is pretty useless, but let me forget that for a moment.
I think there is a problem with saying simple passwords are the solution and complex passwords are bad.
You should be saying: 2F auth is better than complex passwords which itself is better than simple passwords.
If I walk around my business saying simple passwords are better because then you won't have to write them down, I'm spreading a horrible habit for those systems/apps/sites that may only accept a simple password. This sends my users a mixed message, with no upside to it. I'm also oversimplifying the problem. If there's anything at all that turns users off to security, it's the mixed, complex messages we can concoct when we're not careful. If I have to go into a deeply technical discussion about simple vs complex passwords and why one is better than the other in some cases but not others, I've already lost them.
Oh, and what is more of a risk? Someone with physical access to a written-down password, or a digital attack that leverages any weakness in that simple password? I'm not sure I'd even begin to say I have an answer for that...
- brute force the login (effective against simple pw)
- hash/encryption cracking
- long-term reuse once found
- acquiring the password in other ways
- hash reuse (which I won't touch on here)
2F auth really helps all of these cases, which isn't really an argument since I think everyone here can agree to that.
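To put rough numbers on the brute-force item above: the keyspace is just charset size raised to password length, which is why complexity buys time even when nothing else changes. A minimal back-of-envelope sketch (the guess rate is purely an assumed figure for illustration, not a benchmark of any real cracker):

```python
# Back-of-envelope brute-force math: keyspace = charset_size ** length.

def keyspace(charset_size, length):
    """Total candidate passwords an exhaustive search must cover."""
    return charset_size ** length

# Assumed offline guess rate -- illustrative only; real rates vary wildly
# by hash algorithm and hardware.
GUESSES_PER_SEC = 10**9

simple = keyspace(26, 8)    # 8 lowercase letters
complex_ = keyspace(94, 8)  # 8 chars from ~94 printable ASCII

print(f"simple : ~{simple / GUESSES_PER_SEC:,.0f} seconds worst case")
print(f"complex: ~{complex_ / GUESSES_PER_SEC / 86400:,.0f} days worst case")
```

Same length, same attacker; only the character set changed, and the worst case went from minutes to months. That gap shrinks every year as hardware speeds up, which is the article's point, but it never inverts.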
(For this paragraph I may have been distracted by the link to the Password: Impossible article by Aviram.) But password expiry/rotation limits some risk as well. If a password is disclosed, at least the user can change it, or it naturally gets changed at age expiration. Many attacks are point-in-time hacks where a hash gets out or a password is guessed. Clearly, this isn't universal, as an attacker may have another channel to get back in or may perform his attack periodically, but it certainly limits the point-in-time exposures. Still, if a password is disclosed for whatever reason, you want some automatic method to prevent that knowledge from being useful forever.
The article talks entirely about cracking passwords. 2F auth helps avoid that risk, but otherwise fixing things to make cracking much more difficult is a server-side thing and won't affect users (salts, shadow, time-based tokens...) beyond having more complex passwords. The same goes for simply protecting the hashes (but even I assume that will be exposed at some point).
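As a concrete sketch of those server-side fixes (a unique salt per user plus a deliberately slow hash), here's one way it can look using only Python's standard library. The iteration count is an assumption you'd tune to your hardware, not a recommendation, and the function names are mine:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Salted, deliberately slow hash via PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # unique per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=100_000):
    """Recompute and compare in constant time."""
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse")
print(verify_password("correct horse", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))    # False
```

None of this asks anything of the user; the salt kills shared rainbow tables and the iteration count makes each cracking guess cost thousands of hash operations instead of one, even if the hash database leaks.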
The article doesn't make a new argument at all. Cracking like this has been around for 20+ years, it's just faster. That's certainly not news that it is faster today, and doesn't change any answers or risks. It'll be faster tomorrow and it'll be faster in 10 more years. And we're still talking about cracking taking longer with complex passwords than it does with simple ones. We haven't changed that. Sure, we might be talking a few minutes, but that's still a few minutes. Being that I'm not a crypto-geek, I'll have to stay shallow in this topic.
It really all gets back to looking at some core security fundamentals. Is there a perfect answer/silver bullet? No. So does that mean we should be accepting any incremental security measure we can that decreases our risk and makes sense economically? Yes. Simple passwords, complex passwords, and passwords of *any* type are not perfect, but at least they help. (And let's remember that passwords are also still a form of security through obscurity....).
We should keep in mind that "writing down" passwords is the same concept whether you write them down on a post-it note under the keyboard, in a journal in a locked drawer, or in a digital safe application. Yes, some are easier to break into than others, but we're still talking about recording-them-somewhere-because-they're-too-long-to-remember. And if you do that digitally, you actually *might* increase user risk because they have far less chance to memorize the password and may never actually know it. Which sucks when your digital safe is not accessible at some point for whatever reason.
We should also step back and see that there are certainly different assets that passwords are protecting. Should I use 2F auth when commenting on some forum or blog that has their own login I need to use, whose server-side setup I know nothing about? Certainly not, unless I truly value it. Does this mean I should use simple passwords so I don't write them down? Perhaps, especially if I see very little value to an attacker or even myself in that asset. Certainly the answer is not that I have a 2F auth fob for every login I use, and certainly the answer is not some universal solution so I have just one fob but a federated identity for everything (arguable, and I'll let that one just hang there as a wholly different topic).
Just to get back to the main point, saying simple passwords are better is a bad statement, even if I agree with it given qualified scenarios and restrictions.
by michael 06.20.11 at 10:26 AM in /general
Just a quick pointer/bookmark over to a story on risky.biz about distribute.it, which was digitally attacked and is facing its demise because of a lack of offline backups.
I missed the episode, but I like what Patrick quoted from Paul: "We can tell management about the risk all day long and they're not going to believe us until it happens to them. If you told an executive at any one of these companies... They're probably just going to say 'yeah, well we think the business can just recover from that...'"
I like to mention when I pull news off the infosecnews wire! Oh, this is one!
by michael 06.22.11 at 8:32 AM in /general
Put yourself into the position of your CEO. He rubs shoulders at various functions and places with other business owners and CEOs and VIPs. He'd *love* you if he were the one showing off the newest awesome technology to his peers, rather than the one ogling someone else's gear that *their* staff succeeded in implementing. Better yet if he can actually do work conveniently and securely! The same goes when he and 3 competitors are offering presentations to a prospective customer, and he simply has better technology to show off (either on his person or in the demo).
It really comes back to one of those rules of business: always make sure your manager looks good. Don't be the person who makes your manager look bad.
Of course, this whole circular dynamic of managers (consumers) influencing each other and bringing technology ideas into the corporation, thus bogging down IT and security, is a problem. But it's one none of us may end up solving unless we're in a hard-and-fast, regulation-driven organization housed in the Pentagon.
In that case, keep in mind that you can make allies by running ahead of the curve. Just make sure if you stumble a few times, you only scrape your knees rather than take your whole team out of the race.
(I know, it depends on your culture and CEO personality. For many, trying and failing now and then is valuable as opposed to not trying at all. But for some, trying and failing in front of the CEO is just as much a career death knell as anything, no matter how gracefully you handle it.)
by michael 06.22.11 at 4:20 PM in /general
With recent high-profile hacks and "lulz" going around, there has been a marked level of discussion about whether these attacks are useful or damaging, what security is, and why it is failing or not failing. Most of that sort of discussion eventually makes my head hurt, but if there's a blog post worth reading, it's "Take a bow everybody, the security industry really failed this time," by David Maynor over at Errata Security. I wanted to quote something from it, but the whole thing is quotable and discussable.
So, has the security industry failed? I'm not sure. I'm pretty sure the "real" talent in the security industry knows the problems and knows how to fix specific problems, but as Maynor illustrates, these are often just not listened to for various, ultimately economical, reasons.
Is this a problem of the security industry, however? Certainly not entirely. I mean, what are you going to do when someone doesn't have the budget to stop your extravagant attack? What can security do when companies like Ligaxx and SecuxxxxMetxxxs.com do crap work (if work at all) and still get attention because the customer doesn't know better?
I've long said it, but finally the mainstream media is latching onto the infinite amount of drama that can be found in corporate and public digital security. In other words, security won't ever be perfect. There will always be incidents. This means there will always be a fail, which means there will always be juicy, sensational bits of news to throw out. (Granted, my opinion would be even more cemented if any of the recent examples had been really damned good with their security...)
In the end, I really think lots of things are failing, and there's really no answer to fix it.
Perhaps "security" needs to stop looking beyond its own borders. When we talk about security on a global level, ultimately there is nothing to feel good about. When we talk about security in a single organization, you can actually accomplish some damn good stuff.
Perhaps this is a problem illustrated by a three-way tug-of-war. Security vs economics vs convenience. With other actors thrown in, like consumers, greed, knowledge, and so on. There's just no win there, only various points where everywhere is somewhat satisfied according to their own situations.
Perhaps, perhaps. Anyway, I have no answers here. I'm still trying to frame my perspective on things. It's like not knowing if you like a sculpture or not, because you're still trying to figure out how to properly look at it, what lighting, what angle.
I just know there's a heck of a lot to be excited about and a heck of a lot to be upset about. And that itself is exciting and upsetting! (At some point, the disturbing vision of jerking off gloriously while sobbing in utter despair occurred, and that's just not right at all. Yet I felt compelled to share it...hey, I'm in security, I'm not well in the head by default!)
by michael 06.27.11 at 4:14 PM in /general
(I've hesitated posting this, since I'm myself getting sick of just complaining. But sometimes it helps with the thought process...)
So I've regularly been seeing these announcements that the perimeter is porous and users are adopting "cloud" (in the loose definition) services and consumer products to consume corporate data, and how security needs to accept it and start tailoring data-centric controls and architecture to deal with that reality.
That's all great and fine to say, but there's nothing actionable in these postings. Maybe I'm being dense for the moment, but in all these sorts of grand announcements, no one actually seems to have any idea what to actually do. (Or maybe all of the suggestions are at a developer level and involve more people vetting processes/data usage, and more QA controls, and...I hate to say it, but Lord help us if so. In that case, the security team needs to be part-time developers, and have 48-hour days.)
And no, I don't want to hear (yet again) about education and dialogue and threat analysis. Necessary, yes, but there's no real assurances in that. That's like saying we'll implement a firewall by talking about it in a monthly group therapy session. (Update: Ok, maybe education is the only real answer here. I'll accept that if people say it, I guess. I'll just remind them there's still no *real* assurance there, and you don't have enough security staff to watch everyone all the time.)
If a company is using Google docs, I want to know what a security team can do to keep that more secure.
If a company is using the Amazon cloud to deliver part of their web site content, I want to know what a security team can contribute.
If a company is using Github, I want to know what options a company has with their code security.
If some employees were using Dropbox for the past few weeks to back up business-critical files, what can you assure me about their security? (Or do you even have a chance to know someone accessed your files during the "any password accepted" hours?)
If you're allowing your executive teams to use iPads, I want to know what you're doing to assure some security for those users on those devices with the things they access. (Not counting people who only browse the web and check their web mail.)
And I don't want blog comments, I want to actually see industry blog posts that go into realistic detail, ya know? Not because I want someone to do my job for me, but if we can't solve things behind our curtains, we're damn sure not going to solve things in front of the management teams.
I completely buy that the "perimeter" is porous (my coworkers are getting used to my sighs of exasperation as I hear of yet another service that wiggles and persists itself through any and all perimeter controls. [Strangely, all of this is a *product* of perimeter control, ya know? We stopped things, people still wanted them, it evolved. Just like an attacker!]) But so many articles and blog posts include the reverse implication that you need to forget your perimeter and find something else to do. A something else they never define. They just say we need it. Even Neo needed something tangible to wrap his mind around (pun intended) so he could start buying into this paradigm shift.
I know I'm being self-fulfilling in this, but we have a lot of commentary and not a lot of doing these days. I'm painfully aware of my own coasting over the last year or so. Still, I'd rather have a lot of complainers than a lot of people saying vague, general, unactionable things, to be honest.
by michael 06.30.11 at 8:29 AM in /general