.: September 2011 Archives
McKeay wrote a great post about blogging
this past weekend, and I think any security blogger should check it out. I really like his subpoints about blogging and working and balancing both:
I’ve learned a number of lessons about blogging the hard way. I’ve learned that no matter what I think I’m writing, what’s important is how other people are reading it... I’ve realized that people are reading and judging what I write, for good and for ill. And when I write something people read, it can get back to my employer.
I also like this:
More often than not, my employers have maintained an air of benevolent ignorance towards my blog, but every so often I’ve gotten the “we’ve read your blog and are not happy” conversation. Not often, but it has happened and it’s never comfortable talk. I’ve actually told at least one manager that my blog and podcast are more important to me than my job.
For me certainly, blogging is a personal thing, a way to organize my own thoughts, record something for the future, or vent a little bit. It's also a way to dive deeper into what would always be a hobby for me, even if not a job. Even if I didn't have a single reader my blogging habits wouldn't change a bit.
Anyway, here are some points of my own that I try to follow.
1. Separate work from personal if you need to.
This has become a big deal in the past 5 years, as work and play time blend together, largely because something you "say" (digitally) during your personal time can now easily persist for years for people at work to discover. Things you could say with buddies or at a bar used to stay with those buddies, at that bar, at that single point in time; not anymore. Therefore, with blogging especially, I try to keep work separate. I don't hide my identity on here, but likewise I don't advertise my blog to work colleagues (they can easily find it on their own if they want) and I don't mention my employer anywhere on here. I also leave deep personal things aside; the right people reading certain incidents/anecdotes might recognize themselves in them, but I try to make sure they're generic enough, and have enough of a point, to not be uncomfortable. Besides, if I piss someone off, I hope they take my own approach and just move on with life. Being able to agree to disagree is a very useful skill. I like dark grey cars and don't like white cars. You might not agree. And it would be silly to get pissed about that. Same goes for what I post on my blog or elsewhere on the Internet under my screenname.
I admit, my hard divide between work and personal is slowly going away, partly due to my next point, but also partly because blending security work and play is a career goal.
2. Don't present false faces.
I don't like when people "front," or present themselves in a way that isn't in line with who they really are. Life is way too short and precious to not be yourself in anything you do, and if being yourself gets in the way, make changes to be someone better. In that regard, I don't typically pick my words carefully on my blog; if I have an opinion, I'll be out with it. (Though it does help that I'm an easy-going kind of guy anyway...and this is also easy to say for someone who thinks of himself as a very decent guy who is sympathetic to objectivist beliefs...)
3. It's easy to apologize or admit to being wrong.
I don't mean this to sound like a cop-out for bad behavior, but it is easy to apologize for being wrong or to admit it outright. I find it's more important to put your opinions out there and be contritely wrong than to bottle everything up and stew. And that's a tough thing for an INFP to say! (It's also something my risk-averse nature will always fight me on.) Granted, that doesn't mean you can be an asshole, act contrite about it, and have everything be fine...be reasonable!
4. Remember the important things in security: integrity and privacy.
This also applies to IT work in general. We are typically in positions to know very deep secrets and to have (or get) access to very sensitive things. The principles that prevent me from perusing my CEO's mailbox are the same ones that dictate what I divulge on a blog, or anywhere on the Internet. Hopefully most people in white hat security are at least aware of these principles in every facet of their lives.
by michael 09.01.11 at 8:54 AM in /general
Swa Frantzen (ISC) has a great discussion of recent DigiNotar drama
going on. I do take minor exception to this statement:
I for one would love to know who that external auditor was that missed defaced pages on a CA's portal, that missed at least one issued fraudulent certificate to an entity that's not a customer, and what other CAs and/or RAs they audit as those would all loose my trust to some varying degree. This is not intended to publicly humiliate the auditor, but much more a matter of getting confidence back into the system. So a compromise that an unnamed auditor working for well known audit company X is now not an auditor anymore due to this incident is maybe a good start.
I totally understand this sentiment, and actually do agree with it. But we do have to be careful that we don't set every single security auditor/expert up for failure, where one mistake causes the hammer to drop. (Speaking of elephants in rooms, the seeking or assumption of perfection is a 'subtle' one...)
Granted, repeatedly missing defaced pages hits the facepalm category, but I think this oversight (from tripwires on attacks to page inventory reviews to edit/ownership times to web app sec checks, etc) can happen to literally every organization if they're not rigorous in their testing, though it still comes down to knowing what is valuable in the eyes of a threat, and being extra careful around those processes (i.e. issuing a trusted certificate!).
Sitting back and pondering this scenario while nursing some scotch illustrates all sorts of things that are wrong with the security mindset in our world, ya know? Maybe "wrong" is a bad word for it, but rather the challenges we face and will eternally face, as a function of reality.
by michael 09.02.11 at 2:53 PM in /general
Lenny Zeltser goes over 8 tips for detecting website compromise
. There are far too few security writers who have technical chops enough to not stand out amongst a pack of geeks at a security conference, but Lenny is one of the better ones, so any criticism I have for his writing is done very lovingly. Snuggly-like, even. I figured I could make mention of this nice article and give my own reactions.
Lenny starts out with 3 overly broad best practices. Sort of like saying, "If you want to get into shape, you should run more," which *sounds* easy. I take these items the way I suspect he does: you'd be remiss if you left them out, but you can't do them justice without getting far more specific.
1. Deploy a host-level intrusion detection and/or a file integrity monitoring utility on the servers.
- The biggest problem with this (collective) item is how much activity it is going to generate for an analyst. And if you do tune it down to a quiet level, I'd argue you're not going to see what you want. But, as said already, it's a necessary evil for a security posture, whether you like it or not. At a bare minimum, you should know every time a file is changed or a new file appears in your web root (with exceptions for bad apps that need to write temp files and other crap into it, the bane of web root FIM...).
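A minimal sketch of that bare-minimum check, in Python. This is not how a real FIM product (Tripwire, OSSEC, etc.) works internally; it's just the hash-inventory idea, with hypothetical paths and exclusions, run from cron:

```python
import hashlib
import json
import os

# Directories some badly-behaved app insists on writing into (hypothetical).
EXCLUDE_DIRS = {"tmp", "cache"}

def snapshot(root):
    """Walk the web root and SHA-256 every file we care about."""
    hashes = {}
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d not in EXCLUDE_DIRS]
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def compare(baseline, current):
    """Return (added, removed, changed) paths since the baseline run."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in set(baseline) & set(current)
                     if baseline[p] != current[p])
    return added, removed, changed

def run_check(root, baseline_path):
    """Typical cron usage: load saved baseline, report drift, re-save."""
    current = snapshot(root)
    if os.path.exists(baseline_path):
        with open(baseline_path) as f:
            added, removed, changed = compare(json.load(f), current)
        for label, paths in (("NEW", added), ("REMOVED", removed),
                             ("CHANGED", changed)):
            for p in paths:
                print(label, p)
    with open(baseline_path, "w") as f:
        json.dump(current, f)
```

Even something this dumb would have caught a new file dropped into a CA's portal web root; the hard part, as noted, is keeping the noise down.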
2. Pay attention to network traffic anomalies in activities originating from the Internet as well as in Internet-bound connections.
- This part, "Internet-bound connections," really needs to be implemented as soon as possible, before an organization has so much going on that you can't ever close it down without preparing everyone for the inevitable breaking of things no one knew were needed. Watching traffic coming in? Well, that's not so easy, and you'll probably just end up looking for stupidly large amounts of traffic (which may be normal if you service a large client sitting behind 2 proxy IPs) or the most obvious Slapper/Slammer IDS alerts. But you absolutely want to know when 500 MB of traffic just exfiltrated from your web/database server to some destination in Ukraine, and it contained a tripwire entry from an otherwise unused database table. I would drop the buzzword "app firewall" into this item as well. You should also know what is normal coming out of your web farm (DMZ), and anything hitting strange deny rules on the network should be checked into.
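To make the "500 MB leaving the DMZ" case concrete, here's a toy egress check. It assumes you already have flow records summarized into (src, dst, bytes) tuples (from netflow or similar); the subnet and threshold are made-up for illustration:

```python
import ipaddress

DMZ = ipaddress.ip_network("10.10.0.0/24")   # hypothetical web farm subnet
THRESHOLD = 100 * 1024 * 1024                # 100 MB of egress is worth a look

def flag_egress(flows):
    """flows: iterable of (src_ip, dst_ip, bytes_out) tuples.
    Flag large transfers leaving the DMZ for non-private address space."""
    alerts = []
    for src, dst, nbytes in flows:
        src_ip = ipaddress.ip_address(src)
        dst_ip = ipaddress.ip_address(dst)
        if src_ip in DMZ and not dst_ip.is_private and nbytes > THRESHOLD:
            alerts.append((src, dst, nbytes))
    return alerts
```

The real work is building a baseline of what's normal per server, but even a blunt threshold like this beats finding out from a journalist.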
3. Centrally collect and examine security logs from systems, network devices and applications.
- Collect: Yes. Examine: Yes, with managed expectations. I really want to say yes, but having done some of this, 99% of the examined stuff that bubbles up to the top is still not valuable. There's a reason I think gut feelings from admins catch lots of incidents and strangeness on a system/network, and it's not because it shows up clearly in logs. Especially with application logs, if you want them to be trim and tidy, you're looking at a custom solution which includes custom manhours, overhead, and future support resources.
Side note on custom app logs: If you can, log/alert any time a security mechanism routine gets used, for instance if someone attempts to put a ' into a search field (WAF).
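A sketch of what I mean, assuming a hypothetical search handler in your app (the filter pattern here is deliberately crude; real input validation belongs in parameterized queries, not regexes):

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("appsec")

# Input this hypothetical search handler refuses to pass through.
SUSPICIOUS = re.compile(r"['\";]|--|<script", re.IGNORECASE)

def sanitize_search(term, client_ip):
    """Strip suspicious input, but log the attempt before doing so.
    The rejection itself is the signal worth alerting on."""
    if SUSPICIOUS.search(term):
        log.warning("search input tripped injection filter: ip=%s term=%r",
                    client_ip, term)
    return SUSPICIOUS.sub("", term)
```

The point is the `log.warning` call: any time a security routine actually fires, that event should land in your centralized logs, not silently vanish into the sanitizer.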
4. Use local tools to scan web server’s contents for risky contents.
- Certainly should do this as much as you can. You can scan for even deeper things like any new file at all (depending on the level of activity normally on your web root) or files created by/owned by processes you don't expect to be writing files.
5. Pay attention to the web server’s configuration to identify directives that adversely affect the site’s visitors.
- This can be a bit easier in Apache or even IIS 7.0+, but some web servers like IIS 6 are hideous for watching configurations. Still, definitely keep this in mind. Thankfully, on such servers attackers face more hurdles than just writing a file into the web root.
6. Use remote scanners to identify the presence of malicious code on your website.
- I agree with the spirit of this, but I think internal scanners need to be done as well. If a bad file is present but not discoverable from an existing link on your site or easily-guessed name, then an external scan will totally miss it.
I should take a moment to stress that many of these items include signatures or looking for known badness, but none of that should replace actually looking, at some point, at every page you have for things that are clearly bad to a human, such as a defacement calling card or something.
7. Keep an eye on blacklists of known malicious or compromised hosts in case your website appears there.
- It's hard to say this shouldn't be done, but it certainly offers little return on the investment of time. If you *do* happen to show up before any other alarms sound, then clearly it is nice.
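If you do want to automate the blacklist check, the DNSBL mechanism makes it cheap. This sketch uses the Spamhaus DBL as one example of a domain blocklist (Google Safe Browsing and others exist too); the convention is that a listed name resolves, and NXDOMAIN means you're not listed:

```python
import socket

def query_name(name, zone="dbl.spamhaus.org"):
    # DNSBL convention: prepend the name you're checking onto the list zone.
    return f"{name}.{zone}"

def on_blocklist(domain, zone="dbl.spamhaus.org"):
    """True if the domain is listed: a listing resolves to an address,
    while NXDOMAIN means you're (currently) clean."""
    try:
        socket.gethostbyname(query_name(domain, zone))
        return True
    except socket.gaierror:
        return False
```

Run it daily against your own domains; the cost is one DNS lookup each, so the low return on investment at least comes with a low investment.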
8. Pay attention to reports submitted to you by your users and visitors.
- I'd personally overlook this item 9 times out of 10, but it is really oh so necessary.
If I wanted to add an item or two:
9. Scorched earth.
- While not a detection mechanism, you do run certain risks if your web code sits out on the Internet for a long period of time and grows stale. You should refresh from a known good source as often as you think you need to. This can include server configs and the like. (For instance, I roll out web root files every x minutes on my web servers at work, and I regularly wipe out web server configs and rebuild them via automation.) Don't be that company that had a backdoor gaping open to the world for 2 years, when a simple code refresh would have closed the hole. A diff comparison mechanism may satisfy the 'detection' criteria of this list.
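In practice I'd do this with `rsync --delete` from a known-good release copy, but the idea fits in a few lines of Python too. This sketch (hypothetical paths, stdlib only) itemizes drift and then stomps the live root back to the good copy:

```python
import filecmp
import os
import shutil

def drift(good, live):
    """Itemize how the live web root differs from the known-good copy.
    This doubles as the 'detection' half of the idea."""
    report = []
    cmp = filecmp.dircmp(good, live)
    report += [("extra", os.path.join(live, n)) for n in cmp.right_only]
    report += [("missing", os.path.join(live, n)) for n in cmp.left_only]
    report += [("changed", os.path.join(live, n)) for n in cmp.diff_files]
    for name in cmp.subdirs:
        report += drift(os.path.join(good, name), os.path.join(live, name))
    return report

def refresh(good, live):
    """Scorched earth: replace the live root with the known-good copy."""
    shutil.rmtree(live)
    shutil.copytree(good, live)
```

One caveat: `filecmp.dircmp` compares shallowly (stat signatures) by default, so pair it with a hash-based check if you want real assurance that a same-sized, same-timestamp file wasn't tampered with.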
by michael 09.02.11 at 3:42 PM in /general
Courtesy of Mr. Chief Security Monkey (and Defcon), a talk from Jayson Street is available from Defcon 19: "Steal Everything, Kill Everyone, Cause Total Financial Ruin"
I really dig his main point about security looking at not-so-rosy situations, rather than just the cute things and nice things and digital things. Think about kidnapping; poisoning; fires; wanton theft.
by michael 09.05.11 at 3:23 PM in /general
I'm dangerously behind on my CPE earnage. If you see me post more videos and webinars and things like that in the coming months, that's why.
by michael 09.06.11 at 8:41 AM in /general
Lots of talk recently about DigiNotar and Iran. I'd posit this problem is more impacting than people think, but not for reasons that are being bandied about. I don't usually don quite so big a tinfoil hat, but I certainly don't want to act naive about realistic risks. I'll try to keep my statements brief, though a bit rambling.
Hypothesis: Iran made legitimate requests of DigiNotar for certificates. This is normal business for a CA. (This may or may not be true at all, but it still stands to illustrate a point.)
Iran cares about intercepting communications for governmental security purposes.
Every dang nation in the world cares about intercepting communications for governmental security purposes, though in some cases we really hope it is with documented procedures and reasons (i.e. like we hope for the US).
Every CA has a way to request any sort of cert you want, to aid governmental interception. You really think any CA that does business in country X will be able to keep conducting business there if they rebuke the host government? No. (Apply this thinking to things like Skype, or Google's portals for requesting data on people of interest, for some precedent.)
The government(s) isn't going to let there be some completely private global (or even national) means of communication without leaving them the ability to tap into it if needed. I'd posit that this partially explains various not-optimized communications security like CDMA and such.
The web of trust for SSL/CA/web infrastructure is weak, maybe even broken, but that's unfortunately part of the (mostly accidental) design, if you ask me. Granted, this was all devised long ago, when scale wasn't a huge concern: before there were 600 CAs in the world that nearly every browser inherently trusts, because inherent trust is good for business, because it eases user frustration and effort (if you run an e-commerce website, just think how awful it would be to hand-hold every user whose browser doesn't trust everything inherently). Sadly, a "web of trust" is only as trustworthy as its least trusted part, and it only takes one mistake to break it. Maintaining that trust amongst the general public does not outweigh business health/profits.
At some point I have to trust something, because I am not smart enough to really be able to intelligently verify my trust in most things encryption. It's a quandary, certainly.
Getting back to DigiNotar, what's the best way to cover your ass when someone finds out you've been giving shit away to other governments when they force you to or pay you enough? Pre-existing hack proof to give you deniability.
Anyway, that's one way to look at it. Honestly, I'm sympathetic to typical LEO thinking: the simplest solution is almost always the correct one: someone broke into DigiNotar and issued themselves certs. But I'm also sympathetic to the idea that govs require access, even if the common person thinks they're communicating securely.
by michael 09.06.11 at 10:52 AM in /general
One point I've not effectively made that I should before I stop adding nothing to the discussions about CAs and DigiNotar: scope.
It's one thing for this to happen to DigiNotar over in the Netherlands. But think about the impact of this if you live in the Netherlands.
Or what if this had been Network Solutions or Verisign or Thawte? And suddenly browser vendors shunned their roots or CNet and other journalists gave your userbase instructions on shunning root certs. Think of the impact to your users if you run websites, to your own users who browse other websites, and your own desire to buy something off Amazon whose cert may now not be trusted for a few days.
I've seen tons of blog posts and articles explaining how to untrust (block) the DigiNotar roots. But that's a pretty damaging, somewhat "scorched earth," approach to addressing the problem.
Besides which, other than the incident currently happening, why should Network Solutions be given more trust than DigiNotar? Of the 600 CAs, how do you stratify which are better than others?
by michael 09.06.11 at 11:18 AM in /general
by michael 09.06.11 at 4:34 PM in /general
If you read nothing else each week as far as infosec blogs, always check out the weekly Incite at Securosis
and weekly reviews at Infosec Events
. Yeah, it's kinda cheating since both branch out and point elsewhere, but at least it's not nearly as static a list of links as any of our RSS feeds end up being.
Over on the Incite, I particularly like a piece by Rich Mogull which I'll blatantly steal and repost here because, well, it's truth (emphasis is mine):
...But if you want to quickly learn a key lesson, check out these highlights from the investigation report – thanks to Ira Victor and the SANS forensics blog. No logging. Flat network. Unpatched Internet-facing systems. Total security fundamentals FAIL. ...Keep in mind this was a security infrastructure company. You know, the folks who are supposed to be providing a measure of trust on the Internet, and helping others secure themselves. Talk about making most of the mistakes in the book! And BTW – as I’ve said before I know for a fact other security companies have been breached in recent years and failed to disclose. How’s that for boosting consumer confidence? – RM
I've recently been talking about elephants sitting in our infosec rooms. There are a lot of them. The first bit I bolded above is one of them, and I really feel that very few organizations even get the fundamentals started, let alone tight (that's as much a statement of economic reality as a criticism). Still, DigiNotar's state is pretty egregious.*
But Rich's point drives home: DigiNotar is a friggin' security industry company (maybe they forgot that, maybe that should be their mission statement). Yes, utter fail. (Now, back to who audited them in the face of such fail, or who lied to the auditors?)
The second bolded statement is also something I have to reluctantly agree with: reported incidents are just the tip of the iceberg. And we're not talking solely about executive decisions to hush up events for fear of public humiliation, but also middle management and even techies staying quiet about things. I am absolutely not surprised whenever I hear at the bar the inevitable tales from auditors and security folks about incidents that were hushed up or poor security that is hidden with smoke and mirrors.
From top down, this is classic negative conditioning: you get slapped for action X, so you either stop doing or try to hide action X. If you try to stop it, but it costs money that you get slapped for...
* As a bonus discussion, Richard Bejtlich
has been talking a lot recently about threat-centric security vs vulnerability-centric security. DigiNotar is clearly an entity that needs to apply threat-centric principles (who are your threats, what do they want that you have?). But can you do that when you're not even doing the fundamental vuln-centric stuff?**
** For those who've played StarCraft II
, I could use an analogy for you. Perhaps threat-centric security would work, but I feel like it is definitely a sort of "all in" approach you have to take in order to be effective. There's no doing some things here, and some things vuln-centric. You'll just spread your resources too thin and not be good at either side. Sort of similar to multiplayer SC2. You could build a few of every unit, but you're going to get trounced; you really want to focus all of your efforts on one strategy, and adapt/change only as a reaction to what your opponent is doing. <--There's seriously a big blog post comparison waiting to happen there.
by michael 09.07.11 at 9:29 AM in /general
I have two more thoughts on this whole DigiNotar mess before I hopefully never post about it again.
First, DigiNotar gets breached and trust in their process is broken. We shun them like the lepers they are! Earlier this year, RSA gets breached and trust in their process is (arguably) broken. We wring our hands and wait. The reaction to DigiNotar is not scalable.
Sure, it perhaps is the correct approach for various reasons (a- protect yourself, b- give them an economic lesson in the risk of insecurity, c- trust is never "slightly" broken, it's all broken!...), but it just doesn't scale to a more important CA or 3rd-party trust provider.
That bothers me. There are lots of innocent victims of DigiNotar who could have done nothing to prevent this issue or better vet DigiNotar. Is that the fault of the people/orgs who shunned DigiNotar, or the fault of DigiNotar? If we, as reasonable security practitioners, hold fast to the idea that breach is inevitable, then it's the fault of the trigger-happy fingers who shunned them, right? Otherwise, why are we placing trust in anything outside our walls at all?
I'm not entirely sure I buy my own arguments yet, but that'd be discussion-for-thought...
Second, I listened to the Cyber Jungle podcast
(my first time even hearing about them) specifically to hear the interview of Venafi's Jeff Hudson who recommends an SSL Certificate breach response plan (keeping in mind his company offers solutions in this space). I was a bit keen to hear what insight someone might have on such a response plan. His plan (min 27:00) takes three general steps/questions (I'm not sure if he's talking only about SSL certs or more broadly in what he calls your overall 3rd party trust):
1. Who are you using for trust?
2. Where are the certificates?
3. Be ready to replace certificates in response to a problem.
These make sense, but I guess I was already mentally past the first two items and really wanted to hear a strategy for #3. No such luck, and I guess I'm not surprised since that's really the problem.
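For what it's worth, the first two items are at least scriptable. A sketch of a cert inventory pass, using only the stdlib (the sites and the "suspect CA" are whatever applies to you; this only looks at the leaf cert's issuer, not the full chain):

```python
import socket
import ssl

def issuer_org(cert):
    """Flatten getpeercert()'s issuer RDN tuples into a dict, pull the org."""
    issuer = dict(rdn[0] for rdn in cert.get("issuer", ()))
    return issuer.get("organizationName", "unknown")

def cert_issuer(host, port=443, timeout=5):
    """Who signed this site's cert? Step one of 'who are you using for trust'."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return issuer_org(tls.getpeercert())

def flag_sites(sites, suspect_ca):
    """Inventory every site; flag the ones chained to a CA being shunned."""
    report = {}
    for site in sites:
        try:
            issuer = cert_issuer(site)
        except OSError as exc:
            issuer = f"unreachable ({exc})"
        report[site] = (issuer, suspect_ca in issuer)
    return report
```

That gets you the "who" and "where" lists in an afternoon. The third step, actually replacing them quickly, is the part no script solves, as I go on to complain about below.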
At my day job I manage over 100 web sites, most of which have SSL certificates (to keep this simple). If my CA (Network Solutions) happens to get breached and their roots shunned, in the short term I'm fucked no matter what I do or how much I plan. This is because my domains are hosted by Network Solutions, and I cannot buy a certificate for one of those domains from a different provider.* I mean, that's the whole point of making sure certificates are valid! So if tomorrow NetSol is shunned, I have to "quickly" move all my domains elsewhere and initiate the SSL process. By the way, almost all of my certs are EV SSL certs (yes, I hate them) and they're not quick to issue, by design. I'd probably have to downgrade them short-term and then field questions about the lack of pretty green colors in the damn address bars.
And that's just the "simple" 3rd-party trust that is web-borne SSL.
There's really no BCP/DR plan other than having a pre-existing relationship with another CA that you can migrate to quickly. There's no high availability, though, and no quick failover. You also need to at least have a few domains/certs on the second provider so that your staff is used to working with them (and they're used to working with you!), but clearly that increases administrative overhead just a bit.
This gets even worse for those people (not me) who not only use their CA just for domains and certs, but also for their actual hosting. Now there's a nightmare I don't want to imagine!
* Strictly speaking, you can do this, but it illustrates, and puts further pressure on, a flawed process. If I go to an SSL provider and ask them to issue me a cert for a domain hosted by NetSol, their only recourse is to email the publicly listed contact and treat that response as full authorization. This process does not make any reasonable security person feel joyful, and it has been a source of abuse in the past (we're talking reliance on automated processes and/or low-on-the-pay-totem-pole customer support).
by michael 09.08.11 at 8:15 AM in /general
I followed a link to a detailed article on laptop security
. I think everyone should read this article, even if you're not of a mind to go to these technical lengths to protect your device from an attacker. Props to the author for also mentioning browser-borne attacks, as I feel most users far more often catch their own trojans and keyloggers through their own browsing than from any attacker physically planting one.
The steps themselves may seem over-the-top (they fall within the scope of the article title!), but I definitely have to stop and think that there are people who have an expensive laptop as their only device, and they have work/personal stuff on there that is worth money to them and maybe to other people. Me, I would probably write off a stolen laptop, take mental inventory of what I have lost data-wise, and assume that the thief is not someone looking to steal my identity or leverage my browsing history to start SEing me. Honestly, the chances of that happening (and happening to me!) are exceedingly slim. Not because I'm impervious, but because the "common laptop thief" here in Iowa is just looking for a computer to use or to liquidate as quickly and safely as possible. They're not going to whip out the cold boot attack or boot-loaded keylogger. (How come we don't delve into wallet security quite as extravagantly as laptop security? Or home security?)
I also have multiple devices, and partly because of the need to use them all, I don't have my important stuff stored in just one place on an easily-stolen device (ok, that's arguable, but you have to get into my apartment...).
Some of this position is certainly influenced by my enterprise experience. To a business, writing off a laptop expense is nothing compared to the expense of losing a laptop with client-sensitive information stored in the clear on it. Or the loss of the common local admin username/password. Or VPN credentials. The only scalable solution is to make such device loss a simple hardware cost that a business isn't even going to blink twice about.
I will say, though, I still like the idea of a protected USB key as a complement to laptop devices. And I've long since lost any skill I had at creating and maintaining one. */me marks that down as a rainy day project this fall.*
by michael 09.09.11 at 8:27 AM in /general
It's been a while since I shared the monthly Windows patch write-up I typically do for work, so I'll just post it, even though it has a heavy slant towards the server side of things, since that's what I manage. Ok, so this isn't verbatim: I scrub some particulars that apply to my company; specifically, I normally note our risk for each patch and list the actual specific updates I release because they apply, or may some day apply, to us. I should also add that the target audience for this is somewhat technical, but not really other server administrators; more like other IT staff and managers. These are also largely written for my own notes, so I know what is being changed in our environment. I pull all actual updates straight from WSUS syncs.
And for the record, the new look of the Microsoft bulletin pages looks lame. Also, one of the very few months we don't have any IE patches. Strange.
Further information on patches can be found at isc.sans.org or eEye.
SEPTEMBER SECURITY UPDATES
MS11-070 Vulnerability in WINS Could Allow Elevation of Privilege (2571621)
An attacker with a valid login could send a specially-crafted WINS packet to a listening WINS server (loopback interface only) and exploit a local escalation of privilege vulnerability. This update fixes that vulnerability, and should be considered critical to install on any servers with WINS listening.
MS11-071 Vulnerability in Windows Components Could Allow Remote Code Execution (2570947)
This update fixes the way Windows may load nearby malicious DLL files (DLL linking vulnerability) when opening .txt, .rtf, or .doc files over a network share or WebDAV connection. This isn't a big deal from an external attacker perspective since we block SMB and WebDAV traffic from exiting our network, but this type of vulnerability is still very important if not critical to get patched on systems, partly because of the ubiquitous nature of .txt and .doc files in a typical enterprise network, but also the commonly-held assumption that .txt files are "safe." The details of this vulnerability were made public this past month. It is interesting that this patches core Windows components and not software that typically reads these files, like Microsoft Office, Wordpad, or Notepad.
MS11-072 Vulnerabilities in Microsoft Excel Could Allow Remote Code Execution (2587505)
This update fixes 5 issues with how Microsoft Excel opens specially crafted files. This update should only apply to a handful of servers that have Microsoft Excel or Office components installed.
MS11-073 Vulnerabilities in Microsoft Office Could Allow Remote Code Execution (2587634)
This update fixes 2 issues in Microsoft Office, one that loads nearby DLL files when opening other files (DLL linking vulnerability), and another that deals with how Office opens specially crafted Word files.
MS11-074 Vulnerabilities in Microsoft SharePoint Could Allow Elevation of Privilege (2451858)
This update fixes 5 issues found in Microsoft SharePoint, all generally affecting the web interface and behavior of a SharePoint installation (XSS, script injection, and file disclosure).
MISCELLANEOUS SECURITY UPDATES
DigiNotar fraudulent root certificate revocations
In the past few weeks, a security incident has been discovered with a Dutch Certificate Authority company, DigiNotar, in which malicious hackers were able to get fraudulent SSL certificates issued. These certificates were issued using widely-trusted DigiNotar root certificates. These updates revoke the trust that Windows (and Internet Explorer) had in place for the affected DigiNotar root certificates. Not trusting these certs should have no impact to us, as we have no relationship to DigiNotar or any of their customers. This largely is a client/workstation sort of update, rather than servers, but does still apply.
by michael 09.13.11 at 9:39 PM in /general
If you want to get a toe into the world of analyzing malicious PDF files, check out this analysis walkthru.
Now, if you *really* want to know what the resultant code does, you'll need a bit of Assembly/shellcode knowledge, process debugging, and probably access to vulnerability/exploit resources to see common exploits and leveraged vulns. More than likely, you just need to investigate a PDF enough to get some good strings to search for known malware.
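Even before strings extraction, a quick triage pass helps decide whether a PDF deserves a deeper look. This sketch counts suspicious name objects, in the spirit of Didier Stevens' pdfid tool (the marker list is my own shortlist, and note that obfuscated PDFs can hex-encode names like `/#4AavaScript` to dodge naive matching like this):

```python
import re

# Markers that warrant a closer look in a PDF.
SUSPICIOUS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
              b"/Launch", b"/EmbeddedFile", b"/RichMedia"]

def triage(path):
    """Count suspicious name objects in a PDF: a rough first pass
    before pulling apart streams in earnest."""
    with open(path, "rb") as f:
        data = f.read()
    counts = {}
    for marker in SUSPICIOUS:
        # Negative lookahead so /JS doesn't also count every /JavaScript hit.
        hits = len(re.findall(re.escape(marker) + rb"(?![A-Za-z])", data))
        if hits:
            counts[marker.decode()] = hits
    return counts
```

A PDF with `/OpenAction` plus `/JavaScript` isn't automatically malicious, but it's exactly the kind of file the walkthru above teaches you to dissect.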
Follow links on that blog plus others in the posts to web your way through various other analyses by various other people.
by michael 09.14.11 at 2:48 PM in /general
From the "we're too small/it won't happen to us" file (and via infosecnews
) comes this article about a crew of cyber-thieves who would break into business wireless networks or even physical buildings
to do some digital mischief and steal money. This article seems well-written, and here are some key points I want to highlight:
The indictment accused the men of "wardriving" — cruising in a vehicle outfitted with a powerful Wi-Fi receiver to detect business wireless networks. They then would hack into the company's network from outside, cracking the security code and accessing company computers and information.
Another way to say it, random guys wardrive and find random wireless networks to attack. And they do so!
In other cases, they would physically break into the company and install "malware" on a computer designed to "sniff out" passwords and security codes and relay that information back to the thieves.
Physically break into a business, and plant malware or other devices to try to get at juicier loot. That's a pretty big deal and hard to find if you're not specifically looking for something like that after a break-in.
It also means you have some decently intelligent criminals who aren't necessarily doing what usually gets thieves caught: liquidating their loot or associating with other criminals. And they also can be pretty random with their attacks while they wardrive. Intelligent, random criminals with few opportunities to get caught until after the fact are a typical nightmare for LEO.
As this next blurb says, debit cards and online purchases and things that make our lives convenient also make criminal lives convenient:
"Everything that makes it easy for us to do our business online makes it easy for them to commit crimes online," Durkan said.
I also like this:
At Wednesday's news conference, representatives from three of the victim businesses explained how they believed their networks were secure and how quickly the thefts occurred.
I strongly suspect all of the victims were small enough to have no security role in their business, and likely no security interests beyond what employees picked up in consumerland and the default physical security from their lessors.
The only way to fix that is continued proactive education and, unfortunately, examples and lessons from other victims. I'm not about to say they need to create a security role or get an in-house security expert, and maybe not even a high-end pen-test, but rather just pick up a local security expert for some verbal consultation and some technical chops to do small-time assessments and fixes. That's really all it takes to keep a business from being the easiest target on the block.
Also, don't skip over the sidebar in the article, which contains some helpful tips. I'm actually a bit surprised by a few of them, as they're good! (You can, however, skip over the comments, because they'll make you feel dumber for having read them.)
by michael 09.23.11 at 9:49 AM in /
I read this SearchSecurity article on the tension between CISSP value and security industry growth
. Disclaimer: I'm a CISSP-holder.
“I need to find 2 million people in three years to come close to meeting the expected need,” [(ISC)2’s Executive Director] Tipton said in reference to the information security-related job growth his organization forecasts.
I read that and my first reaction was, "That's not your problem."
*You* don't need to *find* these job-fillers. *You* need to just keep certifying *qualified* people to hold your certification. There's an extremely subtle difference there. A difference that isn't so subtle once it permeates years of effort and turns things into, well, today's watered-down certification, where I see very basic questions coming from CISSP-holders, as well as a plain lack of knowledge and value from many. I hear, constantly, tales of people getting a CISSP just because they need to for maybe a sales role or something. And it's simply possible to do that, with a book-based test.
Thankfully, McKeay essentially echoed my sentiments:
“But the CISSP doesn’t really meet that need because it’s not training per se for any particular discipline,” McKeay added. “It’s simply a way of registering people who have learned enough to pass a test, not necessarily learned enough to do a particular job or even be successful.”
I really think this is a problem where greed is a key factor, where capitalistic growth is the default goal of a business: if you're not growing revenues and fattening pockets, then you're failing. A non-profit (yeah right) like ISC2 should *not* actually be interested in growing numbers for any artificial reason. It should be just fine and dandy with maintaining a status quo of incoming cert-holders. If it *needs* to grow revenues, perhaps look into sanctioned training in security topics (though that might put it in direct competition with places like SANS, which is sort of a good thing). It's also not like the CISSP needs to gain credibility; it's *had* that for years, and ISC2 doesn't quite seem to understand how that credibility is going to erode (much like Microsoft certs did).
by michael 09.23.11 at 11:27 AM in /general
Schuyler Towne has shared a massive 24-part lock picking series
on YouTube. Check out the first one, and the rest will auto-play.
by michael 09.24.11 at 9:02 PM in /general
I've been silently
musing over Alan Shimel's recent post about optimism in security
(btw, *love* me some Louis CK!). Then I saw Securosis mention it
, and I thought I'd echo some thoughts out.
I could rant a lot about this and make a long post, but not only would I add nothing new, I'm sure I've said it all here before anyway, and I agree with both Rothman and Shimel above, for the most part.
What I will say, however, is that optimism/pessimism is a relative thing, and it depends on how you define your happiness. Which in turn depends on how you view your current position in relation to your goals. I think way too often security folks don't think about their happiness and goals consciously enough. They just want perfect security and solutions and get upset (deeply) when it doesn't happen, or can't happen. It's fine to hit that wall and be frustrated, but you have to accept that that is our reality and not let it define your underlying happiness. Strive for more, but be happy with where you are. There are endless cliches on this sentiment, such as stopping to smell the roses, or life's a journey, etc.
I for one have no problem going to a conference and bitching, sharing war stories, drinking frustrations away, and being generally pessimistic. I'd rather do that than pretend everything is shiny and happy and sit back and pat our own backs. That's fine, but one approach will more probably result in steps forward, while the other is really not going to result in progress. I know that might cut against Shimel's point about celebrating our victories and being enthused about how far we've come in such a short period of technological change.
My own philosophy on happiness (which is sort of influenced by Randian objectivism, though maybe not too obviously from this simplification): Either you're happy or you're not. If you're not happy, change things to attain that happy state. If you're unable or unwilling to make those changes, then you *must* change your viewpoint such that you become happy.
Take for instance a minivan driver. He wants to drive his minivan like a sports car, but it's just not built for that, so he's not happy. He has two options: buy a car that suits his wants, or change his viewpoint to become happy with the minivan, i.e. stop driving like it's something that it's not, and enjoy it for what it is and the things it does well. The worst outcome is to do nothing and remain unhappy. More people in security (and in general everywhere) really need to put more conscious thought into their fundamental happiness, which goes deeper than point-in-time moments of celebration and joy.
Personally, I find the angry, pessimistic state of security comforting, and it actually does make me happy.
As a parting philosophical shot, I will say just be happy with the world around you right now. Enjoy our progress and enjoy nature at every moment you can.
by michael 09.28.11 at 8:31 AM in /general
If anyone has any suggestions on this topic, please comment or tweet or email me!
On page 10 of the PCI DSS v2.0 document, before the actual requirements, there is a section on determining the scope of an assessment, which includes these lines:
The first step of a PCI DSS assessment is to accurately determine the scope of the review. At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope. To confirm the accuracy and appropriateness of PCI DSS scope, perform the following:
- The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined cardholder data environment (CDE)...
The key word in that whole part is that pesky "should." As written, that word makes this an unnumbered unrequirement. In my case, my particular QSA has opted to make this a requirement of the scope, i.e. I need to scan my entire network for stray bits of cardholder data.
Let me say I completely agree with this need. There is everything to gain from a scan like this. Not only should it be necessary, but having the ability to perform a scan like this would mean being able to leverage it for other purposes, like client-specific data, porn (conditionally), or anything else hiding in places it shouldn't be.
But this isn't a small deal (Windows servers, Linux servers, file servers, encoded files, databases, workstations, email servers...), and I don't know of any tools that actually do all of this, short of buying into a DLP product whose first phase of implementation probably involves exactly this task: scan everything to see what needs protecting. That's a heavy pill (full DLP licensing cost) to swallow for just one task (the initial scan). I'm actually quite amazed that DLP providers aren't yet offering this as a standalone service/product.
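For what it's worth, the core of such a discovery scan is conceptually simple, which makes the tooling gap more frustrating. A minimal sketch (my own illustrative regex and demo strings, not how any particular product does it): find digit runs of plausible PAN length, then keep only those that pass the Luhn check:

```python
import re

# Candidate: 13-16 digits, optionally separated by spaces or dashes
CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn mod-10 check: doubles every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text: str):
    """Yield substrings that look like PANs and pass the Luhn check."""
    for m in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            yield m.group()

if __name__ == "__main__":
    # 4111... is a well-known test PAN; the 13-digit run fails Luhn
    print(list(find_pans("order notes: card 4111 1111 1111 1111, ref 1234567890123")))
    # -> ['4111 1111 1111 1111']
```

The hard 90% that this sketch ignores, and that the tools below wrestle with, is walking every file system, encoding, archive, and database column across hundreds of hosts without hanging or drowning you in false positives.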
I have stuck my fingers into a few tools, and so far none are satisfactory. Disclaimer: I have only done *extremely* limited testing, and have not even begun to tackle the database aspect.
PANBuster recently hit the blog posts, though everyone regurgitates the same old intro blurbs without any real details. PANBuster is a small non-installed exe file that you can run on the command line of a system, and it will scan a target file or path for PAN data. The scan is quicker and more lightweight than other options, but the results haven't been all that exciting, as I find more hits with other tools (both false and potential positives). The biggest drawback, however, is the lack of any UNC or network path support. Extreme bummer. Scripting would probably mean interrogating servers for all physical drives and remotely executing the exe on each. Really messy.
Spider from Cornell
(currently Spider4 aka Spider2008) is a tool that can be installed and run from a local GUI, but it can also be command line-driven. Executing a scan via the command line is a bit tricky, but it certainly can be done. A subsequent unattended scan will not succeed unless you do some magic (ok, you delete the locally saved scan-state file) each time. Configuration can be governed by an XML file, but the values are arcane at best (wtf does option 1048 mean?) and not documented. The fat GUI app actually launches even when driven from the command line, and then exits out. Any strangeness and it'll sit there waiting for an operator to click an "Ok" button.
On the plus side, Spider *can* technically be scripted, and I already have a plan of action to do so with PowerShell. It will save hits to a discrete log (the file names and paths, but not the actual hit data; that can be saved in an encrypted local database). It can also scan UNC paths, including admin shares with the proper permissions.
That alone is a huge plus.
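To give a flavor of that plan (sketched in Python rather than PowerShell, and with every path, flag, and file name being a hypothetical placeholder, not Spider's real switches): build per-host admin-share scan commands and clear the saved scan-state file between runs so unattended rescans don't bail out:

```python
import os

SPIDER_EXE = r"C:\Tools\Spider\spider.exe"  # hypothetical install path
STATE_FILE = r"C:\Tools\Spider\spider.sav"  # hypothetical scan-state file

def admin_share(host: str, drive: str = "C") -> str:
    r"""Build the UNC admin-share path for a host, e.g. \\HOST\C$."""
    return rf"\\{host}\{drive}$"

def build_commands(hosts):
    """Return the per-host command lines a scheduler would run in sequence."""
    return [[SPIDER_EXE, "/scan", admin_share(h)] for h in hosts]

def clear_state():
    """Delete the local scan-state file so the next unattended run starts clean."""
    if os.path.exists(STATE_FILE):
        os.remove(STATE_FILE)

if __name__ == "__main__":
    # Print the planned commands rather than executing anything
    for cmd in build_commands(["FILESRV01", "MAILSRV01"]):
        print(" ".join(cmd))
```

A real wrapper would run each command with proper credentials, call clear_state() between hosts, and log failures so a month-long scan doesn't silently hang.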
On the negative side, scans are long and can include tons of hits, the tool has no scan result management at all, and it really doesn't make me feel very warm. I'd expect a month of execution to scan my network, and I'd have to constantly check it to make sure it's not hung on something.
SENF is a tool from UTexas that I've not tried out extensively yet. Like Spider, it is made for educational institutions, where the institution holds system users responsible for the data on them, and thus provides the tool plus instructions so they can scan their own systems and send in reports. SENF is written in Java, which doesn't excite me, and none of the literature appears to support UNC or network-bound scanning of any type. Examples of use are few and far between, and the tool does not come with predefined regular expressions...
Tools like CardRecon
are commercial tools, but just fill the same need as the above options: scanning a discrete single machine and/or local drives. I'm not about to install an agent or tool on 500+ workstations and 200+ servers if I don't have to.
DLP vendors pretty much universally tout their first phase of deployment as automated discovery of sensitive information that then needs protection. I've not seen more than limited demos of DLP solutions, so I can't comment on them, but the capital outlay for something to fill this one need is annoying. Still, I'm close to actually going through the motions to get some ideas on how they solve this issue.
Forensics tools like EnCase
can also help in this regard, but are expensive and also not specifically tailored for network scanning; again they're a bit more suited to discrete system scanning.
Questions to peers have yielded zero actionable answers. The end result so far is my own conclusion that no one is actually scanning their whole network to validate their expected scope, and this need remains unfulfilled.
by michael 09.29.11 at 4:20 PM in /general